Sample records for automatic registration method

  1. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
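
    The marker-based workflow in this record reduces, at its core, to a least-squares rigid fit between corresponding fiducial positions in patient (tracker) space and image space, followed by an error measurement at target points. The sketch below is not the authors' implementation; it is a minimal illustration of such a fit (Kabsch/SVD) and of an RMS registration error, and all point arrays in the example are hypothetical.

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch/SVD)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)                      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    def rms_error(R, t, pts_src, pts_dst):
        """RMS distance between mapped and reference points; evaluated on held-out
        targets this corresponds to a target registration error (TRE)."""
        mapped = pts_src @ R.T + t
        return float(np.sqrt(np.mean(np.sum((mapped - pts_dst) ** 2, axis=1))))

    # Hypothetical data: four fiducial centroids localized in patient (tracker) space
    # and the corresponding centroids found automatically in image space.
    rng = np.random.default_rng(0)
    theta = np.deg2rad(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([5.0, -2.0, 10.0])
    fid_patient = rng.random((4, 3)) * 100.0
    fid_image = fid_patient @ R_true.T + t_true + rng.normal(0.0, 0.2, (4, 3))

    R, t = rigid_fit(fid_patient, fid_image)
    print("residual RMS error [mm]:", rms_error(R, t, fid_patient, fid_image))
    ```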

  2. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions.

    PubMed

    Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R

    2017-06-01

    To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.

  3. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl

    Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.

  4. Automatic Registration of GF4 Pms: a High Resolution Multi-Spectral Sensor on Board a Satellite on Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing process in the application of GF4 PMS image. The method of geometric correction that is based on the manual selection of geometric control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary for us to identify the best combination of parameters and steps. This study mainly focuses on the following issues: necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, base band in the automatic registration and configuration of GF4 PMS spatial resolution.

  5. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
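
    The matching step described here searches exhaustively over small attitude corrections and scores how well the LiDAR skyline points project onto the skyline pixels. The following is a minimal sketch of such a brute-force search, not the paper's code: `project` is an assumed placeholder for the panoramic/fish-eye projection of the rigorous registration model, and the search range and step are arbitrary choices.

    ```python
    import numpy as np
    from itertools import product

    def rotation_matrix(roll, pitch, yaw):
        """Rotation built from roll/pitch/yaw attitude angles (radians)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def brute_force_attitude(skyline_points, skyline_mask, project,
                             search_deg=1.0, step_deg=0.1):
        """Grid-search small attitude corrections; the score counts projected skyline
        points that land on skyline pixels. `project` is an assumed placeholder that
        maps corrected 3D points to (u, v) pixel coordinates of the panoramic image."""
        angles = np.deg2rad(np.arange(-search_deg, search_deg + 1e-9, step_deg))
        h, w = skyline_mask.shape
        best, best_score = (0.0, 0.0, 0.0), -1
        for roll, pitch, yaw in product(angles, repeat=3):
            uv = project(skyline_points @ rotation_matrix(roll, pitch, yaw).T)
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            score = int(skyline_mask[v[ok], u[ok]].sum())
            if score > best_score:
                best, best_score = (roll, pitch, yaw), score
        return best
    ```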

  6. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.

  7. Automatic allograft bone selection through band registration and its application to distal femur.

    PubMed

    Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui

    2017-09-01

    Clinical reports suggest that large bone defects could be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. In addition, there is strong demand for automatic allograft bone selection methods, as they could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method without using the contralateral bones. Firstly, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, could involve more local structure of the defected segment. Therefore, our method could achieve robust alignment and high registration accuracy of the allograft and recipient. Moreover, the existing contour method and surface method could be unified into one framework under our method by adjusting the lift and lower distances of the cutting planes. Finally, our method has been validated on the database of distal femurs. The experimental results indicate that our method outperforms the surface method and contour method.

  8. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate and robust registration method that yields satisfactory registration results.
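
    The thin-plate spline interpolation mentioned in this abstract maps one landmark set onto another by solving a small linear system; adding a regularization term on the kernel diagonal relaxes exact interpolation into a smoothed approximation, which is the kind of trade-off this record addresses. A minimal 2D sketch of a generic TPS (not the authors' optimized variant) follows.

    ```python
    import numpy as np

    def tps_kernel(r2):
        """TPS radial basis U(r) = r^2 log(r^2), with U(0) = 0 by convention."""
        out = np.zeros_like(r2, dtype=float)
        nz = r2 > 0
        out[nz] = r2[nz] * np.log(r2[nz])
        return out

    def tps_fit(src, dst, lam=0.0):
        """Fit a 2D thin-plate spline mapping src landmarks to dst landmarks.
        lam > 0 relaxes exact interpolation into a smoothed approximation."""
        n = src.shape[0]
        r2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
        K = tps_kernel(r2) + lam * np.eye(n)
        P = np.hstack([np.ones((n, 1)), src])
        L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
        Y = np.vstack([dst, np.zeros((3, 2))])
        return np.linalg.solve(L, Y)                 # (n+3) x 2 spline coefficients

    def tps_apply(W, src, pts):
        """Warp arbitrary points with the coefficients returned by tps_fit."""
        r2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
        A = np.hstack([tps_kernel(r2), np.ones((pts.shape[0], 1)), pts])
        return A @ W
    ```

    With lam = 0 the spline passes exactly through the landmarks; increasing lam trades landmark fidelity for smoothness of the deformation field.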

  9. Automatic Marker-free Longitudinal Infrared Image Registration by Shape Context Based Matching and Competitive Winner-guided Optimal Corresponding

    PubMed Central

    Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng

    2017-01-01

    Long-term comparison of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, for which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to the body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared images during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular intersection detection method and establishes shape context feature descriptors to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel means of extracting a greater amount of useful data from infrared images. PMID:28145474

  10. Ultrasound fusion image error correction using subject-specific liver motion model and automatic image registration.

    PubMed

    Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi

    2016-12-01

    Ultrasound fusion imaging is an emerging tool and benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) Build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the error correction dependency on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Automatic alignment of pre- and post-interventional liver CT images for assessment of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Wirtz, Stefan; Strehlow, Jan; Zidowitz, Stephan; Bruners, Philipp; Isfort, Peter; Mahnken, Andreas H.; Peitgen, Heinz-Otto

    2012-02-01

    Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to be aligned to compare the shape, size, and position of tumor and coagulation zone. In this work, we present an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is computed in a local region of interest around the pre-interventional lesion and post-interventional coagulation necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration step, the centers of gravity from both lesions are aligned automatically. The subsequent rigid registration method is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four medical experts. Furthermore, the registration results are compared with ground truth transformations based on averaged anatomical landmark pairs. In the evaluation, we show that our method allows automatic alignment of the data sets with accuracy equal to that of medical experts, while requiring significantly less time and showing less variability.
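
    Two of the steps described here are easy to make concrete: aligning the lesion/coagulation centers of gravity for initialization, and evaluating an intensity similarity inside a registration mask. The sketch below uses a plain masked normalized cross correlation as a simplified stand-in for the paper's Local Cross Correlation, and the mask and spacing arrays are hypothetical inputs.

    ```python
    import numpy as np
    from scipy import ndimage

    def centroid_translation(mask_pre, mask_post, spacing_mm):
        """Initial translation (mm) aligning the centers of gravity of the lesion
        and coagulation masks; masks and voxel spacing are hypothetical inputs."""
        c_pre = np.array(ndimage.center_of_mass(mask_pre)) * spacing_mm
        c_post = np.array(ndimage.center_of_mass(mask_post)) * spacing_mm
        return c_post - c_pre

    def masked_ncc(fixed, moving, roi_mask):
        """Normalized cross correlation restricted to a registration mask; a global,
        simplified stand-in for the paper's Local Cross Correlation measure."""
        a = fixed[roi_mask].astype(float)
        b = moving[roi_mask].astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0
    ```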

  12. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.

    PubMed

    Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit

    2018-01-01

    The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

  13. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration formulated as a Markov Random Field (MRF) model, while efficient linear programming is employed for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in a few minutes and the latter in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the experiments performed.

  14. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2017-02-01

    Medical image registration establishes a correspondence between images of biological structures and it is at the core of many applications. Commonly used deformable image registration methods are dependent on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is however important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and they are used to compute a smoothing thin-plate splines transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.

  15. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery.

    PubMed

    Wein, Wolfgang; Karamalis, Athanasios; Baumgartner, Adrian; Navab, Nassir

    2015-06-01

    The transfer of preoperative CT data into the tracking system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft tissue aware intensity-based registration to provide higher local registration accuracy. We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. Global and fully automatic registration of bones acquired with ultrasound to CT is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.

  16. Automatic localization of the da Vinci surgical instrument tips in 3-D transrectal ultrasound.

    PubMed

    Mohareri, Omid; Ramezani, Mahdi; Adebar, Troy K; Abolmaesumi, Purang; Salcudean, Septimiu E

    2013-09-01

    Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical system is the current state-of-the-art treatment option for clinically confined prostate cancer. Given the limited field of view of the surgical site in RALRP, several groups have proposed the integration of transrectal ultrasound (TRUS) imaging in the surgical workflow to assist with accurate resection of the prostate and the sparing of the neurovascular bundles (NVBs). We previously introduced a robotic TRUS manipulator and a method for automatically tracking da Vinci surgical instruments with the TRUS imaging plane, in order to facilitate the integration of intraoperative TRUS in RALRP. Rapid and automatic registration of the kinematic frames of the da Vinci surgical system and the robotic TRUS probe manipulator is a critical component of the instrument tracking system. In this paper, we propose a fully automatic registration technique based on automatic 3-D TRUS localization of robot instrument tips pressed against the air-tissue boundary anterior to the prostate. The detection approach uses a multiscale filtering technique to identify and localize surgical instrument tips in the TRUS volume, and could also be used to detect other surface fiducials in 3-D ultrasound. Experiments have been performed using a tissue phantom and two ex vivo tissue samples to show the feasibility of the proposed methods. Also, an initial in vivo evaluation of the system has been carried out on a live anaesthetized dog with a da Vinci Si surgical system and a target registration error (defined as the root mean square distance of corresponding points after registration) of 2.68 mm has been achieved. Results show this method's accuracy and consistency for automatic registration of TRUS images to the da Vinci surgical system.
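
    A multiscale filtering approach of the kind described, for finding bright, compact structures such as instrument tips in a 3D ultrasound volume, can be sketched with a scale-normalized Laplacian-of-Gaussian blob detector; this is an illustrative stand-in rather than the authors' filter, and all parameter values are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_tip_candidates(volume, sigmas=(1.0, 2.0, 3.0), n_peaks=2):
        """Multi-scale blob detection (scale-normalized Laplacian of Gaussian) to
        localize bright, compact structures such as instrument tips in a 3D volume.
        Returns the voxel coordinates of the strongest responses."""
        combined = None
        for s in sigmas:
            # negative LoG responds positively to bright blobs; s**2 normalizes across scales
            resp = -(s ** 2) * ndimage.gaussian_laplace(volume.astype(float), sigma=s)
            combined = resp if combined is None else np.maximum(combined, resp)
        peaks = combined == ndimage.maximum_filter(combined, size=5)   # local maxima
        coords = np.argwhere(peaks)
        order = np.argsort(combined[peaks])[::-1][:n_peaks]
        return coords[order]
    ```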

  17. Reproducibility measurements of three methods for calculating in vivo MR-based knee kinematics.

    PubMed

    Lansdown, Drew A; Zaid, Musa; Pedoia, Valentina; Subburaj, Karupppasamy; Souza, Richard; Benjamin, C; Li, Xiaojuan

    2015-08-01

    To describe three quantification methods for magnetic resonance imaging (MRI)-based knee kinematic evaluation and to report on the reproducibility of these algorithms. T2-weighted, fast-spin echo images were obtained of the bilateral knees in six healthy volunteers. Scans were repeated for each knee after repositioning to evaluate protocol reproducibility. Semiautomatic segmentation defined regions of interest for the tibia and femur. The posterior femoral condyles and diaphyseal axes were defined using the previously defined tibia and femur. All segmentation was performed twice to evaluate segmentation reliability. Anterior tibial translation (ATT) and internal tibial rotation (ITR) were calculated using three methods: a tibial-based registration system, a combined tibiofemoral-based registration method with all manual segmentation, and a combined tibiofemoral-based registration method with automatic definition of condyles and axes. Intraclass correlation coefficients and standard deviations across multiple measures were determined. Reproducibility of segmentation was excellent (ATT = 0.98; ITR = 0.99) for both combined methods. ATT and ITR measurements were also reproducible across multiple scans in the combined registration measurements with manual (ATT = 0.94; ITR = 0.94) or automatic (ATT = 0.95; ITR = 0.94) condyles and axes. The combined tibiofemoral registration with automatic definition of the posterior femoral condyle and diaphyseal axes allows for improved knee kinematics quantification with excellent in vivo reproducibility. © 2014 Wiley Periodicals, Inc.

  18. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    NASA Astrophysics Data System (ADS)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually requires artificial markers. It is a rather challenging task to locate the accurate position of the points and obtain accurate homonymy point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered using a TPS (thin plate spline) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments are designed to evaluate the distribution of the point set and the correct matching rate on synthetic data and real data, respectively. The last experiment is designed on non-rigid deformation remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.

  19. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.

  20. An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei

    2013-06-01

    Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and guidance of surgery. Because of its low signal/noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which helps handle the speckle noise and preserve the geometric continuity of US images. In the experiment, a series of US images and several similarity measure metrics are utilized for evaluating the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, quickly, and automatically, and is robust to noise.
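
    For context, the classic demons update that this paper builds on estimates a displacement at each pixel from an optical-flow-like force and then smooths the field; the authors' additional "inertia force" from the Moore neighborhood is omitted in the minimal sketch below.

    ```python
    import numpy as np
    from scipy import ndimage

    def demons_register(fixed, moving, iters=100, sigma=1.5):
        """Basic Thirion demons for 2D images: per-pixel optical-flow-like force,
        Gaussian regularization of the displacement field. Returns (u, v) with u the
        column (x) displacement and v the row (y) displacement."""
        fixed = fixed.astype(float)
        moving = moving.astype(float)
        gy, gx = np.gradient(fixed)                          # gradients of the fixed image
        yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
        u = np.zeros_like(fixed)
        v = np.zeros_like(fixed)
        for _ in range(iters):
            warped = ndimage.map_coordinates(moving, [yy + v, xx + u],
                                             order=1, mode='nearest')
            diff = warped - fixed
            denom = gx ** 2 + gy ** 2 + diff ** 2
            denom[denom == 0] = 1.0
            u += -diff * gx / denom                          # demons force
            v += -diff * gy / denom
            u = ndimage.gaussian_filter(u, sigma)            # smooth the field
            v = ndimage.gaussian_filter(v, sigma)
        return u, v
    ```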

  1. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration results between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).

  2. Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2015-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
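
    As a rough illustration of feature-based matching of this kind, the sketch below extracts band-pass "edge" features from the coarsest wavelet detail subbands (PyWavelets) and estimates a translation between the feature images by phase correlation; the actual algorithm uses shearlet and wavelet feature cascades with least-squares minimization, so this is only a simplified stand-in with assumed parameter values.

    ```python
    import numpy as np
    import pywt

    def wavelet_edge_features(image, wavelet='db2', level=3):
        """Band-pass 'edge' feature image: combined magnitude of the detail subbands
        at the coarsest decomposition level (a simplified stand-in for the paper's
        wavelet/shearlet feature cascades)."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        cH, cV, cD = coeffs[1]                      # detail subbands at the coarsest level
        return np.abs(cH) + np.abs(cV) + np.abs(cD)

    def phase_correlation_shift(feat_a, feat_b):
        """Integer translation between two equally sized feature images by phase
        correlation (used here instead of the paper's least-squares matching).
        The shift is expressed at the feature resolution, i.e. 1/2**level of the
        full image resolution."""
        F1, F2 = np.fft.fft2(feat_a), np.fft.fft2(feat_b)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > feat_a.shape[0] // 2:
            dy -= feat_a.shape[0]
        if dx > feat_a.shape[1] // 2:
            dx -= feat_a.shape[1]
        return dy, dx
    ```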

  3. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features

    PubMed Central

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2017-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone. PMID:29123329

  4. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
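
    The coarse pre-registration stage described here (SIFT matching plus an affine model) can be illustrated with standard OpenCV calls; the fine piecewise-linear TIN stage is omitted, and this sketch is not the authors' implementation (file names and thresholds are hypothetical).

    ```python
    import cv2
    import numpy as np

    def coarse_affine_sift(input_img, reference_img, ratio=0.75):
        """Coarse pre-registration stage: SIFT keypoints, Lowe ratio-test matching and
        a RANSAC-estimated 2x3 affine transform from the input to the reference image
        (both assumed to be 8-bit grayscale). The fine TIN-based piecewise-linear
        rectification of the paper is not shown."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(input_img, None)
        kp2, des2 = sift.detectAndCompute(reference_img, None)
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < ratio * n.distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        A, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                           ransacReprojThreshold=3.0)
        return A

    # Hypothetical usage:
    #   A = coarse_affine_sift(cv2.imread('input.tif', 0), cv2.imread('reference.tif', 0))
    #   coarse = cv2.warpAffine(cv2.imread('input.tif', 0), A, (width, height))
    ```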

  5. Comparison of landmark-based and automatic methods for cortical surface registration

    PubMed Central

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.

    2009-01-01

    Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  6. Automatic lung nodule matching for the follow-up in temporal chest CT scans

    NASA Astrophysics Data System (ADS)

    Hong, Helen; Lee, Jeongjin; Shin, Yeong Gil

    2006-03-01

    We propose a fast and robust registration method for matching lung nodules of temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from the chest CT scans by an automatic segmentation method. Second, the gross translational mismatch is corrected by the optimal cube registration. This initial registration does not require extracting any anatomical landmarks. Third, the initial alignment is refined step by step by the iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established by the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment of twenty patients are reported on a per-center-of-mass point basis using the average Euclidean distance (AED) error between corresponding nodules of initial and follow-up scans. The average AED error of the twenty patients is significantly reduced from 30.0 mm to 4.7 mm by our registration. Experimental results show that our registration method aligns the lung nodules much faster than the conventional ones using a distance measure. The accurate and fast results of our method should be more useful for the radiologist's evaluation of pulmonary nodules on chest CT scans.
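
    The fourth stage, pairing nodules by smallest Euclidean distance after alignment, is a nearest-neighbour assignment; a minimal sketch using a k-d tree follows, with the distance threshold being an arbitrary assumption rather than a value from the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def match_nodules(baseline_cm, followup_cm, max_dist_mm=15.0):
        """Pair each baseline nodule center of mass with its nearest follow-up nodule
        (both already in the registered coordinate frame) and report the average
        Euclidean distance (AED) over accepted pairs."""
        tree = cKDTree(followup_cm)
        dist, idx = tree.query(baseline_cm)
        accepted = dist <= max_dist_mm
        pairs = [(i, int(j)) for i, (j, ok) in enumerate(zip(idx, accepted)) if ok]
        aed = float(dist[accepted].mean()) if accepted.any() else float('nan')
        return pairs, aed
    ```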

  7. Registration uncertainties between 3D cone beam computed tomography and different reference CT datasets in lung stereotactic body radiation therapy.

    PubMed

    Oechsner, Markus; Chizzali, Barbara; Devecka, Michal; Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona

    2016-10-26

    The aim of this study was to analyze differences in couch shifts (setup errors) resulting from image registration of different CT datasets with free breathing cone beam CTs (FB-CBCT). Both automatic and manual image registrations were performed, and registration results were correlated with tumor characteristics. FB-CBCT image registration was performed for 49 patients with lung lesions using slow planning CT (PCT), average intensity projection (AIP), maximum intensity projection (MIP) and mid-ventilation CTs (MidV) as reference images. Both automatic and manual image registrations were applied. Shift differences were evaluated between the registered CT datasets for automatic and manual registration, respectively. Furthermore, differences between automatic and manual registration were analyzed for the same CT datasets. The registration results were statistically analyzed and correlated with tumor characteristics (3D tumor motion, tumor volume, superior-inferior (SI) distance, tumor environment). Median 3D shift differences over all patients were between 0.5 mm (AIPvsMIP) and 1.9 mm (MIPvsPCT and MidVvsPCT) for the automatic registration and between 1.8 mm (AIPvsPCT) and 2.8 mm (MIPvsPCT and MidVvsPCT) for the manual registration. For some patients, large shift differences (>5.0 mm) were found (maximum 10.5 mm, automatic registration). Comparing automatic vs manual registrations for the same reference CTs, ∆AIP achieved the smallest (1.1 mm) and ∆MIP the largest (1.9 mm) median 3D shift differences. The standard deviation (variability) for the 3D shift differences was also the smallest for ∆AIP (1.1 mm). Significant correlations (p < 0.01) between 3D shift difference and 3D tumor motion (AIPvsMIP, MIPvsMidV) and SI distance (AIPvsMIP) (automatic) and also for 3D tumor motion (∆PCT, ∆MidV; automatic vs manual) were found. Using different CT datasets for image registration with FB-CBCTs can result in different 3D couch shifts. Manual registrations yielded partly different 3D shifts than automatic registrations. AIP CTs yielded the smallest shift differences and might be the most appropriate CT dataset for registration with 3D FB-CBCTs.

  8. A robust and hierarchical approach for the automatic co-registration of intensity and visible images

    NASA Astrophysics Data System (ADS)

    González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José

    2012-09-01

    This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.

  9. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification

    PubMed Central

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2013-01-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the “gold standard” to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification. PMID:24386527
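
    The similarity measures and search strategy named in this abstract (NCC, NMI, Downhill Simplex) can be sketched compactly; the DRR projection itself is left as a placeholder function `render_drr`, so the code below is an illustrative outline rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def ncc(a, b):
        """Normalized cross correlation between two images."""
        a = a.astype(float) - a.mean()
        b = b.astype(float) - b.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

    def nmi(a, b, bins=64):
        """Normalized mutual information, (H(A) + H(B)) / H(A, B), from a joint histogram."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = hist / hist.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        ha = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hb = -np.sum(py[py > 0] * np.log(py[py > 0]))
        hab = -np.sum(p[p > 0] * np.log(p[p > 0]))
        return float((ha + hb) / hab)

    def register_3d_to_2d(ct_volume, dr_image, render_drr, pose0, metric=nmi):
        """Downhill Simplex (Nelder-Mead) search over the pose parameters that maximize
        the similarity between the DRR rendered from CT and the DR image. `render_drr`
        is an assumed placeholder for a projection routine (e.g. a Gaussian-weighted,
        threshold-based or average-based projection) and is not provided here."""
        cost = lambda pose: -metric(render_drr(ct_volume, pose), dr_image)
        return minimize(cost, pose0, method='Nelder-Mead').x
    ```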

  10. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification.

    PubMed

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-03

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification.

  11. Automatic intensity-based 3D-to-2D registration of CT volume and dual-energy digital radiography for the detection of cardiac calcification

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digital rendering radiographs (DRR) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 +/- 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 +/- 0.03 to 0.25 +/- 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification.

  12. SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barber, J; University of Sydney, Sydney, NSW; Sykes, J

    Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.

  13. Generalized procrustean image deformation for subtraction of mammograms

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.

    1999-05-01

    This project is a preliminary evaluation of two simple fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] X [0,1], where the actual value of a coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This insures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axes line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector, relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After images are deformed, their grayscales are adjusted by applying linear regression to pixel value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinates method results in the most accurate registration of the transformations considered.
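
    The first deformation method, assigning each pixel its fractional distance between tissue boundaries along the horizontal and vertical directions, can be sketched directly from a breast segmentation mask; the implementation below is a simplified illustration, not the original code, and the mask is a hypothetical input.

    ```python
    import numpy as np

    def fractional_coordinates(breast_mask):
        """First deformation method: for each tissue pixel, the fractional distance
        between tissue boundaries along its row (u) and column (v), each in [0, 1].
        breast_mask is a hypothetical boolean segmentation of the breast region."""
        h, w = breast_mask.shape
        u = np.zeros((h, w))
        v = np.zeros((h, w))
        for r in range(h):
            cols = np.flatnonzero(breast_mask[r])
            if cols.size > 1:
                u[r, cols] = (cols - cols[0]) / (cols[-1] - cols[0])
        for c in range(w):
            rows = np.flatnonzero(breast_mask[:, c])
            if rows.size > 1:
                v[rows, c] = (rows - rows[0]) / (rows[-1] - rows[0])
        return u, v
    ```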

  14. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10^-3), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10^-6). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10^-3) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE as low as 3.9 ± 2.7 mm and 3.8 ± 2.3 mm were achieved for the GTV_P and GTV_LN, respectively, using rereferenced registration. Conclusions: Target shape, volume, and configuration changes during radiation therapy limited the accuracy of standard rigid registration for image-guided localization in locally-advanced lung cancer. Significant error reductions were possible using other rigid registration techniques, with LE approaching the lower limit imposed by interfraction target variability throughout treatment.

  15. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWiFS (1000m).
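
    One of the matching criteria named above is mutual information. For reference, here is a generic joint-histogram estimate of mutual information between two equally sized images; the bin count and toy data are arbitrary assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally sized images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# An image shares more information with itself than with unrelated noise.
rng = np.random.default_rng(1)
img = rng.random((256, 256))
print(mutual_information(img, img), mutual_information(img, rng.random(img.shape)))
```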

  16. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  17. The analysis of selected orientation methods of architectural objects' scans

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub S.; Kajdewicz, Irmina; Zawieska, Dorota

    2015-05-01

    Terrestrial laser scanning is commonly used in many areas, inter alia in modelling architectural objects. One of the most important parts of TLS data processing is scan registration, which significantly affects the accuracy of the resulting high-resolution photogrammetric documentation. This process is time consuming, especially in the case of a large number of scans, and is mostly based on automatic detection and semi-automatic measurement of control points placed on the object. In the case of complicated historical buildings, it is sometimes forbidden to place survey targets on an object, or it may be difficult to distribute the targets in an optimal way. Such problems encourage the search for new methods of scan registration which eliminate the step of placing survey targets on the object. In this paper, the results of a target-based registration method are presented. The survey targets placed on the walls of historical chambers of the Museum of King Jan III's Palace at Wilanów and on the walls of the ruins of the Bishops Castle in Iłża were used for scan orientation. Several variants of orientation were performed, taking into account different placements and numbers of survey marks. In subsequent experiments, raster images were generated from the scans, and the SIFT and SURF image processing algorithms were used to automatically search for corresponding natural points; the use of these automatically identified points for TLS data orientation was analysed. The results of both methods for TLS data registration were summarized and presented in numerical and graphical form.
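
    The SIFT/SURF step mentioned above finds corresponding natural points between raster images rendered from different scans. A minimal OpenCV sketch of that matching stage follows; the file names are placeholders and this is not the authors' processing chain (SIFT is included in the main OpenCV distribution from version 4.4 onward).

```python
import cv2
import numpy as np

# Raster images rendered from two TLS stations (placeholder file names).
img1 = cv2.imread("scan_station_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan_station_2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour matches keeps distinctive pairs.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# Corresponding pixel coordinates, usable as tie points for scan orientation.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
print(f"{len(good)} candidate correspondences")
```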

  18. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    PubMed

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.

  19. ACIR: automatic cochlea image registration

    NASA Astrophysics Data System (ADS)

    Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland

    2017-02-01

    Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To get these measurements, a segmentation method of cochlea medical images is needed. An important pre-processing step for good cochlea segmentation involves efficient image registration. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a considerable challenge for the automated registration of the different image modalities. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multi-modal human cochlea images is proposed. This method is based on using small areas that have clear structures from both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent Optimizer (ASGD) and Mattes's Mutual Information metric (MMI) to estimate 3D rigid transform parameters. State-of-the-art medical image registration optimizers published over the last two years are studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human intervention. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset which can be downloaded for free from a public XNAT server.
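
    ACIR itself is built on elastix, but the same ingredients (a rigid 3D transform, Mattes mutual information, and a gradient-descent optimizer) can be assembled in SimpleITK. The sketch below is a stand-in for illustration only: the file names are placeholders and the parameter values are my own assumptions, not taken from the paper.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("cochlea_ct.nii.gz", sitk.sitkFloat32)    # placeholder
moving = sitk.ReadImage("cochlea_mr.nii.gz", sitk.sitkFloat32)   # placeholder

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.05)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY,
    ),
    inPlace=False,
)

transform = reg.Execute(fixed, moving)                 # rigid 3D parameters
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```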

  20. DIRBoost-an algorithm for boosting deformable image registration: application to lung CT intra-subject registration.

    PubMed

    Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W

    2014-04-01

    We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.

    PubMed

    He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang

    2016-05-01

    A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline registration, and the proposed deformable closest point registration) are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average runtime of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect the liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.
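
    The Dice similarity coefficient used to report accuracy above is simple to compute from two binary label volumes; a generic sketch follows (the toy volumes are illustrative only).

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Two partially overlapping boxes in a small volume.
vol_a = np.zeros((50, 50, 50), dtype=bool)
vol_b = np.zeros_like(vol_a)
vol_a[10:40, 10:40, 10:40] = True
vol_b[15:45, 10:40, 10:40] = True
print(round(dice(vol_a, vol_b), 3))
```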

  2. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds and close-range images is a key step in high-precision 3D reconstruction of cultural relic objects. Because high texture resolution is required in this field, registering point cloud and image data for object reconstruction leads to a one-to-multiple problem: a single point cloud must be registered to many images. In current commercial software, this registration is achieved by manually partitioning the point cloud, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points between each image and the point cloud. This process not only greatly reduces working efficiency but also degrades registration precision and causes texture seams in the colored point cloud. To solve these problems, this paper takes a whole-object image as intermediate data and uses image matching to establish the one-to-one correspondence between the point cloud and multiple images automatically. Matching between the central-projection reflectance intensity image of the point cloud and the optical images is applied to find feature points of the same name automatically, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to register the two kinds of data with high accuracy. This method is expected to support high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects and has both scientific research value and practical significance.

  3. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
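
    The paired comparison reported above uses the Wilcoxon signed rank test. A minimal SciPy sketch of such a comparison is shown below; the per-patient times are made-up illustrative numbers, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative per-patient image-fusion times (seconds) for the two methods.
positioning = np.array([11, 9, 12, 8, 14, 10, 13, 11, 9, 12, 15, 10, 8, 11, 13, 9, 12, 10])
sweeping = np.array([32, 30, 35, 28, 34, 31, 33, 29, 36, 30, 38, 27, 31, 33, 35, 29, 32, 34])

stat, p = wilcoxon(positioning, sweeping)
print(f"Wilcoxon statistic = {stat}, p = {p:.4g}")
```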

  4. An Automatic Multi-Target Independent Analysis Framework for Non-Planar Infrared-Visible Registration.

    PubMed

    Sun, Xinglong; Xu, Tingfa; Zhang, Jizhou; Zhao, Zishu; Li, Yuankun

    2017-07-26

    In this paper, we propose a novel automatic multi-target registration framework for non-planar infrared-visible videos. Previous approaches usually analyzed multiple targets together and then estimated a global homography for the whole scene; however, this cannot achieve precise multi-target registration when the scene is non-planar. Our framework is devoted to solving the problem using feature matching and multi-target tracking. The key idea is to analyze and register each target independently. We present a fast and robust feature matching strategy, where only the features on the corresponding foreground pairs are matched. Besides, new reservoirs based on the Gaussian criterion are created for all targets, and a multi-target tracking method is adopted to determine the relationships between the reservoirs and foreground blobs. With the matches in the corresponding reservoir, the homography of each target is computed according to its moving state. We tested our framework on both public near-planar and non-planar datasets. The results demonstrate that the proposed framework outperforms the state-of-the-art global registration method and the manual global registration matrix in all tested datasets.
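
    The core step above is estimating one homography per tracked target from its own feature matches. A generic OpenCV sketch of that per-target estimation with RANSAC outlier rejection follows; the synthetic points and function name are illustrative assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def target_homography(pts_ir, pts_vis):
    """Homography for a single target from matched IR/visible points (RANSAC)."""
    H, inliers = cv2.findHomography(pts_ir, pts_vis, cv2.RANSAC, 3.0)
    return H, inliers

# Synthetic matches on one target, related by a known homography plus noise.
rng = np.random.default_rng(2)
pts_ir = rng.uniform(0, 200, size=(30, 2)).astype(np.float32)
H_true = np.array([[1.02, 0.01, 5.0], [0.00, 0.98, -3.0], [0.0, 0.0, 1.0]])
proj = np.hstack([pts_ir, np.ones((30, 1), dtype=np.float32)]) @ H_true.T
pts_vis = (proj[:, :2] / proj[:, 2:]).astype(np.float32)
pts_vis += rng.normal(0.0, 0.5, pts_vis.shape).astype(np.float32)
H_est, _ = target_homography(pts_ir, pts_vis)
print(np.round(H_est, 3))
```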

  5. Agile Multi-Scale Decompositions for Automatic Image Registration

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-01-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
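
    Shearlet transforms require a dedicated toolbox, but the wavelet half of the feature extraction described above can be sketched with PyWavelets. The code below shows only that wavelet part, under assumptions of my own (wavelet family, decomposition level, and the stacked-magnitude feature layout).

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="db2", level=3):
    """Per-scale magnitudes of 2D wavelet detail coefficients as feature maps."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    # coeffs[0] is the coarse approximation; each remaining entry is a
    # (horizontal, vertical, diagonal) detail triple for one scale.
    return [np.abs(np.stack(details)) for details in coeffs[1:]]

rng = np.random.default_rng(3)
features = wavelet_features(rng.random((128, 128)))
print([f.shape for f in features])
```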

  6. Automatic patient alignment system using 3D ultrasound.

    PubMed

    Kaar, Marcus; Figl, Michael; Hoffmann, Rainer; Birkfellner, Wolfgang; Stock, Markus; Georg, Dietmar; Goldner, Gregor; Hummel, Johann

    2013-04-01

    Recent developments in radiation therapy such as intensity modulated radiotherapy (IMRT) or dose painting promise to provide better dose distribution on the tumor. For effective application of these methods, the exact positioning of the patient and the localization of the irradiated organ and surrounding structures are crucial. Especially with respect to the treatment of the prostate, ultrasound (US) allows for differentiation of soft tissues and has therefore been applied in various repositioning systems, such as BAT or Clarity. The authors built a new system which uses 3D US at both sites, the CT room and the intervention room, and applied a 3D/3D US/US registration for automatic repositioning. In a first step the authors applied image preprocessing methods to prepare the US images for an optimal registration process. For the 3D/3D registration procedure five different metrics were evaluated. To find the image metric which fits best for a particular patient, three 3D US images were taken at the CT site and registered to each other. From these results an US registration error was calculated. The most successful image metric was then applied for the US/US registration process. The success of the whole repositioning method was assessed by taking the results of an ExacTrac system as the gold standard. The US/US registration error was found to be 2.99 ± 1.54 mm with respect to the mutual information metric by Mattes (eleven patients), which proved to be the most suitable of the assessed metrics. For the complete repositioning chain the error amounted to 4.15 ± 1.20 mm (ten patients). The authors developed a system for patient repositioning which works automatically without the necessity of user interaction, with an accuracy which seems to be suitable for clinical application.

  7. Automatic deformable diffusion tensor registration for fiber population analysis.

    PubMed

    Irfanoglu, M O; Machiraju, R; Sammet, S; Pierpaoli, C; Knopp, M V

    2008-01-01

    In this work, we propose a novel method for deformable tensor-to-tensor registration of Diffusion Tensor Images. Our registration method models the distances between tensors with Geodesic-Loxodromes and employs a version of the Multi-Dimensional Scaling (MDS) algorithm to unfold the manifold described with this metric. The vector images obtained through MDS, which retain the shape properties of the tensors, are fed into a multi-step vector-image registration scheme, and the resulting deformation fields are used to reorient the tensor fields. Results on brain DTI indicate that the proposed method is very suitable for deformable fiber-to-fiber correspondence and DTI-atlas construction.

  8. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations.

    PubMed

    Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L

    1997-04-01

    This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input, and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point thin-plate spline (TPS) warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.

  9. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.

  10. Automatic segmentation and co-registration of gated CT angiography datasets: measuring abdominal aortic pulsatility

    NASA Astrophysics Data System (ADS)

    Wentz, Robert; Manduca, Armando; Fletcher, J. G.; Siddiki, Hassan; Shields, Raymond C.; Vrtiska, Terri; Spencer, Garrett; Primak, Andrew N.; Zhang, Jie; Nielson, Theresa; McCollough, Cynthia; Yu, Lifeng

    2007-03-01

    Purpose: To develop robust, novel segmentation and co-registration software to analyze temporally overlapping CT angiography datasets, with an aim to permit automated measurement of regional aortic pulsatility in patients with abdominal aortic aneurysms. Methods: We perform retrospective gated CT angiography in patients with abdominal aortic aneurysms. Multiple, temporally overlapping, time-resolved CT angiography datasets are reconstructed over the cardiac cycle, with aortic segmentation performed using a priori anatomic assumptions for the aorta and heart. Visual quality assessment is performed following automatic segmentation with manual editing. Following subsequent centerline generation, centerlines are cross-registered across phases, with internal validation of co-registration performed by examining registration at the regions of greatest diameter change (i.e. when the second derivative is maximal). Results: We have performed gated CT angiography in 60 patients. Automatic seed placement is successful in 79% of datasets, requiring either no editing (70%) or minimal editing (less than 1 minute; 12%). Causes of error include segmentation into adjacent, high-attenuating, nonvascular tissues; small segmentation errors associated with calcified plaque; and segmentation of non-renal, small paralumbar arteries. Internal validation of cross-registration demonstrates appropriate registration in our patient population. In general, we observed that aortic pulsatility can vary along the course of the abdominal aorta. Pulsation can also vary within an aneurysm as well as between aneurysms, but the clinical significance of these findings remains unknown. Conclusions: Visualization of large vessel pulsatility is possible using ECG-gated CT angiography, partial scan reconstruction, automatic segmentation, centerline generation, and coregistration of temporally resolved datasets.

  11. Automatic initialization for 3D bone registration

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Taylor, Russell H.; Fichtinger, Gabor

    2008-03-01

    In image-guided bone surgery, sample points collected from the surface of the bone are registered to the preoperative CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques are generally very sensitive to the initial alignment of the datasets. Poor initialization significantly increases the chances of getting trapped in local minima. In order to reduce the risk of local minima, the registration is manually initialized by locating the sample points close to the corresponding points on the CT model. In this paper, we present an automatic initialization method that aligns the sample points collected from the surface of the pelvis with the CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a large number of CT scans, as the prior knowledge to guide the initial alignment. The mean shape is constant for all registrations and facilitates the inclusion of application-specific information into the registration process. The CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape. This will, in turn, lead to initial alignment of the sample points with the CT model. The experiments using a dry pelvis and two cadavers show that the method can align the randomly dislocated datasets close enough for successful registration. The standard ICP has been used for final registration of datasets.
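
    For context, the ICP step mentioned above alternates nearest-neighbour matching with a least-squares rigid fit. The following is a didactic numpy/SciPy sketch of point-to-point ICP, not the registration pipeline used in the paper; real systems add outlier handling and, as argued above, a good initialization.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (SVD/Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    """Point-to-point ICP: alternate closest-point matching and rigid fitting."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)
        R, t = best_fit_rigid(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```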

  12. Comparison of manual and automatic MR-CT registration for radiotherapy of prostate cancer.

    PubMed

    Korsager, Anne Sofie; Carl, Jesper; Riis Østergaard, Lasse

    2016-05-08

    In image-guided radiotherapy (IGRT) of prostate cancer, delineation of the clinical target volume (CTV) often relies on magnetic resonance (MR) because of its good soft-tissue visualization. Registration of MR and computed tomography (CT) is required in order to add this accurate delineation to the dose planning CT. An automatic approach for local MR-CT registration of the prostate has previously been developed using a voxel property-based registration as an alternative to a manual landmark-based registration. The aim of this study is to compare the two registration approaches and to investigate the clinical potential for replacing the manual registration with the automatic registration. Registrations and analysis were performed for 30 prostate cancer patients treated with IGRT using a Ni-Ti prostate stent as a fiducial marker. The comparison included computing translational and rotational differences between the approaches, visual inspection, and computing the overlap of the CTV. The computed mean translational difference was 1.65, 1.60, and 1.80 mm and the computed mean rotational difference was 1.51°, 3.93°, and 2.09° in the superior/inferior, anterior/posterior, and medial/lateral direction, respectively. The sensitivity of overlap was 87%. The results demonstrate that the automatic registration approach performs registrations comparable to the manual registration.

  13. Shearlet Features for Registration of Remotely Sensed Multitemporal Images

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline

    2015-01-01

    We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.

  14. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor images has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach that exploits complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to LiDAR point clouds is presented. LiDAR DSM and intensity information has been utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of a 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computational cost. Therefore, the reliability of registration is improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results obtained are discussed.

  15. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between panoramic image sequences and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS and IMU aided structure from motion (SfM). The initial EoPs of panoramic images are obtained at the same time. Secondly, vehicles in panoramic images are extracted by the Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in point clouds according to the initial EoPs. Finally, translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs based on the Particle Swarm Optimization (PSO), resulting in a finer registration between panoramic image sequences and point clouds. Experiments on two challenging urban scenes were conducted to assess the proposed method, and the final registration errors of these two scenes were both less than three pixels, which demonstrates a high level of automation, robustness and accuracy.

  16. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.

  17. A framework for automatic creation of gold-standard rigid 3D-2D registration datasets.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2017-02-01

    Advanced image-guided medical procedures incorporate 2D intra-interventional information into pre-interventional 3D image and plan of the procedure through 3D/2D image registration (32R). To enter clinical use, and even for publication purposes, novel and existing 32R methods have to be rigorously validated. The performance of a 32R method can be estimated by comparing it to an accurate reference or gold standard method (usually based on fiducial markers) on the same set of images (gold standard dataset). Objective validation and comparison of methods are possible only if evaluation methodology is standardized, and the gold standard dataset is made publicly available. Currently, very few such datasets exist and only one contains images of multiple patients acquired during a procedure. To encourage the creation of gold standard 32R datasets, we propose an automatic framework. The framework is based on rigid registration of fiducial markers. The main novelty is spatial grouping of fiducial markers on the carrier device, which enables automatic marker localization and identification across the 3D and 2D images. The proposed framework was demonstrated on clinical angiograms of 20 patients. Rigid 32R computed by the framework was more accurate than that obtained manually, with the respective target registration error below 0.027 mm compared to 0.040 mm. The framework is applicable for gold standard setup on any rigid anatomy, provided that the acquired images contain spatially grouped fiducial markers. The gold standard datasets and software will be made publicly available.

  18. Comparison of manual and automatic MR‐CT registration for radiotherapy of prostate cancer

    PubMed Central

    Carl, Jesper; Østergaard, Lasse Riis

    2016-01-01

    In image‐guided radiotherapy (IGRT) of prostate cancer, delineation of the clinical target volume (CTV) often relies on magnetic resonance (MR) because of its good soft‐tissue visualization. Registration of MR and computed tomography (CT) is required in order to add this accurate delineation to the dose planning CT. An automatic approach for local MR‐CT registration of the prostate has previously been developed using a voxel property‐based registration as an alternative to a manual landmark‐based registration. The aim of this study is to compare the two registration approaches and to investigate the clinical potential for replacing the manual registration with the automatic registration. Registrations and analysis were performed for 30 prostate cancer patients treated with IGRT using a Ni‐Ti prostate stent as a fiducial marker. The comparison included computing translational and rotational differences between the approaches, visual inspection, and computing the overlap of the CTV. The computed mean translational difference was 1.65, 1.60, and 1.80 mm and the computed mean rotational difference was 1.51°, 3.93°, and 2.09° in the superior/inferior, anterior/posterior, and medial/lateral direction, respectively. The sensitivity of overlap was 87%. The results demonstrate that the automatic registration approach performs registrations comparable to the manual registration. PACS number(s): 87.57.nj, 87.61.‐c, 87.57.Q‐, 87.56.J‐ PMID:27167285

  19. Automatically processed alpha-track radon monitor

    DOEpatents

    Langner, Jr., G. Harold

    1993-01-01

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  20. Automatically processed alpha-track radon monitor

    DOEpatents

    Langner, G.H. Jr.

    1993-01-12

    An automatically processed alpha-track radon monitor is provided which includes a housing having an aperture allowing radon entry, and a filter that excludes the entry of radon daughters into the housing. A flexible track registration material is located within the housing that records alpha-particle emissions from the decay of radon and radon daughters inside the housing. The flexible track registration material is capable of being spliced such that the registration material from a plurality of monitors can be spliced into a single strip to facilitate automatic processing of the registration material from the plurality of monitors. A process for the automatic counting of radon registered by a radon monitor is also provided.

  1. 3D registration of surfaces for change detection in medical images

    NASA Astrophysics Data System (ADS)

    Fisher, Elizabeth; van der Stelt, Paul F.; Dunn, Stanley M.

    1997-04-01

    Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature-based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically-defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.

  2. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  3. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100

  4. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.
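
    As a rough illustration of the GA-based alignment idea shared by this record and the preceding one, the sketch below evolves a 2D rigid transform that maximizes a simple point-matching score. The fitness function, parameter bounds, and GA operators are generic assumptions of mine, not the paper's Normalized Sum of Matching Scores or its constrained search space.

```python
import numpy as np
from scipy.spatial import cKDTree

def fitness(params, src, tree, tol=0.1):
    """Fraction of transformed source points within `tol` of the target cloud."""
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    moved = src @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])
    dist, _ = tree.query(moved)
    return float(np.mean(dist < tol))

def ga_align(src, dst, pop_size=60, generations=80, seed=0):
    """Tiny genetic algorithm over a 2D rigid transform (theta, tx, ty)."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(dst)
    bounds = np.array([np.pi, 5.0, 5.0])
    pop = rng.uniform(-1.0, 1.0, (pop_size, 3)) * bounds
    for _ in range(generations):
        scores = np.array([fitness(p, src, tree) for p in pop])
        # Tournament selection: keep the better of two random parents.
        a = rng.integers(pop_size, size=pop_size)
        b = rng.integers(pop_size, size=pop_size)
        parents = np.where((scores[a] > scores[b])[:, None], pop[a], pop[b])
        # Arithmetic crossover with a shuffled copy, then Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        w = rng.random((pop_size, 1))
        children = w * parents + (1.0 - w) * mates
        children += rng.normal(0.0, 0.02, children.shape) * bounds
        children[0] = pop[int(np.argmax(scores))]   # elitism
        pop = children
    scores = np.array([fitness(p, src, tree) for p in pop])
    return pop[int(np.argmax(scores))]

# Toy usage: recover a known rotation and translation between 2D point clouds.
rng = np.random.default_rng(1)
dst = rng.uniform(0.0, 10.0, (400, 2))
theta, t = 0.4, np.array([1.5, -2.0])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = (dst - t) @ R                      # so that src @ R.T + t == dst
print(ga_align(src, dst))                # expected near (0.4, 1.5, -2.0)
```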

  5. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT-scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration based on photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in 2D images. Concurrently, curvature information is used to detect corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
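
    The quantity being minimized above is the reprojection error of matched 2D/3D points through a camera projection matrix. A small numpy sketch of evaluating that error is given below; the camera matrix and point values are made-up assumptions for illustration.

```python
import numpy as np

def reprojection_error(P, points_3d, points_2d):
    """RMS pixel distance between observed 2D points and projections of their
    3D counterparts through the 3x4 camera matrix P."""
    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])   # homogeneous
    proj = X @ P.T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.sqrt(np.mean(np.sum((proj - points_2d) ** 2, axis=1))))

# Simple pinhole camera looking down the +z axis (illustrative values).
P = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts3d = np.array([[0.1, 0.0, 2.0], [-0.2, 0.1, 3.0], [0.05, -0.1, 2.5]])
pts2d = np.array([[360.0, 240.0], [266.7, 266.7], [336.0, 208.0]])
print(reprojection_error(P, pts3d, pts2d))   # close to zero for these points
```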

  6. Model-based registration for assessment of spinal deformities in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Knutsson, Hans

    2014-01-01

    Detailed analysis of spinal deformity is important within orthopaedic healthcare, in particular for assessment of idiopathic scoliosis. This paper addresses this challenge by proposing an image analysis method, capable of providing a full three-dimensional spine characterization. The proposed method is based on the registration of a highly detailed spine model to image data from computed tomography. The registration process provides an accurate segmentation of each individual vertebra and the ability to derive various measures describing the spinal deformity. The derived measures are estimated from landmarks attached to the spine model and transferred to the patient data according to the registration result. Evaluation of the method provides an average point-to-surface error of 0.9 mm ± 0.9 (comparing segmentations), and an average target registration error of 2.3 mm ± 1.7 (comparing landmarks). Comparing automatic and manual measurements of axial vertebral rotation provides a mean absolute difference of 2.5° ± 1.8, which is on a par with other computerized methods for assessing axial vertebral rotation. A significant advantage of our method, compared to other computerized methods for rotational measurements, is that it does not rely on vertebral symmetry for computing the rotational measures. The proposed method is fully automatic and computationally efficient, only requiring three to four minutes to process an entire image volume covering vertebrae L5 to T1. Given the use of landmarks, the method can be readily adapted to estimate other measures describing a spinal deformity by changing the set of employed landmarks. In addition, the method has the potential to be utilized for accurate segmentations of the vertebrae in routine computed tomography examinations, given the relatively low point-to-surface error.

  7. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.

  8. An automatic approach for 3D registration of CT scans

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas

    2012-03-01

    CT (Computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal. The doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose an automatic registration algorithm that helps healthcare personnel to automatically align corresponding scans from 'Study' to 'Atlas'. The proposed algorithm is capable of aligning both 'Atlas' and 'Study' into the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross correlation method is used to identify and register various body parts.
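
    The 3D cross-correlation step described above can be sketched with FFTs: the peak of the circular cross correlation gives the integer voxel shift between two volumes. The function below is a generic illustration under that assumption, not the authors' implementation.

```python
import numpy as np

def translation_by_cross_correlation(atlas, study):
    """Integer voxel shift such that np.roll(study, shift) best matches atlas,
    found at the peak of the FFT-based circular cross correlation."""
    f = np.fft.fftn(atlas - atlas.mean())
    g = np.fft.fftn(study - study.mean())
    corr = np.fft.ifftn(f * np.conj(g)).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # Wrap shifts larger than half the volume size to negative offsets.
    return np.where(shift > np.array(corr.shape) // 2, shift - np.array(corr.shape), shift)

# Toy check: the shift that maps the study back onto the atlas is recovered.
rng = np.random.default_rng(4)
atlas = rng.random((32, 32, 32))
study = np.roll(atlas, shift=(-3, 5, -2), axis=(0, 1, 2))
print(translation_by_cross_correlation(atlas, study))   # -> [ 3 -5  2]
```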

  9. The heritability of the functional connectome is robust to common nonlinear registration methods

    NASA Astrophysics Data System (ADS)

    Hafzalla, George W.; Prasad, Gautam; Baboyan, Vatche G.; Faskowitz, Joshua; Jahanshad, Neda; McMahon, Katie L.; de Zubicaray, Greig I.; Wright, Margaret J.; Braskie, Meredith N.; Thompson, Paul M.

    2016-03-01

    Nonlinear registration algorithms are routinely used in brain imaging, to align data for inter-subject and group comparisons, and for voxelwise statistical analyses. To understand how the choice of registration method affects maps of functional brain connectivity in a sample of 611 twins, we evaluated three popular nonlinear registration methods: Advanced Normalization Tools (ANTs), Automatic Registration Toolbox (ART), and FMRIB's Nonlinear Image Registration Tool (FNIRT). Using both structural and functional MRI, we applied each of the three methods to align the MNI152 brain template, and 80 regions of interest (ROIs), to each subject's T1-weighted (T1w) anatomical image. We then transformed each subject's ROIs onto the associated resting state functional MRI (rs-fMRI) scans and computed a connectivity network or functional connectome for each subject. Given the different degrees of genetic similarity between pairs of monozygotic (MZ) and same-sex dizygotic (DZ) twins, we used structural equation modeling to estimate the additive genetic influences on the elements of the functional networks, or their heritability. The functional connectome and derived statistics were relatively robust to nonlinear registration effects.
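
    The connectivity network described here is commonly summarized as an ROI-by-ROI correlation matrix computed from region-averaged rs-fMRI time series; the sketch below uses Pearson correlation and synthetic data as illustrative assumptions (the study's exact connectivity measure may differ):

      import numpy as np

      def functional_connectome(roi_timeseries):
          # roi_timeseries: array of shape (n_rois, n_timepoints), one mean BOLD
          # signal per region of interest after the ROIs have been mapped onto
          # the subject's rs-fMRI space. Returns an (n_rois, n_rois) matrix.
          return np.corrcoef(roi_timeseries)

      # Toy example with 80 ROIs and 200 time points of synthetic data
      rng = np.random.default_rng(0)
      ts = rng.standard_normal((80, 200))
      conn = functional_connectome(ts)
      print(conn.shape)  # (80, 80)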

  10. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery requires intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system that works with a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame that can be easily attached to a commercially available skull clamp was designed. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated by high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost way; the known geometry also helped in estimating the errors from fiducial localization in image space through image processing, and in patient space through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348+/-0.028 mm, compared with 1.976+/-0.778 mm for manual registration. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring any user interaction during the neurosurgery.
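
    The fiducial-based rigid registration and the fiducial registration error reported above can be illustrated with the standard SVD-based point-set alignment; the function names, and the use of this particular closed-form solution, are assumptions for illustration rather than the system's actual implementation:

      import numpy as np

      def rigid_register(image_pts, patient_pts):
          # Least-squares rigid transform (R, t) mapping image-space fiducials
          # onto patient-space fiducials via the SVD of the cross-covariance.
          ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
          H = (image_pts - ci).T @ (patient_pts - cp)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:      # correct a reflection
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = cp - R @ ci
          return R, t

      def fiducial_registration_error(image_pts, patient_pts, R, t):
          # RMS distance between transformed image fiducials and patient fiducials.
          mapped = image_pts @ R.T + t
          return np.sqrt(np.mean(np.sum((mapped - patient_pts) ** 2, axis=1)))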

  11. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  12. Automatic Co-Registration of QuickBird Data for Change Detection Applications

    NASA Technical Reports Server (NTRS)

    Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.

    2006-01-01

    This viewgraph presentation reviews the use of the Automatic Fusion of Image Data System (AFIDS) for automatic co-registration of QuickBird data to ascertain whether changes have occurred in images. The process is outlined, and views from Iraq and Los Angeles are shown to illustrate the process.

  13. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D correspondences onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, which improves computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.

  14. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Baochun; Huang, Cheng; Zhou, Shoujun

    Purpose: A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. Methods: The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration) are used to establish shape correspondence. Results: The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. Conclusions: The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.

  15. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, for example when obtained from low-cost global positioning system and inertial measurement unit sensors.

  16. Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.

    PubMed

    Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2010-11-01

    Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.

  17. Application of a spectrally filtered probing light beam and RGB decomposition of microphotographs for flow registration of ultrasonically enhanced agglutination of erythrocytes

    NASA Astrophysics Data System (ADS)

    Doubrovski, V. A.; Ganilova, Yu. A.; Zabenkov, I. V.

    2013-08-01

    We propose a development of the flow microscopy method to increase its resolving power for the registration of erythrocyte agglutination. We experimentally show that the action of an ultrasonic standing wave on an agglutinating blood-serum mixture leads to the formation of erythrocytic immune complexes large enough that it becomes possible to propose a new two-wave optical method for registering the process of erythrocyte agglutination using RGB decomposition of microphotographs of the flowing mixture under study. This approach increases the reliability of registration of erythrocyte agglutination and, consequently, the reliability of blood typing. Our results can be used in the development of instruments for automatic human blood typing.

  18. Image Registration Workshop Proceedings

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline (Editor)

    1997-01-01

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being and will continue to be generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work which has been grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

  19. TU-F-BRF-03: Effect of Radiation Therapy Planning Scan Registration On the Dose in Lung Cancer Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, A; Contee, C; White, B

    Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2–7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.

  20. A method for automatic matching of multi-timepoint findings for enhanced clinical workflow

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Dinesh, MS; Devarakota, Pandu R.; Valadez, Gerardo Hermosillo; Wolf, Matthias

    2013-03-01

    Non-interventional diagnostics (CT or MR) enables early identification of diseases like cancer. Often, lesion growth assessment during follow-up is used to distinguish benign from malignant lesions. Correspondences therefore need to be found for lesions localized at each time point. Manually matching the radiological findings can be time consuming and tedious due to possible differences in orientation and position between scans. Also, the complicated nature of the disease makes physicians rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to changes in organ volume and to subpar or missing registration, and that can be done with very little computation. Traditional matching methods rely mostly on accurate image registration and apply the resulting deformation map to the findings' coordinates. This has disadvantages when accurate registration is time-consuming or may not be possible due to large organ volume differences between scans. Our novel matching approach uses supervised learning, taking advantage of the underlying CAD features that are already present and treating the matching as a classification problem. In addition, the matching can be done extremely fast and with reasonable accuracy even when image registration fails for some reason. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy of above 90% with negligible false positives on a variety of registration scenarios.

  1. LiDAR Point Cloud and Stereo Image Point Cloud Fusion

    DTIC Science & Technology

    2013-09-01

    LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration... Areas where topography is being derived, unfortunately, do... with the least amount of automatic correlation errors was used. The following graphic (Figure 12) shows the coverage of the WV1 stereo triplet as...

  2. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is as follows: First, rectangular feature templates are constructed centered on Harris corners extracted from the mask image, and motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. The optimal parameters of the affine transformation are then calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts in the images were removed with sub-pixel precision, and the computation time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
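
    The affine-parameter estimation step can be illustrated with a least-squares fit over matched feature points (NumPy's lstsq solves it via the SVD); the 2x3 parameterization and the function names below are illustrative assumptions:

      import numpy as np

      def estimate_affine_2d(src_pts, dst_pts):
          # Least-squares 2D affine transform mapping src_pts -> dst_pts.
          # src_pts, dst_pts: arrays of shape (N, 2), with N >= 3 matched points.
          n = src_pts.shape[0]
          A = np.hstack([src_pts, np.ones((n, 1))])       # (N, 3) design matrix
          M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None) # (3, 2) solution
          return M.T                                      # 2x3 affine matrix

      def apply_affine_2d(M, pts):
          # Apply the 2x3 affine matrix to an (N, 2) array of points.
          return pts @ M[:, :2].T + M[:, 2]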

  3. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70+/-2.30 (B-spline), 1.25+/-1.78 (demons), 0.93+/-1.14 (optical flow), and 4.39+/-3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
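
    The intensity-correction idea can be illustrated with plain (global) histogram matching of CBCT intensities to the planning CT; the paper's local, iteratively applied variant works on the same principle within neighborhoods. A hypothetical NumPy sketch:

      import numpy as np

      def histogram_match(source, reference):
          # Map source (e.g. CBCT) intensities so their histogram matches the
          # reference (e.g. planning CT) histogram via CDF lookup.
          s_values, s_idx, s_counts = np.unique(source.ravel(),
                                                return_inverse=True,
                                                return_counts=True)
          r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
          s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
          r_cdf = np.cumsum(r_counts).astype(np.float64) / reference.size
          matched_values = np.interp(s_cdf, r_cdf, r_values)
          return matched_values[s_idx].reshape(source.shape)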

  4. α-Information Based Registration of Dynamic Scans for Magnetic Resonance Cystography

    PubMed Central

    Han, Hao; Lin, Qin; Li, Lihong; Duan, Chaijie; Lu, Hongbing; Li, Haifang; Yan, Zengmin; Fitzgerald, John

    2015-01-01

    To continue our effort on developing magnetic resonance (MR) cystography, we introduce a novel non-rigid 3D registration method to compensate for bladder wall motion and deformation in dynamic MR scans, which are impaired by a relatively low signal-to-noise ratio in each time frame. The registration method is developed on the similarity measure of α-information, which has the potential to achieve higher registration accuracy than the commonly used mutual information (MI) measure for either mono-modality or multi-modality image registration. The α-information metric was also demonstrated to be superior to both the mean squares and the cross-correlation metrics in multi-modality scenarios. The proposed α-registration method was applied for bladder motion compensation in real patient studies, and its effect on the automatic and accurate segmentation of the bladder wall was also evaluated. Compared with the prevailing MI-based image registration approach, the presented α-information based registration was more effective in capturing the bladder wall motion and deformation, which ensured the success of the subsequent bladder wall segmentation and thus the goal of evaluating the entire bladder wall for detection and diagnosis of abnormality. PMID:26087506

  5. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because scanning conditions differ between fractions of radiotherapy, the density range of the treatment image and the planning image may differ; in such a case, the improved demons algorithm can achieve faster and more accurate registration.
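
    For context, a baseline diffeomorphic demons registration (the standard algorithm the paper improves upon, not the proposed gradient-constancy/ESM/L-BFGS variant) is available in SimpleITK; the parameter values and file handling below are hypothetical:

      import SimpleITK as sitk

      def demons_register(fixed_path, moving_path):
          # Baseline diffeomorphic demons registration with SimpleITK.
          fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
          moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)
          # Compensate for intensity-range differences between the two scans.
          moving = sitk.HistogramMatching(moving, fixed)
          demons = sitk.DiffeomorphicDemonsRegistrationFilter()
          demons.SetNumberOfIterations(100)
          demons.SetSmoothDisplacementField(True)
          demons.SetStandardDeviations(1.5)
          displacement = demons.Execute(fixed, moving)
          # Wrap the displacement field in a transform and warp the moving image.
          displacement = sitk.Cast(displacement, sitk.sitkVectorFloat64)
          transform = sitk.DisplacementFieldTransform(displacement)
          warped = sitk.Resample(moving, fixed, transform,
                                 sitk.sitkLinear, 0.0, sitk.sitkFloat32)
          return transform, warped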

  6. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    NASA Astrophysics Data System (ADS)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins with pre-processing the input images by removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas as opposed to using external markers employed to aid with the alignment process. Currently, results show robust edge detection prior to registration, and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images using 26 patient datasets.

  7. [Non-rigid medical image registration based on mutual information and thin-plate spline].

    PubMed

    Cao, Guo-gang; Luo, Li-min

    2009-01-01

    In medical diagnosis and computer-assisted treatment, comparison of different images is needed to obtain precise and complete details. Image registration is the basis of such comparison, but conventional rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate spline is presented. First, the two images are registered globally based on mutual information; second, the reference image and the globally registered image are divided into blocks, which are registered block by block; the thin-plate spline transformation is then computed from the shifts of the block centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information; by obtaining the control points of the thin-plate spline transformation automatically, it reduces the complexity of control-point selection and better satisfies clinical requirements.
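
    The thin-plate-spline step, interpolating per-block shifts into a dense displacement field, can be sketched with SciPy's RBFInterpolator and its thin-plate-spline kernel; the function name, block handling, and grid layout are illustrative assumptions:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def tps_displacement_field(block_centers, block_shifts, grid_shape):
          # block_centers: (P, 2) centers of the registered blocks (row, col).
          # block_shifts:  (P, 2) shift measured for each block.
          # Returns a dense (H, W, 2) displacement field over the image grid.
          yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
          grid_pts = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
          tps = RBFInterpolator(block_centers, block_shifts,
                                kernel="thin_plate_spline")
          return tps(grid_pts).reshape(grid_shape[0], grid_shape[1], 2)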

  8. Multi-spectral brain tissue segmentation using automatically trained k-Nearest-Neighbor classification.

    PubMed

    Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J

    2007-08-01

    Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
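
    The automated-training idea, selecting reliable training voxels from a registered tissue probability atlas and then fitting a kNN classifier, can be sketched with scikit-learn; the confidence threshold, neighbor count, and function names are hypothetical choices, not the published settings:

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def train_knn_from_atlas_samples(features, atlas_prob, threshold=0.95):
          # features:   (n_voxels, n_channels) multi-spectral MR intensities
          # atlas_prob: (n_voxels, n_classes) registered atlas probabilities
          #             (e.g. CSF, GM, WM) used as automatic training labels.
          confident = atlas_prob.max(axis=1) >= threshold
          labels = atlas_prob[confident].argmax(axis=1)
          knn = KNeighborsClassifier(n_neighbors=15)
          knn.fit(features[confident], labels)
          return knn

      # Usage (toy data): knn.predict(features) then reshape to the image grid.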

  9. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches in order to build a coordinate frame from their normal vectors and intersection point. The transformation parameters between scans are calculated from these two coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. Experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, our proposed method achieves a registration error below about 2 degrees on the testing datasets and is much more efficient than classical baseline methods.
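
    The frame construction from three planar patches (origin at the planes' intersection, axes derived from the normals) can be sketched as follows; the plane parameterization n·x + d = 0, the QR-based orthonormalization, and the function names are illustrative assumptions rather than the paper's exact formulation:

      import numpy as np

      def frame_from_planes(normals, d):
          # normals: (3, 3) array, one unit plane normal per row; d: length-3
          # offsets so each plane satisfies n_i . x + d_i = 0.
          N = np.asarray(normals, dtype=float)
          origin = np.linalg.solve(N, -np.asarray(d, dtype=float))  # intersection
          q, _ = np.linalg.qr(N.T)              # orthonormalize the axes
          if np.linalg.det(q) < 0:              # keep a right-handed frame
              q[:, 2] *= -1
          return q, origin

      def transform_between_scans(frame_a, frame_b):
          # Rigid transform mapping scan A coordinates into scan B coordinates,
          # given the plane-based frames (R, origin) built in each scan.
          (Ra, ta), (Rb, tb) = frame_a, frame_b
          R = Rb @ Ra.T
          t = tb - R @ ta
          return R, t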

  10. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, which is a common procedure for pain management. Patients are always in a supine position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine is different between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm, with a standard deviation of 1.14 mm.

  11. Automatic three-dimensional registration of intravascular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter R.; Coosemans, Mark; Desmet, Walter; D'hooge, Jan

    2012-02-01

    Intravascular optical coherence tomography (IV-OCT) is a catheter-based high-resolution imaging technique able to visualize the inner wall of the coronary arteries and implanted devices in vivo with an axial resolution below 20 μm. IV-OCT is being used in several clinical trials aiming to quantify the vessel response to stent implantation over time. However, stent analysis is currently performed manually and corresponding images taken at different time points are matched through a very labor-intensive and subjective procedure. We present an automated method for the spatial registration of IV-OCT datasets. Stent struts are segmented through consecutive images and three-dimensional models of the stents are created for both datasets to be registered. The two models are initially roughly registered through an automatic initialization procedure and an iterative closest point algorithm is subsequently applied for a more precise registration. To correct for nonuniform rotational distortions (NURDs) and other potential acquisition artifacts, the registration is consecutively refined on a local level. The algorithm was first validated by using an in vitro experimental setup based on a polyvinyl-alcohol gel tubular phantom. Subsequently, an in vivo validation was obtained by exploiting stable vessel landmarks. The mean registration error in vitro was quantified to be 0.14 mm in the longitudinal axis and 7.3-deg mean rotation error. In vivo validation resulted in 0.23 mm in the longitudinal axis and 10.1-deg rotation error. These results indicate that the proposed methodology can be used for automatic registration of in vivo IV-OCT datasets. Such a tool will be indispensable for larger studies on vessel healing pathophysiology and reaction to stent implantation. As such, it will be valuable in testing the performance of new generations of intracoronary devices and new therapeutic drugs.
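
    The iterative closest point refinement stage can be illustrated with a basic point-to-point ICP over two stent-strut point sets, using a k-d tree for correspondences and the SVD for each rigid update; this is a generic sketch with assumed parameters, not the authors' NURD-corrected pipeline:

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, n_iters=50, tol=1e-6):
          # Refine the alignment of source onto target (both (N, 3) point sets)
          # after a rough initialization. Returns the accumulated (R, t) and
          # the transformed source points.
          src = source.copy()
          tree = cKDTree(target)
          R_total, t_total = np.eye(3), np.zeros(3)
          prev_err = np.inf
          for _ in range(n_iters):
              dists, idx = tree.query(src)
              matched = target[idx]
              cs, cm = src.mean(axis=0), matched.mean(axis=0)
              H = (src - cs).T @ (matched - cm)
              U, _, Vt = np.linalg.svd(H)
              R = Vt.T @ U.T
              if np.linalg.det(R) < 0:          # avoid reflections
                  Vt[-1, :] *= -1
                  R = Vt.T @ U.T
              t = cm - R @ cs
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
              err = dists.mean()
              if abs(prev_err - err) < tol:
                  break
              prev_err = err
          return R_total, t_total, src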

  12. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, for complex shaped objects with thin-wall features, such as blades, the ICP registration method can become unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.

  13. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.

    PubMed

    Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
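
    One generic way to realize masking within the similarity metric is to weight a normalized cross-correlation by a per-pixel reliability map; the sketch below illustrates that idea only and is not the specific similarity metric used in the paper:

      import numpy as np

      def masked_ncc(fixed, moving, weight):
          # Normalized cross-correlation between two 2D images, weighted by a
          # reliability mask that down-weights deformable or occluded regions.
          w = weight / (weight.sum() + 1e-12)
          fz = fixed - (fixed * w).sum()
          mz = moving - (moving * w).sum()
          num = (w * fz * mz).sum()
          den = np.sqrt((w * fz ** 2).sum() * (w * mz ** 2).sum()) + 1e-12
          return num / den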

  14. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  15. Automatic intraoperative fiducial-less patient registration using cortical surface

    NASA Astrophysics Data System (ADS)

    Fan, Xiaoyao; Roberts, David W.; Olson, Jonathan D.; Ji, Songbai; Paulsen, Keith D.

    2017-03-01

    In image-guided neurosurgery, patient registration is typically performed in the operating room (OR) at the beginning of the procedure to establish the patient-to-image transformation. The accuracy and efficiency of patient registration are crucial as they are associated with surgical outcome, workflow, and healthcare costs. In this paper, we present an automatic fiducial-less patient registration (FLR) by directly registering cortical surface acquired from intraoperative stereovision (iSV) with preoperative MR (pMR) images without incorporating any prior information, and illustrate the method using one patient example. T1-weighted MR images were acquired prior to surgery and the brain was segmented. After dural opening, an image pair of the exposed cortical surface was acquired using an intraoperative stereovision (iSV) system, and a three-dimensional (3D) texture-encoded profile of the cortical surface was reconstructed. The 3D surface was registered with pMR using a multi-start binary registration method to determine the location and orientation of the iSV patch with respect to the segmented brain. A final transformation was calculated to establish the patient-to-MR relationship. The total computational time was 30 min, and can be significantly improved through code optimization, parallel computing, and/or graphical processing unit (GPU) acceleration. The results show that the iSV texture map aligned well with pMR using the FLR transformation, while misalignment was evident with fiducial-based registration (FBR). The difference between FLR and FBR was calculated at the center of craniotomy and the resulting distance was 4.34 mm. The results presented in this paper suggest potential for clinical application in the future.

  16. 76 FR 53072 - Certification; Importation of Vehicles and Equipment Subject to Federal Safety, Bumper, and Theft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-25

    ... used as a basis for the non-automatic suspension of an RI registration, deletes redundant text from... Part 592 as a Basis for the Non-Automatic Suspension or Revocation of an RI Registration B. Deletion of... violations of the regulations in part 592 as a basis for the non-automatic suspension or revocation of an RI...

  17. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    PubMed Central

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
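
    The upsample-and-compare protocol can be illustrated with spline interpolation of different orders and a simple PSNR evaluation against the original high-resolution volume; the interpolation orders, downsampling factor, and function names here are assumptions and do not reproduce the paper's eight interpolators or four cost functions:

      import numpy as np
      from scipy import ndimage

      def psnr(reference, test):
          mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
          peak = reference.max() - reference.min()
          return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf

      def evaluate_interpolation(hr_image, factor=2, orders=(0, 1, 3, 5)):
          # Crop so the high-resolution volume divides evenly, downsample, then
          # upsample with several spline orders (0=nearest, 1=linear, 3=cubic, ...).
          shape = [(s // factor) * factor for s in hr_image.shape]
          hr = hr_image[:shape[0], :shape[1], :shape[2]].astype(float)
          lr = hr[::factor, ::factor, ::factor]
          results = {}
          for order in orders:
              up = ndimage.zoom(lr, factor, order=order)
              results[order] = psnr(hr, up)
          return results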

  18. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    PubMed

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method.

  19. A new idea for visualization of lesions distribution in mammogram based on CPD registration method.

    PubMed

    Pan, Xiaoguang; Qi, Buer; Yu, Hongfei; Wei, Haiping; Kang, Yan

    2017-07-20

    Mammography is currently the most effective technique for breast cancer detection. Lesion distribution can provide support for clinical diagnosis and epidemiological studies. We present a new idea to help radiologists study breast lesion distribution conveniently. We also developed an automatic tool based on this idea which can visualize the lesion distribution in a standard mammogram. First, a lesion database is established; then breast contours are extracted and different women's mammograms are matched to a standard mammogram; finally, the lesion distribution is shown in the standard mammogram, together with distribution statistics. The crucial step in developing this tool was matching different women's mammograms correctly. We used a hybrid breast contour extraction method combined with the coherent point drift method to match different women's mammograms. We tested our automatic tool on four mass datasets of 641 images. The distribution results shown by the tool were consistent with results counted manually from the corresponding reports and mammograms. The registration error was less than 3.3 mm in average distance. The new idea is effective, and the automatic tool can provide lesion distribution results that are consistent with radiologists' findings simply and conveniently.

  20. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    NASA Astrophysics Data System (ADS)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computation times on these models are about 288 s, 184 s and 903 s, respectively. In addition, our registration framework using ACOV descriptors and a game-theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
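
    The covariance matrix descriptor can be sketched as the covariance of point coordinates over a local neighborhood; the fixed-radius neighborhood used below is a simplifying assumption (the paper adapts the neighborhood size per point):

      import numpy as np
      from scipy.spatial import cKDTree

      def covariance_descriptors(points, radius):
          # points: (N, 3) TLS point cloud. Returns an (N, 3, 3) array with one
          # covariance descriptor per point, computed over its r-neighborhood.
          tree = cKDTree(points)
          descriptors = np.zeros((len(points), 3, 3))
          for i, p in enumerate(points):
              idx = tree.query_ball_point(p, radius)
              nbrs = points[idx]
              if len(nbrs) >= 3:
                  descriptors[i] = np.cov(nbrs.T)
          return descriptors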

  1. Deformable planning CT to cone-beam CT image registration in head-and-neck cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou Jidong; Guerrero, Mariana; Chen, Wenjuan

    2011-04-15

    Purpose: The purpose of this work was to implement and validate a deformable CT to cone-beam computed tomography (CBCT) image registration method in head-and-neck cancer to eventually facilitate automatic target delineation on CBCT. Methods: Twelve head-and-neck cancer patients underwent a planning CT and weekly CBCT during the 5-7 week treatment period. The 12 planning CT images (moving images) of these patients were registered to their weekly CBCT images (fixed images) via the symmetric force Demons algorithm and using a multiresolution scheme. Histogram matching was used to compensate for the intensity difference between the two types of images. Using nine known anatomic points as registration targets, the accuracy of the registration was evaluated using the target registration error (TRE). In addition, region-of-interest (ROI) contours drawn on the planning CT were morphed to the CBCT images and the volume overlap index (VOI) between registered contours and manually delineated contours was evaluated. Results: The mean TRE value of the nine target points was less than 3.0 mm, the slice thickness of the planning CT. Of the 369 target points evaluated for registration accuracy, the average TRE value was 2.6±0.6 mm. The mean TRE for bony tissue targets was 2.4±0.2 mm, while the mean TRE for soft tissue targets was 2.8±0.2 mm. The average VOI between the registered and manually delineated ROI contours was 76.2±4.6%, which is consistent with that reported in previous studies. Conclusions: The authors have implemented and validated a deformable image registration method to register planning CT images to weekly CBCT images in head-and-neck cancer cases. The accuracy of the TRE values suggests that they can be used as a promising tool for automatic target delineation on CBCT.

  2. Semi-automatic tracking, smoothing and segmentation of hyoid bone motion from videofluoroscopic swallowing study.

    PubMed

    Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong

    2017-01-01

    Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor intensive and time consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and smoothing and segmentation, which are necessary for functional motion analysis prior to registration, were not provided by the previous software. We developed software to track the hyoid bone motion semi-automatically. It works even in the situation where the hyoid bone is masked by the mandible and has been validated in dysphagia patients with stroke. In addition, we added the function of semi-automatic smoothing and segmentation. A total of 30 patients' data were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between the manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value<0.0001). Relative errors between automatic tracking and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the automatic indication to switch from automatic to manual mode in extreme cases, and the ability to calibrate without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which is beneficial to further statistical analysis such as functional classification and prognostication for dysphagia. Therefore, this software could provide researchers in the field of dysphagia with a convenient, useful, and all-in-one platform for analyzing the hyoid bone motion. Further development of our method to track other swallowing-related structures or objects, such as the epiglottis and the bolus, and to carry out 2D curve registration may be needed for a more comprehensive functional data analysis for dysphagia with big data.

  3. Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter.

    PubMed

    Dupont, Sara M; De Leener, Benjamin; Taso, Manuel; Le Troter, Arnaud; Nadeau, Sylvie; Stikov, Nikola; Callot, Virginie; Cohen-Adad, Julien

    2017-04-15

    The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, and helping with diagnosis/prognosis and drug development. To date, white/gray matter segmentation has mostly been done manually, which is time consuming, induces a bias related to the rater and prevents large-scale multi-center studies. Recently, a few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image; however, the registration usually does not take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion weighted images (exhibiting inverted white-to-gray matter contrast compared to T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images were in close correspondence with the manual segmentation (Dice coefficient in the white/gray matter of 0.91/0.71 respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted image) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficient in gray matter: p=9.5×10⁻⁶). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software for processing spinal cord multi-parametric MRI data. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. An Improved InSAR Image Co-Registration Method for Pairs with Relatively Big Distortions or Large Incoherent Areas

    PubMed Central

    Chen, Zhenwei; Zhang, Lei; Zhang, Guo

    2016-01-01

    Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method based on cross-correlating uniformly distributed patches takes no account of the specific geometric transformation between images or the characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfactory co-registration results for image pairs with relatively large distortions or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper which takes both the geometric features and the image content into consideration. Firstly, some geometric transformations including scale, flip, rotation, and shear between images were eliminated based on the geometrical information, and the initial co-registration polynomial was obtained. Then the registration points were automatically detected by integrating the signal-to-clutter-ratio (SCR) thresholds and the amplitude information, and a further co-registration process was performed to refine the polynomial. Several comparison experiments were carried out using two TerraSAR-X datasets from the Hong Kong airport and 21 PALSAR datasets from the Donghai Bridge. The experimental results demonstrate that the proposed method improves the accuracy and efficiency of co-registration and extends its processing ability to cases with large distortions between images or large incoherent areas within the images. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus promote the automation to a higher level. PMID:27649207
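
    The standard offset-measurement step that the paper improves upon can be sketched as FFT-based cross-correlation of a single patch pair; real InSAR processing works on complex SAR data and oversamples the correlation surface for sub-pixel offsets, which is omitted in this simplified illustration.

```python
import numpy as np

def patch_offset(master_patch, slave_patch):
    """Integer (row, col) offset at the peak of the FFT-based
    cross-correlation between two co-registration patches."""
    m = master_patch - master_patch.mean()
    s = slave_patch - slave_patch.mean()
    corr = np.fft.ifft2(np.fft.fft2(m) * np.conj(np.fft.fft2(s))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the patch size to negative shifts
    rows, cols = corr.shape
    dr = peak[0] if peak[0] <= rows // 2 else peak[0] - rows
    dc = peak[1] if peak[1] <= cols // 2 else peak[1] - cols
    return dr, dc
```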

  5. An Improved InSAR Image Co-Registration Method for Pairs with Relatively Big Distortions or Large Incoherent Areas.

    PubMed

    Chen, Zhenwei; Zhang, Lei; Zhang, Guo

    2016-09-17

    Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method based on cross-correlating uniformly distributed patches takes no account of the specific geometric transformation between images or the characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfactory co-registration results for image pairs with relatively large distortions or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper which takes both the geometric features and the image content into consideration. Firstly, some geometric transformations including scale, flip, rotation, and shear between images were eliminated based on the geometrical information, and the initial co-registration polynomial was obtained. Then the registration points were automatically detected by integrating the signal-to-clutter-ratio (SCR) thresholds and the amplitude information, and a further co-registration process was performed to refine the polynomial. Several comparison experiments were carried out using two TerraSAR-X datasets from the Hong Kong airport and 21 PALSAR datasets from the Donghai Bridge. The experimental results demonstrate that the proposed method improves the accuracy and efficiency of co-registration and extends its processing ability to cases with large distortions between images or large incoherent areas within the images. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus promote the automation to a higher level.

  6. Investigation of LANDSAT D Thematic Mapper geometric performance: Line to line and band to band registration. [Toulouse, France and Mississippi, U.S.A.

    NASA Technical Reports Server (NTRS)

    Begni, G.; BOISSIN; Desachy, M. J.; PERBOS

    1984-01-01

    The geometric accuracy of LANDSAT TM raw data of Toulouse (France), raw data of Mississippi, and preprocessed data of Mississippi was examined using a CDC computer. Analog images were restituted on the VIZIR SEP device. The methods used for line to line and band to band registration are based on automatic correlation techniques and are widely used in automated image to image registration at CNES. Causes of intraband and interband misregistration are identified and statistics are given for both line to line and band to band misregistration.

  7. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.
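
    The idea of deriving a time-varying reference sequence from an ICA of the perfusion series can be sketched with scikit-learn's FastICA. The component-selection step (keeping only physiologically meaningful components) is simplified here to keeping all extracted components, which is an assumption rather than the authors' procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_reference_series(frames, n_components=4, random_state=0):
    """Reconstruct a low-rank, time-varying reference sequence from an ICA
    of a perfusion series (frames: T x H x W array)."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    ica = FastICA(n_components=n_components, random_state=random_state,
                  max_iter=500)
    time_courses = ica.fit_transform(X)        # T x n_components time courses
    spatial_maps = ica.mixing_                 # (H*W) x n_components maps
    reference = time_courses @ spatial_maps.T + ica.mean_
    return reference.reshape(T, H, W)
```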

  8. Automatic labeling of MR brain images through extensible learning and atlas forests.

    PubMed

    Xu, Lijun; Liu, Hong; Song, Enmin; Yan, Meng; Jin, Renchao; Hung, Chih-Cheng

    2017-12-01

    The multiatlas-based method is extensively used in MR brain image segmentation because of its simplicity and robustness. This method provides excellent accuracy, although it is time consuming and limited in terms of obtaining information about new atlases. In this study, an automatic labeling of MR brain images through extensible learning and atlas forests is presented to address these limitations. We propose an extensible learning model which makes the multiatlas-based framework capable of managing datasets with numerous atlases or dynamic atlas datasets while simultaneously ensuring the accuracy of automatic labeling. Two new strategies are used to reduce the time and space complexity and improve the efficiency of the automatic labeling of brain MR images. First, atlases are encoded into atlas forests through random forest technology to reduce the time consumed for cross-registration between atlases and the target image, and a scatter spatial vector is designed to eliminate errors caused by inaccurate registration. Second, an atlas selection method based on the extensible learning model is used to select atlases for the target image without traversing the entire dataset and then obtain the accurate labeling. The labeling results of the proposed method were evaluated on three public datasets, namely, IBSR, LONI LPBA40, and ADNI. With the proposed method, the Dice coefficient metric values on the three datasets were 84.17 ± 4.61%, 83.25 ± 4.29%, and 81.88 ± 4.53%, which were about 5% higher than those of the conventional method. The efficiency of the extensible learning model was evaluated against state-of-the-art methods for labeling of MR brain images. Experimental results showed that the proposed method could achieve accurate labeling for MR brain images without traversing the entire datasets. In the proposed multiatlas-based method, extensible learning and atlas forests were applied to control the automatic labeling of brain anatomies on large or dynamic atlas datasets and obtain accurate results. © 2017 American Association of Physicists in Medicine.

  9. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest-dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.

  10. Automated registration of multispectral MR vessel wall images of the carotid artery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klooster, R. van 't; Staring, M.; Reiber, J. H. C.

    2013-12-15

    Purpose: Atherosclerosis is the primary cause of heart disease and stroke. The detailed assessment of atherosclerosis of the carotid artery requires high resolution imaging of the vessel wall using multiple MR sequences with different contrast weightings. These images allow manual or automated classification of plaque components inside the vessel wall. Automated classification requires all sequences to be in alignment, which is hampered by patient motion. In clinical practice, correction of this motion is performed manually. Previous studies applied automated image registration to correct for motion using only nondeformable transformation models and did not perform a detailed quantitative validation. The purpose of this study is to develop an automated accurate 3D registration method, and to extensively validate this method on a large set of patient data. In addition, the authors quantified patient motion during scanning to investigate the need for correction. Methods: MR imaging studies (1.5T, dedicated carotid surface coil, Philips) from 55 TIA/stroke patients with ipsilateral <70% carotid artery stenosis were randomly selected from a larger cohort. Five MR pulse sequences were acquired around the carotid bifurcation, each containing nine transverse slices: T1-weighted turbo field echo, time of flight, T2-weighted turbo spin-echo, and pre- and postcontrast T1-weighted turbo spin-echo images (T1W TSE). The images were manually segmented by delineating the lumen contour in each vessel wall sequence and were manually aligned by applying through-plane and in-plane translations to the images. To find the optimal automatic image registration method, different masks, choice of the fixed image, different types of the mutual information image similarity metric, and transformation models including 3D deformable transformation models, were evaluated. Evaluation of the automatic registration results was performed by comparing the lumen segmentations of the fixed image and moving image after registration. Results: The average required manual translation per image slice was 1.33 mm. Translations were larger the longer the patient had been inside the scanner. Manual alignment took 187.5 s per patient, resulting in a mean surface distance of 0.271 ± 0.127 mm. After minimal user interaction to generate the mask in the fixed image, the remaining sequences are automatically registered with a computation time of 52.0 s per patient. The optimal registration strategy used a circular mask with a diameter of 10 mm, a 3D B-spline transformation model with a control point spacing of 15 mm, mutual information as image similarity metric, and the precontrast T1W TSE as fixed image. A mean surface distance of 0.288 ± 0.128 mm was obtained with these settings, which is very close to the accuracy of the manual alignment procedure. The exact registration parameters and software were made publicly available. Conclusions: An automated registration method was developed and optimized, only needing two mouse clicks to mark the start and end point of the artery. Validation on a large group of patients showed that automated image registration has similar accuracy to the manual alignment procedure, substantially reduces the amount of user interaction needed, and is multiple times faster. In conclusion, the authors believe that the proposed automated method can replace the current manual procedure, thereby reducing the time to analyze the images.
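
    The registration strategy described (masked, B-spline transform with roughly 15 mm control-point spacing, mutual information) can be approximated with SimpleITK. This is a generic sketch using that toolkit, not the authors' published configuration; the grid-size computation, optimizer choice, and omission of the circular mask are simplifying assumptions.

```python
import SimpleITK as sitk

def register_bspline_mi(fixed, moving, grid_spacing_mm=15.0):
    """Deformable registration of a moving sequence to a fixed one using a
    B-spline transform and Mattes mutual information (multi-resolution)."""
    mesh_size = [max(1, int(sz * sp / grid_spacing_mm))
                 for sz, sp in zip(fixed.GetSize(), fixed.GetSpacing())]
    initial_tx = sitk.BSplineTransformInitializer(fixed, mesh_size)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(initial_tx, inPlace=False)
    reg.SetShrinkFactorsPerLevel([4, 2, 1])
    reg.SetSmoothingSigmasPerLevel([2, 1, 0])
    final_tx = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
```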

  11. Research on segmentation based on multi-atlas in brain MR image

    NASA Astrophysics Data System (ADS)

    Qian, Yuejing

    2018-03-01

    Accurate segmentation of specific tissues in brain MR images can be effectively achieved with the multi-atlas-based segmentation method, and the accuracy mainly depends on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas-based segmentation method for brain MR images. Firstly, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion step, we propose a new algorithm that detects abnormal sparse patches and simultaneously discards the corresponding abnormal sparse coefficients; the labels are then estimated from the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM) and the majority voting method (MV). Based on our experimental results, the proposed method is efficient for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
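
    The majority voting (MV) baseline that the proposed fusion is compared against is straightforward to implement; a minimal sketch over already-registered atlas label maps (array names and shapes are illustrative):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse registered atlas label maps (a list of integer arrays with the
    same shape) by per-voxel majority voting."""
    stack = np.stack(label_maps, axis=0)               # n_atlases x volume
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)        # count votes per label
    return votes.argmax(axis=0)                        # label with most votes
```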

  12. SU-C-202-03: A Tool for Automatic Calculation of Delivered Dose Variation for Off-Line Adaptive Therapy Using Cone Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Lee, S; Chen, S

    Purpose: Monitoring the delivered dose is an important task for adaptive radiotherapy (ART) and for determining the time to re-plan. A software tool which enables automatic delivered dose calculation using cone-beam CT (CBCT) has been developed and tested. Methods: The tool consists of four components: a CBCT Collecting Module (CCM), a Plan Registration Module (PRM), a Dose Calculation Module (DCM), and an Evaluation and Action Module (EAM). The CCM is triggered periodically (e.g. every day at 1:00 AM) to search for newly acquired CBCTs of patients of interest and then export the DICOM files of the images and related registrations defined in ARIA, followed by triggering the PRM. The PRM imports the DICOM images and registrations and links the CBCTs to the related treatment plan of the patient in the planning system (RayStation V4.5, RaySearch, Stockholm, Sweden). A pre-determined CT-to-density table is automatically generated for dose calculation. The current version of the DCM uses a rigid registration which regards the treatment isocenter of the CBCT as the isocenter of the treatment plan. It then starts the dose calculation automatically. The EAM evaluates the plan using pre-determined plan evaluation parameters: PTV dose-volume metrics and critical organ doses. The tool has been tested for 10 patients. Results: Automatic plans are generated and saved in the order of the treatment dates in the Adaptive Planning module of the RayStation planning system, without any manual intervention. If the CTV dose deviates by more than 3%, both e-mail and page alerts are sent to the patient's physician and physicist so that the case can be examined closely. Conclusion: The tool is capable of performing automatic dose tracking and alerting clinicians when an action is needed. It is clinically useful for off-line adaptive therapy to catch any gross error. A practical way of determining alarm levels for OARs is under development.

  13. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to resolve the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because a global optimization of the point set registration is performed, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves a tracking accuracy almost identical to the current best one in IMOD, which uses a semi-automatic scheme. Furthermore, our scheme is fully automatic, depends on fewer parameters (only a rough value of the marker diameter is required) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.
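
    The tracking step reduces marker correspondence across tilt images to a point-set registration. As a simplified illustration (not the authors' global optimization), one can alternate nearest-neighbour matching with a least-squares rigid fit, ICP-style, on 2D marker coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares 2D rotation + translation mapping src onto dst (Kabsch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp_markers(markers_a, markers_b, iters=20):
    """Align marker set A to a (possibly incomplete) set B by iterating
    nearest-neighbour correspondences and rigid fits."""
    tree = cKDTree(markers_b)
    moved = markers_a.copy()
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        _, idx = tree.query(moved)                     # current correspondences
        R, t = rigid_fit(markers_a, markers_b[idx])    # refit the rigid motion
        moved = markers_a @ R.T + t
    return R, t
```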

  14. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), the scale invariant feature transform (SIFT), coresets, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.
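
    The conventional SIFT feature extraction and matching stage that the CNN step refines can be sketched with OpenCV; this is the standard baseline pipeline, not the CNN-optimized variant described in the paper, and the ratio and RANSAC thresholds are illustrative.

```python
import cv2
import numpy as np

def sift_register(reference, sensed, ratio=0.75):
    """Match SIFT keypoints between two grayscale (uint8) images and
    estimate a homography with RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(reference, None)
    k2, d2 = sift.detectAndCompute(sensed, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```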

  15. Optimal slice thickness for cone-beam CT with on-board imager

    PubMed Central

    Seet, KYT; Barghi, A; Yartsev, S; Van Dyk, J

    2010-01-01

    Purpose: To find the optimal slice thickness (Δτ) setting for patient registration with kilovoltage cone-beam CT (kVCBCT) on the Varian On-Board Imager (OBI) system by investigating the relationship of slice thickness to automatic registration accuracy and contrast-to-noise ratio. Materials and methods: Automatic registration was performed on kVCBCT studies of the head and pelvis of a RANDO anthropomorphic phantom. Images were reconstructed with 1.0 ≤ Δτ (mm) ≤ 5.0 at 1.0 mm increments. The phantoms were offset by a known amount, and the suggested shifts were compared to the known shifts by calculating the residual error. A uniform cylindrical phantom with cylindrical inserts of various known CT numbers was scanned with kVCBCT at 1.0 ≤ Δτ (mm) ≤ 5.0 at increments of 0.5 mm. The contrast-to-noise ratios for the inserts were measured at each Δτ. Results: There was no significant difference in residual error for Δτ at or below the planning CT slice thickness used in this study. For Δτ > 3.0 mm, residual error increased for both the head and pelvis phantom studies. The contrast-to-noise ratio was proportional to slice thickness up to Δτ = 2.5 mm; beyond this point, the contrast-to-noise ratio was not affected by Δτ. Conclusion: Automatic registration accuracy is greatest when 1.0 ≤ Δτ (mm) ≤ 3.0 is used. Contrast-to-noise ratio is optimal in the 2.5 ≤ Δτ (mm) ≤ 5.0 range. Therefore, 2.5 ≤ Δτ (mm) ≤ 3.0 is recommended for kVCBCT patient registration where the planning CT slice thickness is 3.0 mm. PMID:21611047
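
    Contrast-to-noise ratio for an insert can be computed with a standard definition; the exact formula used in the study is not stated, so this common form is an assumption.

```python
import numpy as np

def contrast_to_noise(image, insert_mask, background_mask):
    """CNR = |mean(insert) - mean(background)| / std(background)."""
    insert = image[insert_mask]
    background = image[background_mask]
    return abs(insert.mean() - background.mean()) / background.std()
```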

  16. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In an attempt to overcome these limitations, we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volumes, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by 3D brain MRI registration to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual results, showing 94% overlap. The image warping technique was validated by calculating the target registration error (TRE); results showed a registration accuracy with a TRE of less than 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.
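
    A 2D thin-plate-spline warp driven by landmark pairs can be sketched with SciPy's RBFInterpolator (SciPy ≥ 1.7); the landmark optimization described in the paper is omitted, and the inverse-mapping convention used here is an assumption.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(moving, landmarks_moving, landmarks_fixed):
    """Warp a 2D image so that its landmarks map onto the fixed landmarks
    using a thin-plate-spline displacement model."""
    # Fit the inverse mapping: fixed-space (row, col) -> moving-space (row, col)
    tps = RBFInterpolator(np.asarray(landmarks_fixed, float),
                          np.asarray(landmarks_moving, float),
                          kernel='thin_plate_spline')
    rows, cols = moving.shape
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing='ij'), axis=-1).reshape(-1, 2)
    sample_coords = tps(grid.astype(float)).T          # 2 x (rows*cols)
    warped = map_coordinates(moving, sample_coords, order=1)
    return warped.reshape(rows, cols)
```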

  17. Automatic registration of ICG images using mutual information and perfusion analysis

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Jong-Mo; Lee, June-goo; Kim, Jong Hyo; Park, Kwangsuk; Yu, Hyeong-Gon; Yu, Young Suk; Chung, Hum

    2005-04-01

    Introduction: Indocyanine green fundus angiography (ICGA) of the eye is a useful method for detecting and characterizing choroidal neovascularization (CNV), which is the major cause of blindness in people over 65 years of age. To enable quantitative analysis of blood flow on ICGA, a systematic approach for automatic registration using mutual information, together with a quantitative analysis, was developed. Methods: Intermittent sequential images of indocyanine green angiography were acquired by Heidelberg retinal angiography, which uses a laser scanning system for image acquisition. Misalignment of each image generated by minute eye movements of the patients was corrected by the mutual information method, because the distribution of the contrast media in the image changes throughout the time sequence. Several regions of interest (ROIs) were selected by a physician, and the intensities of the selected regions were plotted over the time sequence. Results: The registration of ICGA time-sequential images requires not only a translational transform but also a rotational transform. Signal intensities showed variation following a gamma-variate function depending on the ROI, and capillary vessels showed greater variance of signal intensity than major vessels. CNV showed intermediate variance of signal intensity and prolonged transit time. Conclusion: The resulting registered images can be used not only for quantitative analysis, but also for perfusion analysis. Various investigative approaches to CNV using this method will be helpful in the characterization of the lesion and follow-up.
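
    The mutual information criterion used for the frame-to-frame alignment can be estimated from a joint histogram; a compact sketch (the bin count is illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```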

  18. Correlation and registration of ERTS multispectral imagery. [by a digital processing technique

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O.; Henrikson, P. J.

    1974-01-01

    Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.

  19. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  20. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method of inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and the tibia (shin bone), differs slightly between the X-ray film and the 3D CT images, and care must be taken in how the two different images are used, since the X-ray film is captured in the standing position while the 3D CT is captured in the decubitus (face-up) position. Although conventional registration mainly uses a cross-correlation function between the two images and relies on optimization techniques, it requires enormous calculation time and is difficult to use in interactive operations. In order to solve these problems, we calculate the center line (bone axis) of the femur and the tibia automatically and use them as initial positions for the registration. We evaluate our registration method using three patients' image data, and we compare our proposed method with a conventional registration that uses the down-hill simplex algorithm. The down-hill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method is more effective than the down-hill simplex method in terms of computation time and stable convergence. We have developed an implant simulation system on a personal computer in order to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.
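
    The down-hill simplex baseline corresponds to a derivative-free search over the rigid parameters; a minimal 2D sketch with SciPy's Nelder-Mead implementation (the cost function, parameterization, and similarity measure here are illustrative, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import affine_transform

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def simplex_rigid_register(fixed, moving, x0=(0.0, 0.0, 0.0)):
    """Estimate (angle, ty, tx) aligning `moving` to `fixed` by maximizing
    NCC with the down-hill simplex (Nelder-Mead) method."""
    center = (np.array(fixed.shape) - 1) / 2.0

    def cost(params):
        angle, ty, tx = params
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])
        # Rotation about the image center followed by a translation
        offset = center - R @ center + np.array([ty, tx])
        resampled = affine_transform(moving, R, offset=offset, order=1)
        return -ncc(fixed, resampled)

    return minimize(cost, x0, method='Nelder-Mead').x
```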

  1. An Integrated Approach to Segmentation and Nonrigid Registration for Application in Image-Guided Pelvic Radiotherapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Papademetris, Xenophon; Knisely, Jonathan P.; Milosevic, Michael F.; Chen, Zhe; Jaffray, David A.; Staib, Lawrence H.; Duncan, James S.

    2011-01-01

    External beam radiotherapy (EBRT) has become the preferred option for non-surgical treatment of prostate cancer and cervix cancer. In order to deliver higher doses to cancerous regions within these pelvic structures (i.e. prostate or cervix) while maintaining or lowering the doses to surrounding non-cancerous regions, it is critical to account for setup variation, organ motion, anatomical changes due to treatment and intra-fraction motion. In previous work, manual segmentation of the soft tissues was performed and the images were then registered based on that manual segmentation. In this paper, we present an integrated automatic approach to multiple organ segmentation and nonrigid constrained registration, which can achieve these two aims simultaneously. The segmentation and registration steps are both formulated using a Bayesian framework, and they constrain each other using an iterative conditional model strategy. We also propose a new strategy to assess cumulative actual dose for this novel integrated algorithm, in order to determine both whether the intended treatment is being delivered and, potentially, whether or not a plan should be adjusted for future treatment fractions. Quantitative results show that the automatic segmentation produced results with an accuracy comparable to manual segmentation, while the registration part significantly outperforms both rigid and non-rigid registration. Clinical application and evaluation of dose delivery show the superiority of the proposed method over the procedure currently used in clinical practice, i.e. manual segmentation followed by rigid registration. PMID:21646038

  2. MR-CT registration using a Ni-Ti prostate stent in image-guided radiotherapy of prostate cancer.

    PubMed

    Korsager, Anne Sofie; Carl, Jesper; Østergaard, Lasse Riis

    2013-06-01

    In image-guided radiotherapy of prostate cancer, defining the clinical target volume often relies on magnetic resonance (MR) imaging. The task of transferring the clinical target volume from MR to the standard planning computed tomography (CT) is not trivial due to prostate mobility. In this paper, an automatic local registration approach is proposed based on a newly developed removable Ni-Ti prostate stent. The registration uses mutual information as the voxel similarity measure in a two-step approach, where the pelvic bones are used to establish an initial registration for the subsequent local registration. In a phantom study, the accuracy was measured to be 0.97 mm, and visual inspection showed accurate registration of all 30 data sets. The consistency of the registration was examined by applying translation and rotation displacements, yielding a rotation error of 0.41° ± 0.45° and a translation error of 1.67 ± 2.24 mm. This study demonstrated the feasibility of an automatic local MR-CT registration using the prostate stent.

  3. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are first extracted from both the reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Furthermore, no gray-scale-related information is required to calculate the descriptor, so it is also robust against the gray-scale discrepancy between multi-sensor image pairs. Experimental results on real image pairs are presented to show the merits of the proposed registration method.
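
    A classical algebraic projective invariant of five coplanar points, of the kind such descriptors build on, is a balanced ratio of 3×3 determinants of homogeneous coordinates; a small sketch (this is the textbook invariant, not necessarily the exact descriptor of the paper):

```python
import numpy as np

def five_point_invariant(points):
    """Projective invariant of five coplanar 2D points (rows of `points`).
    Each point index appears equally often in the numerator and denominator,
    so per-point scale factors and det(H) cancel under any homography."""
    p = np.hstack([np.asarray(points, dtype=float), np.ones((5, 1))])

    def m(i, j, k):
        return np.linalg.det(p[[i, j, k]])

    return (m(0, 1, 3) * m(0, 2, 4)) / (m(0, 1, 4) * m(0, 2, 3))
```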

  4. Automatic Localization of Vertebral Levels in X-Ray Fluoroscopy Using 3D-2D Registration: A Tool to Reduce Wrong-Site Surgery

    PubMed Central

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-01-01

    Surgical targeting of the incorrect vertebral level (“wrong-level” surgery) is among the more common wrong-site surgical errors, attributed primarily to a lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error, and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (viz., CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved 10 patient CT datasets from which 50,000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true centers of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (viz., mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50,000 trials) and a computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene. PMID:22864366

  5. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high-frequency stimulation of a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for a positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to that of experts. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
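
    The Normalized Gradient Fields distance measure rewards locally parallel (or anti-parallel) image gradients; a voxel-wise NumPy sketch of the standard formulation (the edge parameter value is illustrative):

```python
import numpy as np

def ngf_distance(fixed, moving, eps=10.0):
    """Normalized Gradient Fields distance:
    mean over voxels of 1 - <∇F, ∇M>² / (|∇F|_eps² |∇M|_eps²).
    Lower values indicate better gradient alignment."""
    gf = np.stack(np.gradient(fixed.astype(float)))
    gm = np.stack(np.gradient(moving.astype(float)))
    dot = (gf * gm).sum(axis=0)
    norm_f = np.sqrt((gf ** 2).sum(axis=0) + eps ** 2)
    norm_m = np.sqrt((gm ** 2).sum(axis=0) + eps ** 2)
    return float(np.mean(1.0 - (dot / (norm_f * norm_m)) ** 2))
```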

  6. Dealing with difficult deformations: construction of a knowledge-based deformation atlas

    NASA Astrophysics Data System (ADS)

    Thorup, S. S.; Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, H.; Paulsen, R. R.; Kane, A. A.; Govier, D.; Lo, L.-J.; Kreiborg, S.; Larsen, R.

    2010-03-01

    Twenty-three Taiwanese infants with unilateral cleft lip and palate (UCLP) were CT-scanned before lip repair at the age of 3 months, and again after lip repair at the age of 12 months. In order to evaluate the surgical result, detailed point correspondence between pre- and post-surgical images was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation from before to after lip closure in infants with UCLP. The purpose of the present work was to show that use of prior information about typical deformations due to lip closure, through the construction of a knowledge-based atlas of deformations, could overcome the problem. Initially, mean volumes (atlases) for the pre- and post-surgical populations, respectively, were automatically constructed by non-rigid registration. An expert placed corresponding landmarks in the cleft area in the two atlases; this provided prior information used to build a knowledge-based deformation atlas. We model the change from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery.

  7. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real-time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies, which creates the need to develop a framework that is robust to various uncertainties and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel and compounds the results of these methods to yield a final, low-noise probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An unoptimized, fully automatic MATLAB implementation runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean value of 0.38, indicating clear identification of surface points on average. The segmentation was also sufficiently crisp, with a full width at half maximum (FWHM) value of 1.51 voxels.

  8. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Registration-based segmentation with articulated model from multipostural magnetic resonance images for hand bone motion animation.

    PubMed

    Chen, Hsin-Chen; Jou, I-Ming; Wang, Chien-Kuo; Su, Fong-Chin; Sun, Yung-Nien

    2010-06-01

    Quantitative measurements of the hand bones, including volume, surface, orientation, and position, are essential in investigating hand kinematics. Within the measurement stage, bone segmentation is the most important step because of its direct influence on measurement accuracy. Since hand bones are small and tubular in shape, magnetic resonance (MR) imaging is prone to artifacts such as nonuniform intensity and fuzzy boundaries. Thus, greater detail is required to improve segmentation accuracy. The authors therefore propose a novel registration-based method using an articulated hand model to segment hand bones from multipostural MR images. The proposed method consists of a model construction stage and a registration-based segmentation stage. Given a reference postural image, the first stage requires construction of a drivable reference model characterized by hand bone shapes, intensity patterns, and an articulated joint mechanism. Applying the reference model in the second stage, the authors first design a model-based registration based on intensity distribution similarity, MR bone intensity properties, and constraints of model geometry to align the reference model with the target bone regions of the given postural image. The authors then refine the resulting surface to improve the superimposition between the registered reference model and the target bone boundaries. For each subject, given a reference postural image, the proposed method can automatically segment all hand bones from all other postural images. Compared to the ground truth from two experts, the resulting surface image had an average margin of error within only 1 mm. In addition, the proposed method showed good agreement on the overlap of bone segmentations by the Dice similarity coefficient and also demonstrated better segmentation results than conventional methods. The proposed registration-based segmentation method can successfully overcome drawbacks caused by inherent artifacts in MR images and obtain more accurate segmentation results automatically. Moreover, realistic hand motion animations can be generated based on the bone segmentation results. The proposed method is helpful for understanding hand bone geometries in dynamic postures, which can be used in simulating 3D hand motion from multipostural MR images.

  10. Error Estimation Techniques to Refine Overlapping Aerial Image Mosaic Processes via Detected Parameters

    ERIC Educational Resources Information Center

    Bond, William Glenn

    2012-01-01

    In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

  11. Registration and Fusion of Multiple Source Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline

    2004-01-01

    Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.

  12. Automatic Generation of Boundary Conditions Using Demons Nonrigid Image Registration for Use in 3-D Modality-Independent Elastography

    PubMed Central

    Ou, Jao J.; Ong, Rowena E.; Miga, Michael I.

    2013-01-01

    Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method. PMID:21690002
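
    The demons diffusion-based registration used here to generate boundary conditions is available in common toolkits; a minimal SimpleITK sketch (the histogram matching preprocessing and parameter values are illustrative, not the study's implementation):

```python
import SimpleITK as sitk

def demons_register(fixed, moving, iterations=100, sigma=1.5):
    """Demons diffusion registration; returns the displacement-field
    transform and the warped moving image."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)

    # Match intensity distributions before the intensity-driven demons update
    matcher = sitk.HistogramMatchingImageFilter()
    matcher.SetNumberOfHistogramLevels(512)
    matcher.SetNumberOfMatchPoints(7)
    matcher.ThresholdAtMeanIntensityOn()
    moving = matcher.Execute(moving, fixed)

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(sigma)        # smoothing of the update field
    field = demons.Execute(fixed, moving)

    transform = sitk.DisplacementFieldTransform(field)
    warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    return transform, warped
```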

  13. Automatic generation of boundary conditions using demons nonrigid image registration for use in 3-D modality-independent elastography.

    PubMed

    Pheiffer, Thomas S; Ou, Jao J; Ong, Rowena E; Miga, Michael I

    2011-09-01

    Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The use of this method was successfully performed in silico using magnetic resonance and X-ray-computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared to the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.

  14. SU-D-BRF-03: Improvement of TomoTherapy Megavoltage Topogram Image Quality for Automatic Registration During Patient Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholey, J; White, B; Qi, S

    2014-06-01

    Purpose: To improve the quality of mega-voltage orthogonal scout images (MV topograms) for a fast and low-dose alternative technique for patient localization on the TomoTherapy HiART system. Methods: Digitally reconstructed radiographs (DRR) of anthropomorphic head and pelvis phantoms were synthesized from kVCT under TomoTherapy geometry (kV-DRR). Lateral (LAT) and anterior-posterior (AP) aligned topograms were acquired with couch speeds of 1 cm/s, 2 cm/s, and 3 cm/s. The phantoms were rigidly translated in all spatial directions with known offsets in increments of 5 mm, 10 mm, and 15 mm to simulate daily positioning errors. The contrast of the MV topograms was automatically adjusted based on the image intensity characteristics. A low-pass fast Fourier transform filter removed high-frequency noise, and a Wiener filter reduced stochastic noise caused by scattered radiation to the detector array. An intensity-based image registration algorithm was used to register the MV topograms to the corresponding kV-DRR by minimizing the mean square error between corresponding pixel intensities. The registration accuracy was assessed by comparing the normalized cross correlation coefficients (NCC) between the registered topograms and the kV-DRR. The applied phantom offsets were determined by registering the MV topograms to the kV-DRR and recovering the spatial translation of the MV topograms. Results: The automatic registration technique provided millimeter accuracy and was robust for the deformed MV topograms at the three tested couch speeds. The lowest average NCC for all AP and LAT MV topograms was 0.96 for the head phantom and 0.93 for the pelvis phantom. The offsets were recovered to within 1.6 mm for the processed and 6.5 mm for the original MV topograms. Conclusion: Automatic registration of the processed MV topograms to a corresponding kV-DRR recovered simulated daily positioning errors that were accurate to the order of a millimeter. These results suggest the clinical use of MV topograms as a promising alternative to MVCT patient alignment.
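
    The topogram preprocessing chain (automatic contrast adjustment, FFT low-pass filtering, Wiener filtering) can be approximated with standard NumPy/SciPy operations; the percentile stretch, cutoff frequency, and window size below are illustrative, not the study's settings.

```python
import numpy as np
from scipy.signal import wiener

def preprocess_topogram(img, cutoff=0.25, wiener_window=5):
    """Contrast-stretch, FFT low-pass, and Wiener-filter an MV topogram."""
    # Contrast adjustment: stretch the 1st-99th percentile range to [0, 1]
    lo, hi = np.percentile(img, [1, 99])
    img = np.clip((img - lo) / (hi - lo + 1e-12), 0, 1)

    # Low-pass filter in the Fourier domain (circular cutoff, cycles/pixel)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff
    img = np.fft.ifft2(np.fft.fft2(img) * mask).real

    # Wiener filter to suppress stochastic (scatter-related) noise
    return wiener(img, mysize=wiener_window)
```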

  15. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

    A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize a subsequent PABIC algorithm. Finally, we re-apply the FC technique on the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.

  16. Automatic and hierarchical segmentation of the human skeleton in CT images.

    PubMed

    Fu, Yabo; Liu, Shi; Li, Harold; Yang, Deshan

    2017-04-07

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic vertebrae, and 1.45 mm for pelvis bones.

  17. Automatic and hierarchical segmentation of the human skeleton in CT images

    NASA Astrophysics Data System (ADS)

    Fu, Yabo; Liu, Shi; Li, H. Harold; Yang, Deshan

    2017-04-01

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic vertebrae, and 1.45 mm for pelvis bones.

  18. Automated Bone Segmentation and Surface Evaluation of a Small Animal Model of Post-Traumatic Osteoarthritis.

    PubMed

    Ramme, Austin J; Voss, Kevin; Lesporis, Jurinus; Lendhey, Matin S; Coughlin, Thomas R; Strauss, Eric J; Kennedy, Oran D

    2017-05-01

    MicroCT imaging allows for noninvasive microstructural evaluation of mineralized bone tissue, and is essential in studies of small animal models of bone and joint diseases. Automatic segmentation and evaluation of articular surfaces is challenging. Here, we present a novel method to create knee joint surface models for the evaluation of PTOA-related joint changes in the rat, using an atlas-based diffeomorphic registration to automatically isolate bone from surrounding tissues. As validation, two independent raters manually segmented the datasets, and the resulting segmentations were compared to our novel automatic segmentation process. Data were evaluated using label map volumes, overlap metrics, Euclidean distance mapping, and a time trial. Intraclass correlation coefficients were calculated to compare methods, and were greater than 0.90. Total overlap, union overlap, and mean overlap were calculated to compare the automatic and manual methods and ranged from 0.85 to 0.99. A Euclidean distance comparison was also performed and showed no measurable difference between manual and automatic segmentations. Furthermore, our new method was 18 times faster than manual segmentation. Overall, this study describes a reliable, accurate, and automatic segmentation method for mineralized knee structures from microCT images, and will allow for efficient assessment of bony changes in small animal models of PTOA.

  19. Simultaneous Nonrigid Registration, Segmentation, and Tumor Detection in MRI Guided Cervical Cancer Radiation Therapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various existing algorithms on a set of 36 MR datasets from six patients, each patient having six T2-weighted cervical MR images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. PMID:22328178

  20. Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces

    NASA Astrophysics Data System (ADS)

    Theiler, P. W.; Schindler, K.

    2012-07-01

    Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with existing algorithms for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge number of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarsely register TLS point clouds without the need for artificial targets.
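
    A minimal sketch of the two geometric ingredients described above: a least-squares plane fit (here via SVD of the centred points) and a virtual tie point obtained by intersecting three fitted planes. The function names and the SVD-based fit are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through a 3D point set: returns (unit normal n, offset d) with n.x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                        # direction of least variance = plane normal
    return n, float(n @ centroid)

def virtual_tie_point(plane_a, plane_b, plane_c) -> np.ndarray:
    """Intersect three non-parallel planes (n, d) to obtain one virtual tie point."""
    normals = np.stack([plane_a[0], plane_b[0], plane_c[0]])
    offsets = np.array([plane_a[1], plane_b[1], plane_c[1]])
    return np.linalg.solve(normals, offsets)
```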

  1. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65 ± 7.89% to 0.87 ± 3.88% between registered data and the manual gold standard. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.
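
    A minimal sketch of the ICA step, assuming scikit-learn's FastICA and a synthetic frame stack as a stand-in for a perfusion series; the number of components and the reconstruction-as-reference idea are illustrative simplifications of the published method.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for a perfusion series: (n_frames, height, width).
frames = np.random.rand(100, 64, 64)
X = frames.reshape(frames.shape[0], -1)            # one row per time point

ica = FastICA(n_components=4, random_state=0)
time_courses = ica.fit_transform(X)                # (n_frames, 4): time-intensity behaviour
spatial_maps = ica.mixing_.T.reshape(4, 64, 64)    # spatial patterns of the components

# A time-varying reference series mimicking the intensity changes in the data
# can be obtained by reconstructing the frames from the extracted components.
reference = ica.inverse_transform(time_courses).reshape(frames.shape)
```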

  2. Computer-aided endovascular aortic repair using fully automated two- and three-dimensional fusion imaging.

    PubMed

    Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin

    2016-12-01

    To assess the usability of a fully automated fusion imaging engine prototype, matching preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of automatic segmentation or of the automatic registration, as well as accuracy of the mesh model, measured as deviations from intraoperative angiography in millimeters, where applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of automatic segmentation was 44% (n = 11). Freedom from failure of the automatic registration was 76% (n = 19), and the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, although in several cases fully automated data processing was not possible and manual intervention was required. The accuracy of the automatic registration yielded excellent results and promises a useful and simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  3. Implementation and evaluation of a new workflow for registration and segmentation of pulmonary MRI data for regional lung perfusion assessment.

    PubMed

    Böttger, T; Grunewald, K; Schöbinger, M; Fink, C; Risse, F; Kauczor, H U; Meinzer, H P; Wolf, Ivo

    2007-03-07

    Recently it has been shown that regional lung perfusion can be assessed using time-resolved contrast-enhanced magnetic resonance (MR) imaging. Quantification of the perfusion images has been attempted based on the definition of small regions of interest (ROIs). Use of complete lung segmentations instead of ROIs could possibly increase quantification accuracy. Due to the low signal-to-noise ratio, automatic segmentation algorithms cannot be applied. On the other hand, manual segmentation of the lung tissue is very time-consuming and can become inaccurate, as the borders of the lung to adjacent tissues are not always clearly visible. We propose a new workflow for semi-automatic segmentation of the lung from additionally acquired morphological HASTE MR images. First the lung is delineated semi-automatically in the HASTE image. Next the HASTE image is automatically registered with the perfusion images. Finally, the transformation resulting from the registration is used to align the lung segmentation from the morphological dataset with the perfusion images. We evaluated rigid, affine and locally elastic transformations, suitable optimizers and different implementations of mutual information (MI) metrics to determine the best possible registration algorithm. We identified the shortcomings of the registration procedure and the conditions under which automatic registration will succeed or fail. Segmentation results were evaluated using overlap and distance measures. Integration of the new workflow reduces the time needed for post-processing of the data, simplifies the perfusion quantification and reduces interobserver variability in the segmentation process. In addition, the matched morphological dataset can be used to identify morphologic changes as the source of the perfusion abnormalities.

  4. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptor of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
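
    A minimal sketch of the PCA-based dimensionality features computed per voxel cell; the linearity/planarity/scattering definitions follow the common eigenvalue-based formulation and are an assumption, since the abstract does not spell out the exact feature set.

```python
import numpy as np

def dimensionality_features(points_in_voxel: np.ndarray):
    """Eigenvalue-based shape of a local point set (rows are 3D points).

    Returns (linearity, planarity, scattering) from the sorted eigenvalues
    l1 >= l2 >= l3 of the local covariance matrix.
    """
    centred = points_in_voxel - points_in_voxel.mean(axis=0)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(centred.T)))[::-1]
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```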

  5. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data that often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
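
    A minimal marker-controlled watershed sketch using scikit-image, as a stand-in for the segmentation step mentioned above; the threshold-based seeding and the Sobel gradient relief are assumptions made for illustration and not the authors' exact pipeline.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_feature_mask(image: np.ndarray, low: float, high: float) -> np.ndarray:
    """Flood the gradient relief from two classes of seeds and keep the feature class."""
    gradient = sobel(image)
    markers = np.zeros_like(image, dtype=int)
    markers[image < low] = 1     # background seeds
    markers[image > high] = 2    # candidate-feature seeds
    labels = watershed(gradient, markers)
    return labels == 2
```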

  6. Automatic segmentation of relevant structures in DCE MR mammograms

    NASA Astrophysics Data System (ADS)

    Koenig, Matthias; Laue, Hendrik; Boehler, Tobias; Peitgen, Heinz-Otto

    2007-03-01

    The automatic segmentation of relevant structures such as the skin edge, chest wall, or nipple in dynamic contrast-enhanced MR imaging (DCE MRI) of the breast provides additional information for computer-aided diagnosis (CAD) systems. Automatic reporting using BI-RADS criteria benefits from information about the location of those structures. Lesion positions can be automatically described relative to such reference structures for reporting purposes. Furthermore, this information can assist data reduction for computationally expensive preprocessing such as registration, or for visualization of only the segments of current interest. In this paper, a novel automatic method is presented for determining the air-breast boundary (skin edge), approximating the chest wall, and locating the nipples. The method consists of several steps which are built on top of each other. Automatic threshold computation leads to the air-breast boundary, which is then analyzed to determine the location of the nipple. Finally, the results of both steps serve as the starting point for the approximation of the chest wall. The proposed process was evaluated on a large data set of DCE MRI recorded with T1 sequences and yielded reasonable results in all cases.

  7. Motion tracking in the liver: Validation of a method based on 4D ultrasound using a nonrigid registration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, Sinara, E-mail: sinara.vijayan@ntnu.no; Klein, Stefan; Hofstad, Erlend Fagertun

    Purpose: Treatments like radiotherapy and focused ultrasound in the abdomen require accurate motion tracking, in order to optimize dosage delivery to the target and minimize damage to critical structures and healthy tissues around the target. 4D ultrasound is a promising modality for motion tracking during such treatments. In this study, the authors evaluate the accuracy of motion tracking in the liver based on deformable registration of 4D ultrasound images. Methods: The offline analysis was performed using a nonrigid registration algorithm that was specifically designed for motion estimation from dynamic imaging data. The method registers the entire 4D image data sequence in a groupwise optimization fashion, thus avoiding a bias toward a specifically chosen reference time point. Three healthy volunteers were scanned over several breathing cycles (12 s) from three different positions and angles on the abdomen; a total of nine 4D scans for the three volunteers. Well-defined anatomic landmarks were manually annotated in all 96 time frames for assessment of the automatic algorithm. The error of the automatic motion estimation method was compared with interobserver variability. The authors also performed experiments to investigate the influence of parameters defining the deformation field flexibility and evaluated how well the method performed with a lower temporal resolution in order to establish the minimum frame rate required for accurate motion estimation. Results: The registration method estimated liver motion with an error of 1 mm (75th percentile over all datasets), which was lower than the interobserver variability of 1.4 mm. The results were only slightly dependent on the degrees of freedom of the deformation model. The registration error increased to 2.8 mm with an eight times lower temporal resolution. Conclusions: The authors conclude that the methodology was able to accurately track the motion of the liver in the 4D ultrasound data. The authors believe that the method has potential in interventions on moving abdominal organs such as MR- or ultrasound-guided focused ultrasound therapy and radiotherapy, provided the method can be made to run in real time. The data and the annotations used for this study are made publicly available for those who would like to test other methods on 4D liver ultrasound data.

  8. Stopping Criteria for Log-Domain Diffeomorphic Demons Registration: An Experimental Survey for Radiotherapy Application.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2016-02-01

    A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performances of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ∊, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ∊, but at giving insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical regions. © The Author(s) 2014.
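
    The "minimum over the last three iterations versus the minimum over the three before that" strategy described above can be expressed in a few lines; the sketch below assumes a metric that is being minimized and a user-chosen threshold ∊, and is only an illustration of the comparison, not the authors' code.

```python
def should_stop(metric_history, epsilon):
    """Stop when the best value over the most recent three iterations no longer
    improves on the best value over the fourth to sixth most recent iterations
    by more than the user-defined threshold epsilon."""
    if len(metric_history) < 6:
        return False
    recent_best = min(metric_history[-3:])
    previous_best = min(metric_history[-6:-3])
    return (previous_best - recent_best) < epsilon
```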

  9. 2D to 3D fusion of echocardiography and cardiac CT for TAVR and TAVI image guidance.

    PubMed

    Khalil, Azira; Faisal, Amir; Lai, Khin Wee; Ng, Siew Cheok; Liew, Yih Miin

    2017-08-01

    This study proposed a registration framework to fuse 2D echocardiography images of the aortic valve with a preoperative cardiac CT volume. The registration facilitates the fusion of CT and echocardiography to aid the diagnosis of aortic valve diseases and provide surgical guidance during transcatheter aortic valve replacement and implantation. The image registration framework consists of two major steps: temporal synchronization and spatial registration. Temporal synchronization allows time stamping of the echocardiography time series data to identify frames that are at a similar cardiac phase as the CT volume. Spatial registration is an intensity-based normalized mutual information method applied with a pattern search optimization algorithm to produce an interpolated cardiac CT image that matches the echocardiography image. Our proposed registration method has been applied on the short-axis "Mercedes Benz" sign view of the aortic valve and the long-axis parasternal view of echocardiography images from ten patients. The accuracy of our fully automated registration method was 0.81 ± 0.08 (Dice coefficient) and 1.30 ± 0.13 mm (Hausdorff distance) for short-axis aortic valve view registration, whereas for long-axis parasternal view registration it was 0.79 ± 0.02 and 1.19 ± 0.11 mm, respectively. This accuracy is comparable to gold-standard manual registration by an expert. There was no significant difference in aortic annulus diameter measurement between the automatically and manually registered CT images. Without the use of optical tracking, we have shown the applicability of this technique for effective fusion of echocardiography with a preoperative CT volume to potentially facilitate catheter-based surgery.
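
    For reference, the symmetric Hausdorff distance used above can be computed directly with SciPy; the sketch assumes the two contours are given as N×2 (or N×3) point arrays.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets (e.g., valve contours)."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])
```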

  10. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
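
    A minimal point-to-point ICP refinement such as the one used in step (3) can be written with NumPy and SciPy alone; the sketch below (nearest-neighbour correspondences plus a Kabsch/SVD rigid update) is a generic illustration under simplified assumptions, not the 3D-SIFT/FPFH/SAC-IA pipeline itself.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 50, tol: float = 1e-6):
    """Refine a coarse alignment of two Nx3 point sets; returns (R, t, mean_error)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)                    # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                       # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = float(dist.mean())
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```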

  11. Influence of image registration on apparent diffusion coefficient images computed from free-breathing diffusion MR images of the abdomen.

    PubMed

    Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H M; Poot, Dirk H J; Niessen, Wiro J; Klein, Stefan

    2015-08-01

    To evaluate the influence of image registration on apparent diffusion coefficient (ADC) images obtained from abdominal free-breathing diffusion-weighted MR images (DW-MRIs). A comprehensive pipeline based on automatic three-dimensional nonrigid image registrations is developed to compensate for misalignments in DW-MRI datasets obtained from five healthy subjects scanned twice. Motion is corrected both within each image and between images in a time series. ADC distributions are compared with and without registration in two abdominal volumes of interest (VOIs). The effects of interpolations and Gaussian blurring as alternative strategies to reduce motion artifacts are also investigated. Among the four considered scenarios (no processing, interpolation, blurring and registration), registration yields the best alignment scores. Median ADCs vary according to the chosen scenario: for the considered datasets, ADCs obtained without processing are 30% higher than with registration. Registration improves voxelwise reproducibility at least by a factor of 2 and decreases uncertainty (Fréchet-Cramér-Rao lower bound). Registration provides similar improvements in reproducibility and uncertainty as acquiring four times more data. Patient motion during image acquisition leads to misaligned DW-MRIs and inaccurate ADCs, which can be addressed using automatic registration. © 2014 Wiley Periodicals, Inc.
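
    For context, the voxel-wise ADC that is corrupted by misalignment follows the mono-exponential model S_b = S_0 exp(-b ADC); a minimal per-voxel estimate from a single b-value pair is sketched below (real pipelines typically fit several b-values, so this is only an illustration).

```python
import numpy as np

def adc_map(s0: np.ndarray, sb: np.ndarray, b_value: float) -> np.ndarray:
    """Per-voxel ADC (mm^2/s) from the b=0 image and one diffusion-weighted image."""
    eps = 1e-6
    return np.log(np.clip(s0, eps, None) / np.clip(sb, eps, None)) / b_value
```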

  12. MR to CT registration of brains using image synthesis

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L.; Lee, Junghoon

    2014-03-01

    Computed tomography (CT) is the preferred imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan, as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration that synthesizes a CT image from the MRI, using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing a 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.

  13. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
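
    A minimal band-to-band alignment in the spirit described above (SIFT keypoints, Lowe's ratio test, RANSAC homography) can be written with OpenCV; the sketch assumes single-channel uint8 band images and is not the authors' implementation.

```python
import cv2
import numpy as np

def register_band(moving: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Warp one spectral band onto a reference band via a RANSAC-estimated homography."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(moving, None)
    k2, d2 = sift.detectAndCompute(reference, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, (reference.shape[1], reference.shape[0]))
```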

  14. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

    We present a system for registering the coordinate frame of an endoscope to pre- or intraoperatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.

  15. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images have already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data that often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.

  16. Enhanced Virtual Presence for Immersive Visualization of Complex Situations for Mission Rehearsal

    DTIC Science & Technology

    1997-06-01

    taken. We propose to join both these technologies together in a registration device. The registration device would be small and portable and easily...registering the panning of the camera (or other sensing device) and also stitch together the shots to automatically generate panoramic files necessary to...database and as the base information changes each of the linked drawings is automatically updated. Filename Format: A specific naming convention should be

  17. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    PubMed

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious and time-consuming, and suffers from high user variability. This requires the development of new automated segmentation schemes for 4-D MRI data segmentation. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to be an accurate way to segment structures in 4-D data series. However, directly applying registration-based segmentation to segment 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme that is based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving comparably accurate segmentations relative to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of the lung and tumor motion and, potentially, the tracking of the tumor during radiation delivery.

  18. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of doing peripheral, digital subtraction angiography (DSA) with a single contrast injection using a moving gantry system. Given repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) both give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piece-wise, 8 parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
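
    The parabolic prediction of the registration minimum mentioned above reduces to a quadratic fit of the similarity measure sampled at a few shifts; a minimal sketch (illustrative only, not the original MicroVAX implementation) follows.

```python
import numpy as np

def predicted_minimum(shifts, similarity) -> float:
    """Fit y = a*x^2 + b*x + c to (shift, similarity) samples and return the
    shift at the vertex, used as the next trial shift of the iterative search."""
    a, b, _ = np.polyfit(np.asarray(shifts, float), np.asarray(similarity, float), 2)
    return float(-b / (2.0 * a))
```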

  19. SU-E-J-248: Comparative Study of Two Image Registration for Image-Guided Radiation Therapy in Esophageal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, K; Wang, J; Liu, D

    2014-06-01

    Purpose: Image-guided radiation therapy (IGRT) is one of the major treatment techniques for esophageal cancer. Gray value registration and bone registration are two kinds of image registration; the purpose of this work is to determine which one is more suitable for esophageal cancer patients. Methods: Twenty-three esophageal cancer patients were treated on an Elekta Synergy; CBCT images were acquired and automatically registered to the planning kilovoltage CT scans using either gray value or bone registration. The setup errors were measured along the X, Y and Z axes, respectively. The two sets of setup errors were analyzed with a paired (matched) T test. Results: Four hundred and five groups of CBCT images were available, and the systematic and random setup errors (cm) in the X, Y, Z directions were 0.35, 0.63, 0.29 and 0.31, 0.53, 0.21 with gray value registration, and 0.37, 0.64, 0.26 and 0.32, 0.55, 0.20 with bone registration, respectively. Comparing bone registration with gray value registration, the setup errors along the X and Z axes show significant differences. Along the Y axis, the T value is 0.256 (P value > 0.05); along the X axis, the T value is 5.287 (P value < 0.05); along the Z axis, the T value is −5.138 (P value < 0.05). Conclusion: Gray value registration is recommended in image-guided radiotherapy for esophageal cancer and other thoracic tumors. Manual registration can be applied when necessary. Bone registration is more suitable for head and pelvic tumors, where the anatomy consists largely of rigid, interconnected and immobile bone tissue.
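
    The matched (paired) T test used above can be reproduced with SciPy; the arrays below are random placeholders with roughly the reported moments, purely to show the call, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder per-registration setup errors (cm) along one axis; the real study
# compared 405 CBCT registrations performed with gray-value vs. bone registration.
gray_value = rng.normal(0.35, 0.31, size=405)
bone = rng.normal(0.37, 0.32, size=405)

t_stat, p_value = stats.ttest_rel(gray_value, bone)   # matched/paired T test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```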

  20. Deformable and rigid registration of MRI and microPET images for photodynamic therapy of cancer in mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei Baowei; Wang Hesheng; Muzic, Raymond F. Jr.

    2006-03-15

    We are investigating imaging techniques to study the tumor response to photodynamic therapy (PDT). Positron emission tomography (PET) can provide physiological and functional information. High-resolution magnetic resonance imaging (MRI) can provide anatomical and morphological changes. Image registration can combine MRI and PET images for improved tumor monitoring. In this study, we acquired high-resolution MRI and microPET ¹⁸F-fluorodeoxyglucose (FDG) images from C3H mice with RIF-1 tumors that were treated with Pc 4-based PDT. We developed two registration methods for this application. For registration of the whole mouse body, we used an automatic three-dimensional, normalized mutual information algorithm. For tumor registration, we developed a finite element model (FEM)-based deformable registration scheme. To assess the quality of whole body registration, we performed slice-by-slice review of both image volumes; manually segmented feature organs, such as the left and right kidneys and the bladder, in each slice; and computed the distance between corresponding centroids. Over 40 volume registration experiments were performed with MRI and microPET images. The distance between corresponding centroids of organs was 1.5 ± 0.4 mm, which is about 2 pixels of the microPET images. The mean volume overlap ratios for tumors were 94.7% and 86.3% for the deformable and rigid registration methods, respectively. Registration of high-resolution MRI and microPET images combines anatomical and functional information of the tumors and provides a useful tool for evaluating photodynamic therapy.

  1. Automatic motion correction for in vivo human skin optical coherence tomography angiography through combined rigid and nonrigid registration

    NASA Astrophysics Data System (ADS)

    Wei, David Wei; Deegan, Anthony J.; Wang, Ruikang K.

    2017-06-01

    When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.

  2. Automatic motion correction for in vivo human skin optical coherence tomography angiography through combined rigid and nonrigid registration.

    PubMed

    Wei, David Wei; Deegan, Anthony J; Wang, Ruikang K

    2017-06-01

    When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.
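
    A common way to obtain the rigid sub-pixel shift used in the intermediate registration step is Fourier-based phase cross-correlation; the sketch below uses scikit-image and SciPy and is a generic illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def subpixel_align(reference: np.ndarray, moving: np.ndarray, upsample: int = 20):
    """Estimate a sub-pixel translation between two en face frames and apply it."""
    offset, _, _ = phase_cross_correlation(reference, moving, upsample_factor=upsample)
    aligned = nd_shift(moving, shift=offset, order=3)
    return aligned, offset
```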

  3. Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis

    PubMed Central

    Wang, Shijun; Yao, Jianhua; Liu, Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.

    2009-01-01

    Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice—once supine and once prone—to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in polyp location between supine and prone scans by 67.6%, from 46.27 ± 52.97 mm to 14.98 ± 11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline. PMID:20095272

  4. Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Shijun; Yao Jianhua; Liu Jiamin

    Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice--once supine and once prone--to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in polyp location between supine and prone scans by 67.6%, from 46.27 ± 52.97 mm to 14.98 ± 11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline.

  5. A CNN Regression Approach for Real-Time 2D/3D Registration.

    PubMed

    Shun Miao; Wang, Z Jane; Rui Liao

    2016-05-01

    In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
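
    As a toy illustration of CNN-based pose regression (not the paper's architecture, which uses pose-indexed features, zonal regressors and weight sharing), the PyTorch sketch below maps a DRR/X-ray pair to six rigid-body parameters.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Regress 3 rotations + 3 translations from a 2-channel (DRR, X-ray) input."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 16, 128), nn.ReLU(), nn.Linear(128, 6)
        )

    def forward(self, drr: torch.Tensor, xray: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(torch.cat([drr, xray], dim=1)))

# Example: a batch of one 128x128 image pair -> a (1, 6) parameter estimate.
params = PoseRegressor()(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
```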

  6. NOTE: Wobbled splatting—a fast perspective volume rendering method for simulation of x-ray images from CT

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-05-01

    3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs—which are perspective summed voxel renderings—is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at framerates of approximately 10 Hz when rendering volume images with a size of 30 MB.
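
    A minimal splat-style DRR with stochastic voxel "wobble" can be sketched in NumPy; the simple source-on-axis geometry, nearest-pixel splatting and Gaussian jitter below are assumptions made for brevity and do not reproduce the note's exact renderer or its speed.

```python
import numpy as np

def wobbled_splat_drr(volume, voxel_size, sod, sid, det_shape, det_pixel,
                      jitter_mm=0.5, rng=None):
    """Perspective summed-voxel rendering: point source at the origin, volume
    centred on the z axis at distance `sod`, flat detector at distance `sid`."""
    if rng is None:
        rng = np.random.default_rng(0)
    nz, ny, nx = volume.shape
    zs, ys, xs = np.meshgrid((np.arange(nz) - nz / 2) * voxel_size + sod,
                             (np.arange(ny) - ny / 2) * voxel_size,
                             (np.arange(nx) - nx / 2) * voxel_size, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    pts += rng.normal(scale=jitter_mm, size=pts.shape)       # stochastic wobble (anti-aliasing)
    vals = volume.reshape(-1)
    mag = sid / pts[:, 2]                                     # perspective magnification
    u = np.round(pts[:, 0] * mag / det_pixel + det_shape[1] / 2).astype(int)
    v = np.round(pts[:, 1] * mag / det_pixel + det_shape[0] / 2).astype(int)
    keep = (u >= 0) & (u < det_shape[1]) & (v >= 0) & (v < det_shape[0])
    drr = np.zeros(det_shape)
    np.add.at(drr, (v[keep], u[keep]), vals[keep])            # splat (summed voxel values)
    return drr
```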

  7. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method gave the best automatic contour. Results suggest performing an image quality determination procedure before segmentation and combining different methods for optimal segmentation with the on-board MR-IGRT system.

  8. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images.

    PubMed

    Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Lindner, Dirk; Arlt, Felix; Ituna-Yudonago, Jean Fulbert; Chalopin, Claire

    2018-03-01

    Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor operations. Tumor segmentation in iUS images is a difficult task and still under improvement because of the low signal-to-noise ratio. The success of automatic methods is also limited due to the high noise sensitivity. Therefore, an alternative brain tumor segmentation method in 3D-iUS data using a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration is presented in this paper. The aim is to enhance the visualization of the brain tumor contours in iUS. A multistep approach is proposed. First, a region of interest (ROI) based on the specific patient tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities by using automatic thresholding techniques. Third, the registration is performed over the extracted binary sub-volumes using a similarity measure based on gradient values, and rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data, and its contours are represented. Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local and binary registration was compared with two grayscale-based approaches. The outcomes showed that our approach reached better results in terms of computational time and accuracy than the comparative methods. The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could be useful for brain tumor resection, supporting neurosurgeons in improving tumor border visualization in iUS volumes.

  9. Strategies for registering range images from unknown camera positions

    NASA Astrophysics Data System (ADS)

    Bernardini, Fausto; Rushmeier, Holly E.

    2000-03-01

    We describe a project to construct a 3D numerical model of Michelangelo's Florentine Pieta to be used in a study of the sculpture. Here we focus on the registration of the range images used to construct the model. The major challenge was the range of length scales involved. A resolution of 1 mm or less was required for the 2.25 m tall piece. To achieve this resolution, we could only acquire an area of 20 by 20 cm per scan. A total of approximately 700 images were required. Ideally, a tracker would be attached to the scanner to record position and pose. The use of a tracker was not possible in the field. Instead, we used a crude-to-fine approach to registering the meshes to one another. The crudest level consisted of pairwise manual registration, aided by texture maps containing laser dots that were projected onto the sculpture. This crude alignment was refined by an automatic registration of laser dot centers. In this phase, we found that consistency constraints on dot matches were essential to obtaining accurate results. The laser dot alignment was further refined using a variation of the ICP algorithm developed by Besl and McKay. In the application of ICP to global registration, we developed a method to avoid one class of local minima by finding a set of points, rather than the single point, that matches each candidate point.

  10. Linking quality indicators to clinical trials: an automated approach

    PubMed Central

    Coiera, Enrico; Choong, Miew Keen; Tsafnat, Guy; Hibbert, Peter; Runciman, William B.

    2017-01-01

    Abstract Objective Quality improvement of health care requires robust measurable indicators to track performance. However identifying which indicators are supported by strong clinical evidence, typically from clinical trials, is often laborious. This study tests a novel method for automatically linking indicators to clinical trial registrations. Design A set of 522 quality of care indicators for 22 common conditions drawn from the CareTrack study were automatically mapped to outcome measures reported in 13 971 trials from ClinicalTrials.gov. Intervention Text mining methods extracted phrases mentioning indicators and outcome phrases, and these were compared using the Levenshtein edit distance ratio to measure similarity. Main Outcome Measure Number of care indicators that mapped to outcome measures in clinical trials. Results While only 13% of the 522 CareTrack indicators were thought to have Level I or II evidence behind them, 353 (68%) could be directly linked to randomized controlled trials. Within these 522, 50 of 70 (71%) Level I and II evidence-based indicators, and 268 of 370 (72%) Level V (consensus-based) indicators could be linked to evidence. Of the indicators known to have evidence behind them, only 5.7% (4 of 70) were mentioned in the trial reports but were missed by our method. Conclusions We automatically linked indicators to clinical trial registrations with high precision. Whilst the majority of quality indicators studied could be directly linked to research evidence, a small portion could not and these require closer scrutiny. It is feasible to support the process of indicator development using automated methods to identify research evidence. PMID:28651340
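
    A minimal edit-distance similarity in the spirit of the Levenshtein ratio used above (the exact normalisation in the study may differ) can be written in a few lines of pure Python.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two phrases."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def edit_distance_ratio(a: str, b: str) -> float:
    """Similarity in [0, 1]; 1.0 means the indicator and outcome phrases match exactly."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```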

  11. Fast and Robust Registration of Multimodal Remote Sensing Images via Dense Orientated Gradient Feature

    NASA Astrophysics Data System (ADS)

    Ye, Y.

    2017-09-01

    This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. In the proposed method, we first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed effectively at every pixel and is robust to non-linear intensity differences between images. Then a fast similarity metric based on DOGH is built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric achieves better matching performance and computational efficiency than state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
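
    The sketch below illustrates only the FFT part of the idea: template matching by cross-correlation computed in the frequency domain with NumPy, on synthetic single-band images. The actual DOGH descriptor and its similarity metric are more elaborate than the plain intensity correlation shown here.

        import numpy as np

        def fft_cross_correlation(image, template):
            """Correlate a template against an image using the convolution theorem."""
            ih, iw = image.shape
            img = image - image.mean()
            tmpl = template - template.mean()
            F_img = np.fft.rfft2(img)
            F_tpl = np.fft.rfft2(tmpl, s=(ih, iw))      # zero-pad template to image size
            return np.fft.irfft2(F_img * np.conj(F_tpl), s=(ih, iw))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            image = rng.normal(size=(256, 256))
            template = image[100:132, 60:92].copy()      # known location (row 100, col 60)
            corr = fft_cross_correlation(image, template)
            row, col = np.unravel_index(np.argmax(corr), corr.shape)
            print(f"best match at (row={row}, col={col})")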

  12. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials.

    PubMed

    Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques

    2017-07-01

    Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed onto real-time images, enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in the in vivo kidney and well visualized in near-infrared mode, enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surface to their inner structures, including tumors, with good accuracy and automated robust tracking.

  13. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    PubMed

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementation by way of user-initiated and continuous motion compensation methods on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding registration errors of 0.4 ± 0.3 mm, 0.2 ± 0.4 mm, and 0.8 ± 0.5°. The continuous method performed registration significantly faster (P < 0.05) than the user-initiated method, with observed computation times of 35 ± 8 ms, 43 ± 16 ms, and 27 ± 5 ms for in-plane, out-of-plane, and roll motions, respectively, and corresponding registration errors of 0.2 ± 0.3 mm, 0.7 ± 0.4 mm, and 0.8 ± 1.0°. The presented method encourages real-time implementation of motion compensation algorithms in prostate biopsy with clinically acceptable registration errors. Continuous motion compensation demonstrated registration accuracy with submillimeter and subdegree error, while keeping computation times below 50 ms. An image registration technique that approaches the frame rate of an ultrasound system offers a key advantage for smooth integration into the clinical workflow. In addition, this technique could be used further for a variety of image-guided interventional procedures to treat and diagnose patients by improving targeting accuracy. © 2017 American Association of Physicists in Medicine.
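
    A stripped-down version of the optimization loop described above is sketched below in Python: a translation-only 2D registration driven by normalized cross-correlation and Powell's method from SciPy. The synthetic images, the translation-only motion model, and the parameter choices are assumptions for illustration.

        import numpy as np
        from scipy import ndimage
        from scipy.optimize import minimize

        def ncc(a, b):
            """Normalized cross-correlation between two equally sized arrays."""
            a, b = a - a.mean(), b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def cost(shift, fixed, moving):
            moved = ndimage.shift(moving, shift, order=1, mode="nearest")
            return -ncc(fixed, moved)                    # Powell minimizes, so negate NCC

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            fixed = ndimage.gaussian_filter(rng.normal(size=(128, 128)), 3)
            moving = ndimage.shift(fixed, (2.5, -1.75), order=1, mode="nearest")
            result = minimize(cost, x0=[0.0, 0.0], args=(fixed, moving), method="Powell")
            # The recovered correction should undo the applied (2.5, -1.75) shift.
            print("recovered correction:", np.round(result.x, 2))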

  14. TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; McNutt, T

    2015-06-15

    Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules including DIR and dose computation have been implemented on a graphics processing unit (GPU). All required patient-specific data including the planning CT (pCT) with contours, daily cone-beam CTs, and treatment plan are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of daily CBCT is then obtained by propagating contours from the pCT. Daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance with conventional methods. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures; our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction took around one minute (30–50 seconds for registration and 15–25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
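
    For readers unfamiliar with the similarity measure, the sketch below computes a normalized mutual information value from a joint intensity histogram in Python, using the common definition NMI = (H(A) + H(B)) / H(A, B); the bin count and the exact definition may differ from the study's implementation.

        import numpy as np

        def normalized_mutual_information(a, b, bins=64):
            """NMI from the joint histogram of two images with the same number of voxels."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)

            def entropy(p):
                p = p[p > 0]
                return -(p * np.log(p)).sum()

            return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            img = rng.normal(size=(64, 64, 64))
            same = normalized_mutual_information(img, img)                        # ~2.0
            scrambled = normalized_mutual_information(img, rng.permutation(img.ravel()))
            print(f"NMI(img, img) = {same:.3f}, NMI(img, shuffled) = {scrambled:.3f}")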

  15. A method of 2D/3D registration of a statistical mouse atlas with a planar X-ray projection and an optical photo.

    PubMed

    Wang, Hongkai; Stout, David B; Chatziioannou, Arion F

    2013-05-01

    The development of sophisticated and high throughput whole body small animal imaging technologies has created a need for improved image analysis and increased automation. The registration of a digital mouse atlas to individual images is a prerequisite for automated organ segmentation and uptake quantification. This paper presents a fully-automatic method for registering a statistical mouse atlas with individual subjects based on an anterior-posterior X-ray projection and a lateral optical photo of the mouse silhouette. The mouse atlas was trained as a statistical shape model based on 83 organ-segmented micro-CT images. For registration, a hierarchical approach is applied which first registers high contrast organs, and then estimates low contrast organs based on the registered high contrast organs. To register the high contrast organs, a 2D-registration-back-projection strategy is used that deforms the 3D atlas based on the 2D registrations of the atlas projections. For validation, this method was evaluated using 55 subjects of preclinical mouse studies. The results showed that this method can compensate for moderate variations of animal postures and organ anatomy. Two different metrics, the Dice coefficient and the average surface distance, were used to assess the registration accuracy of major organs. The Dice coefficients vary from 0.31 ± 0.16 for the spleen to 0.88 ± 0.03 for the whole body, and the average surface distance varies from 0.54 ± 0.06 mm for the lungs to 0.85 ± 0.10 mm for the skin. The method was compared with a direct 3D deformation optimization (without 2D-registration-back-projection) and a single-subject atlas registration (instead of using the statistical atlas). The comparison revealed that the 2D-registration-back-projection strategy significantly improved the registration accuracy, and the use of the statistical mouse atlas led to more plausible organ shapes than the single-subject atlas. This method was also tested with shoulder xenograft tumor-bearing mice, and the results showed that the registration accuracy of most organs was not significantly affected by the presence of shoulder tumors, except for the lungs and the spleen. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Extracting a Purely Non-rigid Deformation Field of a Single Structure

    NASA Astrophysics Data System (ADS)

    Demirci, Stefanie; Manstad-Hulaas, Frode; Navab, Nassir

    During endovascular aortic repair (EVAR) treatment, the aortic shape is subject to severe deformation that is imposed by medical instruments such as guide wires, catheters, and the stent graft. The problem of deformable registration of images covering the entire abdominal region, however, is highly ill-posed. We present a new method for extracting the deformation of an aneurysmatic aorta. The outline of the procedure includes initial rigid alignment of two abdominal scans, segmentation of abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. Our non-rigid registration procedure then only computes local non-rigid deformation and leaves out all remaining global rigid transformations. In order to evaluate our method, experiments for the extraction of aortic deformation fields were conducted on 15 patient datasets from EVAR treatment. A visual assessment of the registration results was performed by two vascular surgeons and one interventional radiologist who are all experts in EVAR procedures.

  17. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm makes a time-consuming manual adjustment necessary. In this article, we present an automatic plane adjustment method, using calcaneal fractures as an example. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the standard setting of the device). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and against the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we have developed the first steps towards a fully automatic assistance system for the assessment of C-arm CT images.

  18. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, which has the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121

  19. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, which has the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
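
    The core idea can be illustrated in a few lines: compute the lengths of all baselines between feature points within each epoch and compare them pairwise, with no registration involved. The Python sketch below uses invented point coordinates and a simulated displacement of one virtual point.

        import numpy as np
        from itertools import combinations

        def baseline_lengths(points):
            """Lengths of all baselines (point-to-point distances) within one scan."""
            return {(i, j): np.linalg.norm(points[i] - points[j])
                    for i, j in combinations(range(len(points)), 2)}

        epoch1 = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0],
                           [4.0, 3.0, 0.0], [0.0, 3.0, 2.5]])     # metres, invented
        epoch2 = epoch1.copy()
        epoch2[3] += np.array([0.02, -0.01, -0.04])               # simulated damage

        b1, b2 = baseline_lengths(epoch1), baseline_lengths(epoch2)
        for pair in b1:
            print(f"baseline {pair}: change = {(b2[pair] - b1[pair]) * 1000:+.1f} mm")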

  20. OPAD data analysis

    NASA Astrophysics Data System (ADS)

    Buntine, Wray L.; Kraft, Richard; Whitaker, Kevin; Cooper, Anita E.; Powers, W. T.; Wallace, Tim L.

    1993-06-01

    Data obtained in the framework of an Optical Plume Anomaly Detection (OPAD) program intended to create a rocket engine health monitor based on spectrometric detections of anomalous atomic and molecular species in the exhaust plume are analyzed. The major results include techniques for handling data noise, methods for registration of spectra to wavelength, and a simple automatic process for estimating the metallic component of a spectrum.

  1. Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery

    NASA Astrophysics Data System (ADS)

    Chung, Soyoung; Kim, Joojin; Hong, Helen

    2016-03-01

    During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images and 3D photographic images is difficult because regions around the eyes and mouth are affected by facial expressions and because registration is slow due to the dense point clouds on the surfaces. Therefore, we propose a framework for the fusion of facial CBCT images and 3D photos with skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for the registration with the 3D photographic surface, skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and the orientation of the CBCT skin surface and the 3D photographic surface, point-based registration with four corresponding landmarks located around the mouth is performed. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weight-based surface registration is performed within a narrow band of the 3D photographic surface.
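
    The second step, a coarse alignment of scale and orientation from a handful of corresponding landmarks, can be sketched as an Umeyama-style similarity transform estimate in Python; the four landmark coordinates and the simulated transform below are invented for illustration.

        import numpy as np

        def similarity_transform(src, dst):
            """Estimate scale s, rotation R, translation t with dst ~ s * R @ src + t."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            src0, dst0 = src - src_c, dst - dst_c
            U, S, Vt = np.linalg.svd(src0.T @ dst0)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
            R = Vt.T @ D @ U.T
            s = np.trace(np.diag(S) @ D) / (src0 ** 2).sum()
            return s, R, dst_c - s * R @ src_c

        if __name__ == "__main__":
            cbct_landmarks = np.array([[10.0, 42.0, 5.0], [18.0, 40.0, 6.0],
                                       [12.0, 35.0, 4.0], [16.0, 36.0, 3.5]])
            theta = np.deg2rad(10.0)
            R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                               [np.sin(theta),  np.cos(theta), 0.0],
                               [0.0, 0.0, 1.0]])
            photo_landmarks = 1.2 * cbct_landmarks @ R_true.T + np.array([3.0, -2.0, 1.0])
            s, R, t = similarity_transform(cbct_landmarks, photo_landmarks)
            print("estimated scale:", round(s, 3))       # should be close to 1.2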

  2. Modified dixon‐based renal dynamic contrast‐enhanced MRI facilitates automated registration and perfusion analysis

    PubMed Central

    Leiner, Tim; Vink, Eva E.; Blankestijn, Peter J.; van den Berg, Cornelis A.T.

    2017-01-01

    Purpose Renal dynamic contrast‐enhanced (DCE) MRI provides information on renal perfusion and filtration. However, clinical implementation is hampered by challenges in postprocessing as a result of misalignment of the kidneys due to respiration. We propose to perform automated image registration using the fat‐only images derived from a modified Dixon reconstruction of a dual‐echo acquisition because these provide consistent contrast over the dynamic series. Methods DCE data of 10 hypertensive patients was used. Dual‐echo images were acquired at 1.5 T with temporal resolution of 3.9 s during contrast agent injection. Dixon fat, water, and in‐phase and opposed‐phase (OP) images were reconstructed. Postprocessing was automated. Registration was performed both to fat images and OP images for comparison. Perfusion and filtration values were extracted from a two‐compartment model fit. Results Automatic registration to fat images performed better than automatic registration to OP images with visible contrast enhancement. Median vertical misalignment of the kidneys was 14 mm prior to registration, compared to 3 mm and 5 mm with registration to fat images and OP images, respectively (P = 0.03). Mean perfusion values and MR‐based glomerular filtration rates (GFR) were 233 ± 64 mL/100 mL/min and 60 ± 36 mL/minute, respectively, based on fat‐registered images. MR‐based GFR correlated with creatinine‐based GFR (P = 0.04) for fat‐registered images. For unregistered and OP‐registered images, this correlation was not significant. Conclusion Absence of contrast changes on Dixon fat images improves registration in renal DCE MRI and enables automated postprocessing, resulting in a more accurate estimation of GFR. Magn Reson Med 80:66–76, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29134673

  3. Hyperbolic Harmonic Mapping for Surface Registration

    PubMed Central

    Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng

    2016-01-01

    Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made to compute a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work conquers this problem by changing the Riemannian metric on the target surface to a hyperbolic metric so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards. PMID:27187948

  4. Registration of MRI to Intraoperative Radiographs for Target Localization in Spinal Interventions

    PubMed Central

    De Silva, T; Uneri, A; Ketcha, M D; Reaungamornrat, S; Goerres, J; Jacobson, M W; Vogt, S; Kleinszig, G; Khanna, A J; Wolinsky, J-P; Siewerdsen, J H

    2017-01-01

    Purpose Decision support to assist in target vertebra localization could provide a useful aid to safe and effective spine surgery. Previous solutions have shown 3D-2D registration of preoperative CT to intraoperative radiographs to reliably annotate vertebral labels for assistance during level localization. We present an algorithm (referred to as MR-LevelCheck) to perform 3D-2D registration based on a preoperative MRI to accommodate the increasingly common clinical scenario in which MRI is used instead of CT for preoperative planning. Methods Straightforward adaptation of gradient/intensity-based methods appropriate to CT-to-radiograph registration is confounded by large mismatch and noncorrespondence in image intensity between MRI and radiographs. The proposed method overcomes such challenges with a simple vertebrae segmentation step using vertebra centroids as seed points (automatically defined within existing workflow). Forward projections are computed using the segmented MRI and registered to radiographs via gradient orientation (GO) similarity and the CMA-ES (Covariance-Matrix-Adaptation Evolutionary-Strategy) optimizer. The method was tested in an IRB-approved study involving 10 patients undergoing cervical, thoracic, or lumbar spine surgery following preoperative MRI. Results The method successfully registered each preoperative MRI to intraoperative radiographs and maintained desirable properties of robustness against image content mismatch and large capture range. Robust registration performance was achieved with a projection distance error (PDE) of 4.3 ± 2.6 mm (median ± iqr) and a 0% failure rate. Segmentation accuracy for the continuous max-flow method yielded Dice coefficient = 88.1 ± 5.2, Accuracy = 90.6 ± 5.7, RMSE = 1.8 ± 0.6 mm, and contour affinity ratio (CAR) = 0.82 ± 0.08. Registration performance was found to be robust for segmentation methods exhibiting RMSE < 3 mm and CAR > 0.50. Conclusion The MR-LevelCheck method provides a potentially valuable extension to a previously developed decision support tool for spine surgery target localization by extending its utility to preoperative MRI while maintaining characteristics of accuracy and robustness. PMID:28050972

  5. The interactive electrode localization utility: software for automatic sorting and labeling of intracranial subdural electrodes

    PubMed Central

    Tang, Wei; Peled, Noam; Vallejo, Deborah I.; Borzello, Mia; Dougherty, Darin D.; Eskandar, Emad N.; Widge, Alik S.; Cash, Sydney S.; Stufflebeam, Steven M.

    2018-01-01

    Purpose Existing methods for sorting, labeling, registering, and across-subject localization of electrodes in intracranial electroencephalography (iEEG) may involve laborious work requiring manual inspection of radiological images. Methods We describe a new open-source software package, the interactive electrode localization utility, which presents a full pipeline for the registration, localization, and labeling of iEEG electrodes from CT and MR images. In addition, we describe a method to automatically sort and label electrodes from subdural grids of known geometry. Results We validated our software against manual inspection methods in twelve subjects undergoing iEEG for medically intractable epilepsy. Our algorithm for sorting and labeling performed correct identification on 96% of the electrodes. Conclusions The sorting and labeling methods we describe offer nearly perfect performance, and the software package we have distributed may simplify the process of registering, sorting, labeling, and localizing subdural iEEG grid electrodes that would otherwise require manual inspection. PMID:27915398

  6. Effect of registration on corpus callosum population differences found with DBM analysis

    NASA Astrophysics Data System (ADS)

    Han, Zhaoying; Thornton-Wells, Tricia A.; Gore, John C.; Dawant, Benoit M.

    2011-03-01

    Deformation Based Morphometry (DBM) is a relatively new method used for characterizing anatomical differences among populations. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to one standard coordinate system. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithm on population differences that may be uncovered through DBM. In this study, we compared DBM results obtained with five well established non-rigid registration algorithms on the corpus callosum (CC) in thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Basis Algorithm (ABA); (2) Image Registration Toolkit (IRTK); (3) FSL Nonlinear Image Registration Tool (FSL); (4) Automatic Registration Tools (ART); and (5) the normalization algorithm available in SPM8. For each algorithm, the 3D deformation fields from all subjects to the atlas were obtained and used to calculate the Jacobian determinant (JAC) at each voxel in the mid-sagittal slice of the CC. The mean JAC maps for each group were compared quantitatively across the different non-rigid registration algorithms. An ANOVA test performed on the means of the JAC over the Genu and the Splenium ROIs shows that the JAC differences between non-rigid registration algorithms are statistically significant over the Genu for both groups and over the Splenium for the NC group. These results suggest that it is important to consider the effect of registration when using DBM to compute morphological differences in populations.
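
    As a small illustration of the quantity being compared, the sketch below computes the Jacobian determinant (JAC) of a dense 3D displacement field with NumPy; the synthetic field applies a uniform 5% expansion along one axis, so the expected JAC is about 1.05 everywhere. This is only the measurement step, not any of the five registration algorithms.

        import numpy as np

        def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
            """JAC of the mapping x -> x + u(x); disp has shape (3, Z, Y, X)."""
            grads = np.empty((3, 3) + disp.shape[1:])
            for i in range(3):                                       # displacement component u_i
                grads[i] = np.stack(np.gradient(disp[i], *spacing))  # d u_i / d x_j
            jac = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)         # identity + grad u
            return np.linalg.det(jac)

        if __name__ == "__main__":
            shape = (32, 32, 32)
            zz, yy, xx = np.meshgrid(*[np.arange(s, dtype=float) for s in shape],
                                     indexing="ij")
            disp = np.stack([np.zeros(shape), np.zeros(shape), 0.05 * xx])
            print("mean JAC:", round(float(jacobian_determinant(disp).mean()), 3))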

  7. Automatic correspondence detection in mammogram and breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Krüger, Julia; Bischof, Arpad; Barkhausen, Jörg; Handels, Heinz

    2012-02-01

    Two-dimensional mammography is the major imaging modality in breast cancer detection. A disadvantage of mammography is the projective nature of this imaging technique. Tomosynthesis is an attractive modality with the potential to combine the high contrast and high resolution of digital mammography with the advantages of 3D imaging. In order to facilitate diagnostics and treatment in the current clinical work-flow, correspondences between tomosynthesis images and previous mammographic exams of the same women have to be determined. In this paper, we propose a method to detect correspondences in 2D mammograms and 3D tomosynthesis images automatically. In general, this 2D/3D correspondence problem is ill-posed, because a point in the 2D mammogram corresponds to a line in the 3D tomosynthesis image. The goal of our method is to detect the "most probable" 3D position in the tomosynthesis images corresponding to a selected point in the 2D mammogram. We present two alternative approaches to solve this 2D/3D correspondence problem: a 2D/3D registration method and a 2D/2D mapping between mammogram and tomosynthesis projection images with a following back projection. The advantages and limitations of both approaches are discussed and the performance of the methods is evaluated qualitatively and quantitatively using a software phantom and clinical breast image data. Although the proposed 2D/3D registration method can compensate for moderate breast deformations caused by different breast compressions, this approach is not suitable for clinical tomosynthesis data due to the limited resolution and blurring effects perpendicular to the direction of projection. The quantitative results show that the proposed 2D/2D mapping method is capable of detecting corresponding positions in mammograms and tomosynthesis images automatically for 61 out of 65 landmarks. The proposed method can facilitate diagnosis, visual inspection and comparison of 2D mammograms and 3D tomosynthesis images for the physician.

  8. a Target Aware Texture Mapping for Sculpture Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zhang, F.; Huang, X.; Li, D.; Zhu, Y.

    2017-08-01

    In this paper, we propose a target-aware image-to-model registration method that uses silhouettes as matching clues. The target sculpture object in a natural environment can be automatically detected from an image with a complex background with the assistance of 3D geometric data. Then the silhouette can be automatically extracted and applied in image-to-model matching. Because the user does not need to deliberately draw the target area, the time required for precise image-to-model matching is greatly reduced. To enhance the function of this method, we also improved the silhouette matching algorithm to support conditional silhouette matching. Two experiments, using a stone lion sculpture of the Ming Dynasty and a portable relic in a museum, are given to evaluate the proposed method. The method proposed in this paper has been extended and developed into mature software applied in many cultural heritage documentation projects.

  9. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg. © 2016 The Authors.
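
    The two evaluation measures used above are simple to compute; the sketch below shows a Dice coefficient and a centroid-based target registration error for binary 2D masks in Python, with synthetic masks and an assumed pixel spacing.

        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        def centroid_tre(a, b, spacing=(1.0, 1.0)):
            """Distance between mask centroids, converted to physical units."""
            ca = np.array(np.nonzero(a)).mean(axis=1) * np.array(spacing)
            cb = np.array(np.nonzero(b)).mean(axis=1) * np.array(spacing)
            return np.linalg.norm(ca - cb)

        if __name__ == "__main__":
            yy, xx = np.mgrid[0:128, 0:128]
            manual = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2     # reference ROI
            auto = (yy - 66) ** 2 + (xx - 63) ** 2 < 19 ** 2       # slightly offset ROI
            print(f"Dice = {dice(manual, auto):.3f}, "
                  f"TRE = {centroid_tre(manual, auto, spacing=(0.35, 0.35)):.2f} mm")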

  10. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    PubMed Central

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on‐board MR‐IGRT system. PACS number(s): 87.57.nm, 87.57.N‐, 87.61.Tg

  11. Preliminary experience with a novel method of three-dimensional co-registration of prostate cancer digital histology and in vivo multiparametric MRI.

    PubMed

    Orczyk, C; Rusinek, H; Rosenkrantz, A B; Mikheev, A; Deng, F-M; Melamed, J; Taneja, S S

    2013-12-01

    To assess a novel method of three-dimensional (3D) co-registration of prostate cancer digital histology and in-vivo multiparametric magnetic resonance imaging (mpMRI) image sets for clinical usefulness. A software platform was developed to achieve 3D co-registration. This software was prospectively applied to three patients who underwent radical prostatectomy. Data comprised in-vivo mpMRI [T2-weighted, dynamic contrast-enhanced weighted images (DCE); apparent diffusion coefficient (ADC)], ex-vivo T2-weighted imaging, a 3D-rebuilt pathological specimen, and digital histology. Internal landmarks from zonal anatomy served as reference points for assessing co-registration accuracy and precision. Applying a method of deformable transformation based on 22 internal landmarks, a 1.6 mm accuracy was reached in aligning T2-weighted images and the 3D-rebuilt pathological specimen, an improvement of 32% over rigid transformation (p = 0.003). The 22 zonal anatomy landmarks were more accurately mapped using deformable transformation than rigid transformation (p = 0.0008). An automatic method based on mutual information enabled automation of the process and the inclusion of perfusion and diffusion MRI images. Evaluation of co-registration accuracy using the volume overlap index (Dice index) met clinically relevant requirements, ranging from 0.81 to 0.96 for the sequences tested. Ex-vivo images of the specimen did not significantly improve co-registration accuracy. This preliminary analysis suggests that deformable transformation based on zonal anatomy landmarks is accurate in the co-registration of mpMRI and histology. Including diffusion and perfusion sequences in the same 3D space as histology provides essential further clinical information. The ability to localize cancer in 3D space may improve targeting for image-guided biopsy, focal therapy, and disease quantification in surveillance protocols. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  12. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent if reviewing fine anatomical structures such as ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered as acceptably aligned.

  13. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found to have advantages over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed for correlating selected image features to response. These models showed improved performance compared to current methods that use the cutoff value of a single measurement for tumor response. Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy in evaluation of tumor response. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  14. Automatic right ventricle (RV) segmentation by propagating a basal spatio-temporal characterization

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, María. A.; Martínez, Fabio; Romero, Eduardo

    2015-12-01

    An accurate right ventricular (RV) function quantification is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies and to complement the left ventricular function assessment. However, expert RV delineation is a time-consuming task with high inter- and intra-observer variability. In this paper we present an automatic segmentation method for the RV in cardiac MR sequences. Unlike atlas or multi-atlas methods, this approach estimates the RV using exclusively information from the sequence itself. To do so, a spatio-temporal analysis segments the heart at the basal slice; this segmentation is then propagated to the apex using a non-rigid registration strategy. The proposed approach achieves an average Dice score of 0.79 when evaluated on a set of 48 patients.

  15. Automatic elastic image registration by interpolation of 3D rotations and translations from discrete rigid-body transformations.

    PubMed

    Walimbe, Vivek; Shekhar, Raj

    2006-12-01

    We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
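
    The interpolation idea in the last step can be sketched with SciPy: spherical linear interpolation (slerp) of the rotational pose combined with linear interpolation of the translations. The two example subvolume poses below are invented, and SciPy's Rotation/Slerp classes stand in for the authors' quaternion interpolation.

        import numpy as np
        from scipy.spatial.transform import Rotation, Slerp

        # Rigid-body poses (rotation + translation) of two neighbouring subvolumes.
        rotations = Rotation.from_euler("xyz", [[2.0, -1.0, 0.5],
                                                [6.0, 3.0, -2.0]], degrees=True)
        translations = np.array([[1.0, 0.0, -0.5],
                                 [2.5, 1.0, 0.5]])

        times = np.linspace(0.0, 1.0, 5)                 # positions between the centres
        interp_rot = Slerp([0.0, 1.0], rotations)(times)
        interp_trans = ((1 - times)[:, None] * translations[0]
                        + times[:, None] * translations[1])

        for i, t in enumerate(times):
            euler = interp_rot[i].as_euler("xyz", degrees=True)
            print(f"t={t:.2f}  euler(deg)={np.round(euler, 2)}  "
                  f"translation={np.round(interp_trans[i], 2)}")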

  16. Automatic segmentation of male pelvic anatomy on computed tomography images: a comparison with multiple observers in the context of a multicentre clinical trial.

    PubMed

    Geraghty, John P; Grogan, Garry; Ebert, Martin A

    2013-04-30

    This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients ('benchmarking cases'), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 "RADAR" trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of rectum, human-observer contours were typically within a mean 0.6 cm of automatically-defined contours. Prostate structures were more consistent for the HR case than the IR case with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. For automatic segmentation algorithms based on image-registration as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset adds a significant systematic bias to the resulting volumes and their use in the context of a multicentre trial should be carefully considered.

  17. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early stage breast cancers which are occult in corresponding mammograms. In this paper, we proposed a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark to report the position of possible abnormalities in a breast or to guide image registration. To detect the nipple location, all images were normalized. Subsequently, features have been extracted in a multi scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied on a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (Anterior-Posterior) views and in 79% of the other views.

  18. Validation of automatic landmark identification for atlas-based segmentation for radiation treatment planning of the head-and-neck region

    NASA Astrophysics Data System (ADS)

    Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir

    2008-03-01

    Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. Applying a run time of 60 seconds to the random search, a root mean square (rms) distance to the ground-truth landmark position of 9.5 ± 0.6 mm was calculated for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.
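
    The thin-plate spline step can be illustrated with SciPy's RBFInterpolator (SciPy 1.7 or later), which supports a thin-plate-spline kernel and vector-valued targets; the 27 atlas landmarks and the smooth warp used to generate corresponding subject landmarks below are synthetic.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(4)
        atlas_landmarks = rng.uniform(0, 100, size=(27, 3))          # 27 atlas landmarks
        # Hypothetical corresponding landmarks in a training dataset: a smooth warp.
        subject_landmarks = atlas_landmarks + 5.0 * np.sin(atlas_landmarks / 30.0)

        # Vector-valued targets are supported, so one interpolator maps all coordinates.
        tps = RBFInterpolator(atlas_landmarks, subject_landmarks,
                              kernel="thin_plate_spline")

        # Warp arbitrary atlas-space points into the subject space.
        query = rng.uniform(0, 100, size=(5, 3))
        print(np.round(tps(query), 2))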

  19. 3D Registration of mpMRI for Assessment of Prostate Cancer Focal Therapy.

    PubMed

    Orczyk, Clément; Rosenkrantz, Andrew B; Mikheev, Artem; Villers, Arnauld; Bernaudin, Myriam; Taneja, Samir S; Valable, Samuel; Rusinek, Henry

    2017-12-01

    This study aimed to assess a novel method of three-dimensional (3D) co-registration of prostate magnetic resonance imaging (MRI) examinations performed before and after prostate cancer focal therapy. We developed a software platform for automatic 3D deformable co-registration of prostate MRI at different time points and applied this method to 10 patients who underwent focal ablative therapy. MRI examinations were performed preoperatively, as well as 1 week and 6 months post treatment. Rigid registration served as reference for assessing co-registration accuracy and precision. Segmentation of preoperative and postoperative prostate revealed a significant postoperative volume decrease of the gland that averaged 6.49 cc (P = .017). Applying deformable transformation based on mutual information from 120 pairs of MRI slices, we refined by 2.9 mm (max. 6.25 mm) the alignment of the ablation zone, segmented from contrast-enhanced images on the 1-week postoperative examination, to the 6-month postoperative T2-weighted images. This represented a 500% improvement over the rigid approach (P = .001), corrected by volume. The dissimilarity by Dice index of the mapped ablation zone using deformable transformation vs rigid control was significantly (P = .04) higher at the ablation site than in the whole gland. Our findings illustrate our method's ability to correct for deformation at the ablation site. The preliminary analysis suggests that deformable transformation computed from mutual information of preoperative and follow-up MRI is accurate in co-registration of MRI examinations performed before and after focal therapy. The ability to localize the previously ablated tissue in 3D space may improve targeting for image-guided follow-up biopsy within focal therapy protocols. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  20. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between the registered contour and the ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI being the best. Then given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
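
    Step (d) can be sketched as MI-weighted voting over the propagated atlas contours; in the Python example below the three label masks and their MI scores are synthetic stand-ins for the deformed atlas contours and similarity values.

        import numpy as np

        def weighted_label_fusion(label_masks, weights, threshold=0.5):
            """Fuse binary masks by weighted voting; weights are e.g. MI scores."""
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            consensus = sum(wi * m.astype(float) for wi, m in zip(w, label_masks))
            return consensus >= threshold

        if __name__ == "__main__":
            base = np.zeros((64, 64), dtype=bool)
            base[20:44, 22:42] = True                      # "true" structure
            # Three deformed-atlas contours: slightly shifted copies of the structure.
            atlases = [np.roll(base, shift, axis=(0, 1))
                       for shift in [(1, 0), (0, -1), (-1, 1)]]
            mi_scores = [0.52, 0.47, 0.41]                 # hypothetical MI ranking
            fused = weighted_label_fusion(atlases, mi_scores)
            print("fused voxels:", int(fused.sum()), "reference voxels:", int(base.sum()))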

  1. 17 CFR 230.486 - Effective date of post-effective amendments and registration statements filed by certain closed...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-effective amendments and registration statements filed by certain closed-end management investment companies...-end management investment companies. (a) Automatic effectiveness. Except as otherwise provided in this... management investment company or business development company which makes periodic repurchase offers under...

  2. Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.

    PubMed

    Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei

    2017-01-20

    Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic that is of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, which is a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from the smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct the initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation of the minimum entropy from the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and achieves high accuracy and efficiency.
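
    One plausible reading of the 2D distribution entropy criterion is sketched below: project both leveled scans onto the horizontal plane, rasterize the merged points into an occupancy histogram, and score a candidate planar transform (yaw plus translation, seeded by the smartphone GPS distance) by the Shannon entropy of that histogram, where lower entropy indicates more coherent overlap. The bin count and transform parameterization are assumptions, not the paper's settings.

```python
import numpy as np

def distribution_entropy_2d(points_xy, bins=128):
    """Shannon entropy of the 2D occupancy histogram of projected points."""
    h, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1], bins=bins)
    p = h[h > 0] / h.sum()
    return float(-np.sum(p * np.log(p)))

def merged_entropy(scan_a_xy, scan_b_xy, yaw, tx, ty, bins=128):
    """Entropy of the union of scan A and a candidate planar transform
    (yaw, tx, ty) of scan B; for leveled scans, lower entropy suggests the
    two point distributions overlap more coherently."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    b = scan_b_xy @ rot.T + np.array([tx, ty])
    return distribution_entropy_2d(np.vstack([scan_a_xy, b]), bins)
```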

  3. Automatic pose correction for image-guided nonhuman primate brain surgery planning

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Chen, Antong; Hines, Catherine; Dogdas, Belma; Bone, Ashleigh; Lodge, Kenneth; O'Malley, Stacey; Winkelmann, Christopher T.; Bagchi, Ansuman; Lubbers, Laura S.; Uslaner, Jason M.; Johnson, Colena; Renger, John; Zariwala, Hatim A.

    2016-03-01

    Intracranial delivery of recombinant DNA and neurochemical analysis in the nonhuman primate (NHP) require precise targeting of various brain structures via imaging-derived coordinates in stereotactic surgeries. To attain targeting precision, the surgical planning needs to be done on preoperative three-dimensional (3D) CT and/or MR images in which the animal's head is fixed in a pose identical to the pose during the stereotactic surgery. The matching of the image to the pose in the stereotactic frame can be done manually by detecting key anatomical landmarks on the 3D MR and CT images, such as the ear canal and the ear bar zero position. This is not only time-intensive but also prone to error due to the varying initial poses in the images, which affect both landmark detection and rotation estimation. We have introduced a fast, reproducible, and semi-automatic method to detect the stereotactic coordinate system in the image and correct the pose. The method begins with a rigid registration of the subject images to an atlas and proceeds to detect the anatomical landmarks through a sequence of optimization, deformable and multimodal registration algorithms. The results showed similar precision (maximum difference of 1.71 in average in-plane rotation) to a manual pose correction.

  4. Automatic Insall-Salvati ratio measurement on lateral knee x-ray images using model-guided landmark localization

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Lin, Chii-Jeng; Wu, Chia-Hsing; Wang, Chien-Kuo; Sun, Yung-Nien

    2010-11-01

    The Insall-Salvati ratio (ISR) is important for detecting two common clinical signs of knee disease: patella alta and patella baja. Furthermore, large inter-operator differences in ISR measurement make an objective measurement system necessary for better clinical evaluation. In this paper, we define three specific bony landmarks for determining the ISR and then propose an x-ray image analysis system to localize these landmarks and measure the ISR. Due to inherent artifacts in x-ray images, such as unevenly distributed intensities, which make landmark localization difficult, we hence propose a registration-assisted active-shape model (RAASM) to localize these landmarks. We first construct a statistical model from a set of training images based on x-ray image intensity and patella shape. Since a knee x-ray image contains specific anatomical structures, we then design an algorithm, based on edge tracing, for patella feature extraction in order to automatically align the model to the patella image. We can estimate the landmark locations as well as the ISR after registration-assisted model fitting. Our proposed method successfully overcomes drawbacks caused by x-ray image artifacts. Experimental results show great agreement between the ISRs measured by the proposed method and by orthopedic clinicians.
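
    For reference, once the three landmarks are localized, the ratio itself is a one-line computation. The sketch below assumes the common definition of the ISR (patellar tendon length over patellar length) and uses generic landmark names, which may differ from the specific bony landmarks defined in the paper.

```python
import numpy as np

def insall_salvati_ratio(sup_patella_px, inf_patella_px, tibial_tuberosity_px):
    """ISR = patellar tendon length / patellar length, from 2D landmark
    positions (pixel coordinates on the lateral knee x-ray).  Patella alta /
    baja are commonly flagged when the ratio is markedly above / below 1."""
    inf = np.asarray(inf_patella_px, float)
    tendon_len = np.linalg.norm(np.asarray(tibial_tuberosity_px, float) - inf)
    patella_len = np.linalg.norm(np.asarray(sup_patella_px, float) - inf)
    return tendon_len / patella_len
```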

  5. Poster — Thur Eve — 70: Automatic lung bronchial and vessel bifurcations detection algorithm for deformable image registration assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labine, Alexandre; Carrier, Jean-François; Bedwani, Stéphane

    2014-08-15

    Purpose: To investigate an automatic bronchial and vessel bifurcation detection algorithm for deformable image registration (DIR) assessment to improve lung cancer radiation treatment. Methods: 4DCT datasets were acquired and exported to the Varian treatment planning system (TPS) Eclipse for contouring. The lungs TPS contour was used as the prior shape for a segmentation algorithm based on hierarchical surface deformation that identifies the deformed lung volumes of the 10 breathing phases. A Hounsfield unit (HU) threshold filter was applied within the segmented lung volumes to identify blood vessels and airways. Segmented blood vessels and airways were skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: termination node, continuation node or branching node. Results: 320 ± 51 bifurcations were detected in the right lung of a patient for the 10 breathing phases. The bifurcations were visually analyzed. 92 ± 10 bifurcations were found in the upper half of the lung and 228 ± 45 bifurcations were found in the lower half of the lung. Discrepancies between the ten vessel trees were mainly ascribed to large deformations and to regions where the HU varies. Conclusions: We established an automatic method for DIR assessment using the morphological information of the patient anatomy. This approach allows a description of the lung's internal structure movement, which is needed to validate the DIR deformation fields for accurate 4D cancer treatment planning.
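
    The node-labelling step of the skeleton graph can be expressed very compactly from node degree alone; the sketch below (using networkx as a stand-in graph structure, which the authors do not specify) shows the idea.

```python
import networkx as nx

def label_skeleton_nodes(skeleton: nx.Graph) -> dict:
    """Label each skeleton node from its degree: 1 -> termination,
    2 -> continuation, >= 3 -> branching (bifurcation candidate)."""
    labels = {}
    for node in skeleton.nodes:
        degree = skeleton.degree[node]
        labels[node] = ("termination" if degree == 1
                        else "continuation" if degree == 2
                        else "branching")
    return labels

# The bifurcations used for DIR assessment are then simply:
# bifurcations = [n for n, lab in label_skeleton_nodes(g).items() if lab == "branching"]
```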

  6. Automatic segmentation of mandible in panoramic x-ray.

    PubMed

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-10-01

    As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of ramuses are extracted through a contour tracing method based on the average model of mandible. The best-matched template is fetched from the atlas of mandibles to complete the contour of left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.

  7. Atlas-based segmentation of brainstem regions in neuromelanin-sensitive magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Puigvert, Marc; Castellanos, Gabriel; Uranga, Javier; Abad, Ricardo; Fernández-Seara, María. A.; Pastor, Pau; Pastor, María. A.; Muñoz-Barrutia, Arrate; Ortiz de Solórzano, Carlos

    2015-03-01

    We present a method for the automatic delineation of two neuromelanin-rich brainstem structures, the substantia nigra pars compacta (SN) and the locus coeruleus (LC), in neuromelanin-sensitive magnetic resonance images of the brain. The segmentation method uses a dynamic multi-image reference atlas and a pre-registration atlas selection strategy. To create the atlas, a pool of 35 images of healthy subjects was pair-wise pre-registered and clustered into groups using an affinity propagation approach. Each group of the atlas is represented by a single exemplar image. Each new target image to be segmented is registered to the exemplars of each cluster. Then all the images of the highest-performing clusters are enrolled into the final atlas, and the results of the registration with the target image are propagated using a majority voting approach. All registration processes combined a two-stage affine algorithm and an elastic B-spline algorithm, to account for global positioning, region selection and local anatomical differences. In this paper, we present the algorithm, with emphasis on the atlas selection method and the registration scheme. We evaluate the performance of the atlas selection strategy using 35 healthy subjects and 5 Parkinson's disease patients. Then, we quantify the volume and contrast ratio of the neuromelanin signal of these structures in 47 normal subjects and 40 Parkinson's disease patients to confirm that this method can detect the loss of neuromelanin-containing neurons in Parkinson's disease patients and could eventually be used for the early detection of SN and LC damage.
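
    The exemplar-selection step can be reproduced with an off-the-shelf affinity propagation implementation, as in the sketch below. The similarity matrix is assumed to come from the pair-wise pre-registration (e.g., a post-registration image similarity score); this is an illustration of the clustering idea, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def cluster_atlas_pool(similarity_matrix):
    """Cluster a pool of pair-wise pre-registered atlas images from their
    similarity matrix; each resulting cluster is represented by a single
    exemplar image, whose index is returned alongside the cluster labels."""
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(np.asarray(similarity_matrix, float))
    return labels, ap.cluster_centers_indices_           # exemplar atlas indices
```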

  8. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    NASA Astrophysics Data System (ADS)

    Rahman, Nur Aira Abd; Yussup, Nolida; Salim, Nazaratul Ashifa Bt. Abdullah; Ibrahim, Maslina Bt. Mohd; Mokhtar, Mukhlis B.; Soh@Shaari, Syirrazie Bin Che; Azman, Azraf B.; Ismail, Nadiah Binti

    2015-04-01

    Neutron Activation Analysis (NAA) has been established at Nuclear Malaysia since the 1980s. Most of the established procedures, including sample registration, were carried out manually. The samples were recorded manually in a logbook and given ID numbers. Then all samples, standards, SRMs and blanks were recorded on the irradiation vials and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time-consuming and inefficient. Sample registration software is developed as part of the IAEA/CRP project on `Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)'. The objective of the project is to create PC-based data-entry software for the sample preparation stage. This is an effective way to replace the redundant manual data entries that need to be completed by laboratory personnel. The developed software automatically generates a sample code for each sample in a batch, creates printable registration forms for administrative purposes, and stores selected parameters that are passed to the sample analysis program. The software is developed using National Instruments LabVIEW 8.6.

  9. NOTE: Optimization of megavoltage CT scan registration settings for thoracic cases on helical tomotherapy

    NASA Astrophysics Data System (ADS)

    Woodford, Curtis; Yartsev, Slav; Van Dyk, Jake

    2007-08-01

    This study aims to investigate the settings that provide optimum registration accuracy when registering megavoltage CT (MVCT) studies acquired on tomotherapy with planning kilovoltage CT (kVCT) studies of patients with lung cancer. For each experiment, the systematic difference between the actual and planned positions of the thorax phantom was determined by setting the phantom up at the planning isocenter, generating and registering an MVCT study. The phantom was translated by 5 or 10 mm, MVCT scanned, and registration was performed again. A root-mean-square equation that calculated the residual error of the registration based on the known shift and systematic difference was used to assess the accuracy of the registration process. The phantom study results for 18 combinations of different MVCT/kVCT registration options are presented and compared to clinical registration data from 17 lung cancer patients. MVCT studies acquired with coarse (6 mm), normal (4 mm) and fine (2 mm) slice spacings could all be registered with similar residual errors. No specific combination of resolution and fusion selection technique resulted in a lower residual error. A scan length of 6 cm with any slice spacing registered with the full image fusion selection technique and fine resolution will result in a low residual error most of the time. On average, large corrections made manually by clinicians to the automatic registration values are infrequent. Small manual corrections within the residual error averages of the registration process occur, but their impact on the average patient position is small. Registrations using the full image fusion selection technique and fine resolution of 6 cm MVCT scans with coarse slices have a low residual error, and this strategy can be clinically used for lung cancer patients treated on tomotherapy. Automatic registration values are accurate on average, and a quick verification on a sagittal MVCT slice should be enough to detect registration outliers.
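
    A minimal sketch of the residual-error calculation described above, under the assumption that the residual is the root-mean-square of what is left of the registration-reported shift after removing the known applied translation and the systematic setup difference:

```python
import numpy as np

def rms_residual(registered_shift_mm, applied_shift_mm, systematic_diff_mm):
    """Root-mean-square residual of an MVCT-to-kVCT registration: the shift
    reported by the registration should recover the known translation applied
    to the phantom plus the systematic setup difference; the RMS of whatever
    is left over is taken as the residual error."""
    leftover = (np.asarray(registered_shift_mm, float)
                - np.asarray(applied_shift_mm, float)
                - np.asarray(systematic_diff_mm, float))
    return float(np.sqrt(np.mean(leftover ** 2)))

# e.g. rms_residual([-5.3, 0.4, 0.1], [-5.0, 0.0, 0.0], [-0.2, 0.3, 0.0])
```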

  10. A fast alignment method for breast MRI follow-up studies using automated breast segmentation and current-prior registration

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Strehlow, Jan; Rühaak, Jan; Weiler, Florian; Diez, Yago; Gubern-Merida, Albert; Diekmann, Susanne; Laue, Hendrik; Hahn, Horst K.

    2015-03-01

    In breast cancer screening for high-risk women, follow-up magnetic resonance images (MRI) are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in both current and prior studies, a reliable alignment method between follow-up studies is desirable. However, long time interval, different scanners and imaging protocols, and varying breast compression can result in a large deformation, which challenges the registration process. In this work, we present a fast and robust spatial alignment framework, which combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract the breast masks that are used to obtain an initial affine transform. Then, a non-rigid registration algorithm using normalized gradient fields as similarity measure together with curvature regularization is applied. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, the volume overlap and boundary surface distance metrics are calculated, resulting in an average Dice Similarity Coefficient (DSC) of 0.96 and root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, for each subject a radiologist annotated 10 pairs of markers in the current and prior studies representing corresponding anatomical locations. The average distance error of marker pairs dropped from 67.37 mm to 10.86 mm after applying registration.
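
    The two accuracy measures used here are straightforward to compute once the masks and marker pairs are available; the sketch below shows the Dice similarity coefficient and the marker-pair distance errors in plain numpy (a generic illustration, not the evaluation code of the study).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Volume overlap (DSC) between two binary breast masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def marker_distance_errors_mm(markers_current, markers_prior_mapped):
    """Euclidean distances (mm) between annotated marker pairs after the
    prior-study markers have been mapped through the estimated transform."""
    diff = np.asarray(markers_current, float) - np.asarray(markers_prior_mapped, float)
    return np.linalg.norm(diff, axis=1)
```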

  11. Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan

    NASA Astrophysics Data System (ADS)

    Keihaninejad, Shiva; Heckemann, Rolf A.; Gousias, Ioannis S.; Rueckert, Daniel; Aljabar, Paul; Hajnal, Joseph V.; Hammers, Alexander

    2009-02-01

    A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related changes. We have developed a new method, based on established algorithms for automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) aged 20 to 90 years, with equal numbers of men and women, and data from two different scanners (1.5T, 3T), using the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM and CSF) to initialise the warping based on label consistency similarity before feeding this into the previous normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combining the 30 propagated atlases using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76 and r_female = 0.58, and, for hippocampal volume, r_male = -0.6 and r_female = -0.4 (all p < 0.01).
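
    The decision-fusion step, reduced to a per-voxel majority vote over the 30 propagated label volumes, can be sketched as follows (a simplified illustration; the label volumes are assumed to share the same shape and integer label convention):

```python
import numpy as np

def decision_fusion(propagated_label_volumes):
    """Per-voxel majority vote over label volumes propagated from the atlases
    (each an integer-labelled array of identical shape, labels 0..N)."""
    stack = np.stack(propagated_label_volumes)           # (n_atlases, ...)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int16)
    for label in range(n_labels):
        votes[label] = (stack == label).sum(axis=0)
    return votes.argmax(axis=0).astype(stack.dtype)
```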

  12. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Silva, T; Ketcha, M; Siewerdsen, J H

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range were 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
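
    For reference, the conventional gradient correlation (GC) metric against which the new variants are compared is typically defined as the mean of the normalized cross-correlations of the image gradients along the two axes; a plain-numpy sketch of that standard form (not the authors' GO/TGC variants) is given below.

```python
import numpy as np

def _ncc(a, b):
    """Normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(drr, radiograph):
    """Standard gradient correlation: mean NCC of the image gradients taken
    along the two image axes of the projected (DRR) and measured radiograph."""
    g0_a, g1_a = np.gradient(np.asarray(drr, float))
    g0_b, g1_b = np.gradient(np.asarray(radiograph, float))
    return 0.5 * (_ncc(g0_a, g0_b) + _ncc(g1_a, g1_b))
```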

  13. Optimizing image registration and infarct definition in stroke research.

    PubMed

    Harston, George W J; Minks, David; Sheerin, Fintan; Payne, Stephen J; Chappell, Michael; Jezzard, Peter; Jenkinson, Mark; Kennedy, James

    2017-03-01

    Accurate representation of final infarct volume is essential for assessing the efficacy of stroke interventions in imaging-based studies. This study defines the impact of image registration methods used at different timepoints following stroke, and the implications for infarct definition in stroke research. Patients presenting with acute ischemic stroke were imaged serially using magnetic resonance imaging. Infarct volume was defined using four metrics: manually on 24-h b1000 imaging and on 1-week and 1-month T2-weighted FLAIR, and automatically using predefined thresholds of ADC at 24 h. Infarct overlap statistics and volumes were compared across timepoints following both rigid body and nonlinear image registration to the presenting MRI. The effect of nonlinear registration on a hypothetical trial sample size was calculated. Thirty-seven patients were included. Nonlinear registration improved infarct overlap statistics and the consistency of total infarct volumes across timepoints, and reduced infarct volumes by 4.0 mL (13.1%) and 7.1 mL (18.2%) at 24 h and 1 week, respectively, compared to rigid body registration. Infarct volume at 24 h, defined using a predetermined ADC threshold, was less sensitive to infarction than b1000 imaging. One-week T2-weighted FLAIR imaging was the most accurate representation of final infarct volume. Nonlinear registration reduced the hypothetical trial sample size, independent of infarct volume, by an average of 13%. Nonlinear image registration may offer the opportunity of improving the accuracy of infarct definition in serial imaging studies compared to rigid body registration, helping to overcome the challenges of anatomical distortions at subacute timepoints, and reducing sample size for imaging-based clinical trials.

  14. A 4D biomechanical lung phantom for joint segmentation/registration evaluation

    NASA Astrophysics Data System (ADS)

    Markel, Daniel; Levesque, Ives; Larkin, Joe; Léger, Pierre; El Naqa, Issam

    2016-10-01

    At present, there exist few openly available methods for evaluating simultaneous segmentation and registration algorithms. Such combined techniques make it possible to track the tumor in complex settings such as adaptive radiotherapy. We have produced a quality assurance platform for evaluating this specific subset of algorithms using a preserved porcine lung, constructed such that it is multi-modality compatible: positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI). A computer-controlled respirator was constructed to pneumatically manipulate the lungs in order to replicate human breathing traces. A registration ground truth was provided using an in-house bifurcation tracking pipeline. Segmentation ground truth was provided by synthetic multi-compartment lesions simulating biologically active tumor, background tissue and a necrotic core. The bifurcation tracking pipeline results were compared to digital deformations and used to evaluate three registration algorithms: diffeomorphic demons, fast symmetric forces demons and MiMVista's deformable registration tool. Three segmentation algorithms were also evaluated: the Chan-Vese level-set method, a hybrid technique and the multi-valued level-set algorithm. The respirator was able to replicate three separate breathing traces with a mean accuracy of 2-2.2%. Bifurcation tracking error was found to be sub-voxel when using human CT data for displacements up to 6.5 cm and approximately 1.5 voxel widths for displacements up to 3.5 cm for the porcine lungs. For the fast-symmetric, diffeomorphic and MiMVista registration algorithms, mean geometric errors were found to be 0.430 ± 0.001, 0.416 ± 0.001 and 0.605 ± 0.002 voxel widths, respectively, using the vector field differences, and 0.4 ± 0.2, 0.4 ± 0.2 and 0.6 ± 0.2 voxel widths using the bifurcation tracking pipeline. The proposed phantom was found sufficient for accurate evaluation of registration and segmentation algorithms. The use of automatically generated anatomical landmarks can eliminate the time and potential inaccuracy of manual landmark selection by expert observers.

  15. Alternative face models for 3D face registration

    NASA Astrophysics Data System (ADS)

    Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale

    2007-01-01

    3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a couple of well-selected AFMs can trade-off computation time with accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.

  16. Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; Le Moigne, Jacqueline

    2005-01-01

    The problem of image registration, or the alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast, and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times and that would provide subpixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the bandpass wavelets obtained from the steerable pyramid due to Simoncelli perform best in terms of accuracy and consistency, while the low-pass wavelets obtained from the same pyramid give the best results in terms of the radius of convergence. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.
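
    The coarse-to-fine use of a multiresolution pyramid can be illustrated with a simple Gaussian (low-pass) pyramid and a translational model, as below; this is a stand-in for the steerable-pyramid features studied in the paper, and the level count and upsampling factor are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation
from skimage.transform import pyramid_gaussian

def coarse_to_fine_shift(reference, target, levels=4):
    """Estimate a translation between two grayscale images coarse-to-fine on a
    Gaussian (low-pass) pyramid, refining the offset at each finer level."""
    ref_pyr = list(pyramid_gaussian(reference, max_layer=levels - 1))
    tgt_pyr = list(pyramid_gaussian(target, max_layer=levels - 1))
    offset = np.zeros(2)
    for ref_lvl, tgt_lvl in zip(reversed(ref_pyr), reversed(tgt_pyr)):
        offset *= 2.0                                    # carry estimate to finer level
        warped = nd_shift(tgt_lvl, offset)               # apply current estimate
        residual, _, _ = phase_cross_correlation(ref_lvl, warped, upsample_factor=10)
        offset += residual
    return offset                                        # (row, col) shift of target
```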

  17. Use of Multi-Resolution Wavelet Feature Pyramids for Automatic Registration of Multi-Sensor Imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; LeMoigne, Jacqueline

    2003-01-01

    The problem of image registration, or alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times, and that would provide sub-pixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the band-pass wavelets obtained from the Steerable Pyramid due to Simoncelli perform better than two types of low-pass pyramids when the images being registered have a relatively small amount of nonlinear radiometric variation between them. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.

  18. Mapping and localization for extraterrestrial robotic explorations

    NASA Astrophysics Data System (ADS)

    Xu, Fengliang

    In the exploration of an extraterrestrial environment such as Mars, orbital data, such as high-resolution imagery from the Mars Orbital Camera-Narrow Angle (MOC-NA), laser ranging data from the Mars Orbital Laser Altimeter (MOLA), and multi-spectral imagery from the Thermal Emission Imaging System (THEMIS), play increasingly important roles. However, these remote sensing techniques can never replace the role of landers and rovers, which can provide a close-up and inside view. Similarly, orbital mapping cannot compete with ground-level close-range mapping in resolution, precision, and speed. This dissertation addresses two tasks related to robotic extraterrestrial exploration: mapping and rover localization. Image registration is also discussed as an important aspect of both. Techniques from computer vision and photogrammetry are applied for automation and precision. Image registration is classified into three sub-categories: intra-stereo, inter-stereo, and cross-site, according to the relationship between stereo images. In intra-stereo registration, which is the most fundamental sub-category, interest point-based registration and verification by parallax continuity in the principal direction are proposed. Two other techniques, inter-scanline search with constrained dynamic programming for far-range matching and Markov Random Field (MRF) based registration for large terrain variation, are explored as possible improvements. Mapping using rover ground images mainly involves the generation of a Digital Terrain Model (DTM) and an ortho-rectified map (orthomap). The first task is to derive the spatial distribution statistics from the first panorama and model the DTM with a dual polynomial model. This model is used for interpolation of the DTM, using Kriging in the close range and a Triangular Irregular Network (TIN) in the far range. To generate a uniformly illuminated orthomap from the DTM, a least-squares-based automatic intensity balancing method is proposed. Finally, a seamless orthomap is constructed by a split-and-merge technique: the mapped area is split or subdivided into small regions of image overlap, then each small map piece is processed and all of the pieces are merged together to form a seamless map. Rover localization has three stages, all of which use a least-squares adjustment procedure: (1) an initial localization, accomplished by adjustment over features common to rover images and orbital images, (2) an adjustment of image pointing angles at a single site through inter- and intra-stereo tie points, and (3) an adjustment of the rover traverse through manual cross-site tie points. The first stage is based on adjustment of the observation angles of features. The second and third stages are based on bundle adjustment. In the third stage, an incremental adjustment method is proposed. Automation in rover localization includes automatic intra/inter-stereo tie point selection, computer-assisted cross-site tie point selection, and automatic verification of accuracy. (Abstract shortened by UMI.)

  19. Nineteen hundred seventy three significant accomplishments. [Landsat satellite data applications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Data collected by the Skylab remote sensing satellites was used to develop applications techniques and to combine automatic data classification with statistical clustering methods. Continuing research was concentrated in the correlation and registration of data products and in the definition of the atmospheric effects on remote sensing. The causes of errors encountered in the automated classification of agricultural data are identified. Other applications in forestry, geography, environmental geology, and land use are discussed.

  20. A new markerless patient-to-image registration method using a portable 3D scanner.

    PubMed

    Fan, Yifeng; Jiang, Dongsheng; Wang, Manning; Song, Zhijian

    2014-10-01

    Patient-to-image registration is critical to providing surgeons with reliable guidance information in the application of image-guided neurosurgery systems. The conventional point-matching registration method, which is based on skin markers, requires expensive and time-consuming logistic support. Surface-matching registration with facial surface scans is an alternative method, but the registration accuracy is unstable and the error in the more posterior parts of the head is usually large because the scan range is limited. This study proposes a new surface-matching method using a portable 3D scanner to acquire a point cloud of the entire head to perform the patient-to-image registration. A new method for transforming the scan points from the device space into the patient space without calibration and tracking was developed. Five positioning targets were attached on a reference star, and their coordinates in the patient space were measured prior. During registration, the authors moved the scanner around the head to scan its entire surface as well as the positioning targets, and the scanner generated a unique point cloud in the device space. The coordinates of the positioning targets in the device space were automatically detected by the scanner, and a spatial transformation from the device space to the patient space could be calculated by registering them to their coordinates in the patient space that had been measured prior. A three-step registration algorithm was then used to register the patient space to the image space. The authors evaluated their method on a rigid head phantom and an elastic head phantom to verify its practicality and to calculate the target registration error (TRE) in different regions of the head phantoms. The authors also conducted an experiment with a real patient's data to test the feasibility of their method in the clinical environment. In the phantom experiments, the mean fiducial registration error between the device space and the patient space, the mean surface registration error, and the mean TRE of 15 targets on the surface of each phantom were 0.34 ± 0.01 mm and 0.33 ± 0.02 mm, 1.17 ± 0.02 mm and 1.34 ± 0.10 mm, and 1.06 ± 0.11 mm and 1.48 ± 0.21 mm, respectively. When grouping the targets according to their positions on the head, high accuracy was achieved in all parts of the head, and the TREs were similar across different regions. The authors compared their method with the current surface registration methods that use only a part of the facial surface on the elastic phantom, and the mean TRE of 15 targets was 1.48 ± 0.21 mm and 1.98 ± 0.53 mm, respectively. In a clinical experiment, the mean TRE of seven targets on the patient's head surface was 1.92 ± 0.18 mm, which was sufficient to meet clinical requirements. The proposed surface-matching registration method provides sufficient registration accuracy even in the posterior area of the head. The 3D point cloud of the entire head, including the facial surface and the back of the head, can be easily acquired using a portable 3D scanner. The scanner does not need to be calibrated prior or tracked by the optical tracking system during scanning.
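
    The transform from device space to patient space described here is a standard point-based rigid (Kabsch/Procrustes) fit to the five positioning targets; a generic numpy sketch is given below (an illustration of the standard technique, not the authors' three-step registration algorithm).

```python
import numpy as np

def rigid_transform_from_points(device_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping points measured in the
    scanner/device space onto their known coordinates in the patient space,
    e.g. the five positioning targets on the reference star."""
    src = np.asarray(device_pts, float)
    dst = np.asarray(patient_pts, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))               # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t                                          # device point p maps to r @ p + t
```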

  1. Automatic map generalisation from research to production

    NASA Astrophysics Data System (ADS)

    Nyberg, Rose; Johansson, Mikael; Zhang, Yang

    2018-05-01

    The manual work of map generalisation is known to be a complex and time-consuming task. With the development of technology and societies, the demands for more flexible map products of higher quality are growing. The Swedish mapping, cadastral and land registration authority Lantmäteriet has manual production lines for databases at five different scales: 1 : 10 000 (SE10), 1 : 50 000 (SE50), 1 : 100 000 (SE100), 1 : 250 000 (SE250) and 1 : 1 million (SE1M). To streamline this work, Lantmäteriet started a project to automatically generalise geographic information. The planned timespan for the project is 2015-2022. Below, the project background and the methods for automatic generalisation are described. The paper closes with a description of the results and conclusions.

  2. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians' Patient Care Decisions.

    PubMed

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.

  3. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

    This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by means of superposition of anatomical landmarks. Each team performed the registration jointly and saved it. The two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations, and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and the registration errors determined. The LC algorithm proved accurate for CT-MRI registrations in phantoms, but exceeded the limiting values in 3 of 10 patients. The MI algorithm proved accurate for CT-MRI and CT-SPECT registrations in phantoms; the limiting values were exceeded in one CT-MRI case and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm, for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.

  4. PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI.

    PubMed

    Alansary, Amir; Rajchl, Martin; McDonagh, Steven G; Murgasova, Maria; Damodaram, Mellisa; Lloyd, David F A; Davidson, Alice; Rutherford, Mary; Hajnal, Joseph V; Rueckert, Daniel; Kainz, Bernhard

    2017-10-01

    In this paper, we present a novel method for the correction of motion artifacts that are present in fetal magnetic resonance imaging (MRI) scans of the whole uterus. Contrary to current slice-to-volume registration (SVR) methods, requiring an inflexible anatomical enclosure of a single investigated organ, the proposed patch-to-volume reconstruction (PVR) approach is able to reconstruct a large field of view of non-rigidly deforming structures. It relaxes rigid motion assumptions by introducing a specific amount of redundant information that is exploited with parallelized patchwise optimization, super-resolution, and automatic outlier rejection. We further describe and provide an efficient parallel implementation of PVR allowing its execution within reasonable time on commercially available graphics processing units, enabling its use in the clinical practice. We evaluate PVR's computational overhead compared with standard methods and observe improved reconstruction accuracy in the presence of affine motion artifacts compared with conventional SVR in synthetic experiments. Furthermore, we have evaluated our method qualitatively and quantitatively on real fetal MRI data subject to maternal breathing and sudden fetal movements. We evaluate peak-signal-to-noise ratio, structural similarity index, and cross correlation with respect to the originally acquired data and provide a method for visual inspection of reconstruction uncertainty. We further evaluate the distance error for selected anatomical landmarks in the fetal head, as well as calculating the mean and maximum displacements resulting from automatic non-rigid registration to a motion-free ground truth image. These experiments demonstrate a successful application of PVR motion compensation to the whole fetal body, uterus, and placenta.

  5. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most of the robust segmentation methods, such as statistical shape models (SSM), require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms such as the Hough transform and RANdom SAmple Consensus (RANSAC). The proposed method is divided into 3 steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between the primitives of the SSM and the CT image to initialize the SSM in the CT image, and (3) fitting of the SSM to the CT image edges using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape positions to initialize the SSM in the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
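
    Step (1), detecting the femoral head as a sphere, can be sketched with a plain RANSAC sphere fit over bone-surface points, as below. The thresholds, iteration count and algebraic fit are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: ||p||^2 = 2 c.p + (r^2 - ||c||^2)."""
    p = np.asarray(points, float)
    a = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def ransac_sphere(points, n_iter=500, tol_mm=1.0, seed=0):
    """RANSAC sphere detection, e.g. for locating the femoral head among
    bone-surface points extracted from the CT volume."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, float)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 4, replace=False)]
        center, radius = fit_sphere(sample)
        dist = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = dist < tol_mm
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_sphere(points[best_inliers])              # refit on the consensus set
```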

  6. Automatic registration of multi-modal microscopy images for integrative analysis of prostate tissue sections.

    PubMed

    Lippolis, Giuseppe; Edsjö, Anders; Helczynski, Leszek; Bjartell, Anders; Overgaard, Niels Chr

    2013-09-05

    Prostate cancer is one of the leading causes of cancer-related deaths. For diagnosis, predicting the outcome of the disease, and for assessing potential new biomarkers, pathologists and researchers routinely analyze histological samples. Morphological and molecular information may be integrated by aligning microscopic histological images in a multiplex fashion. This process is usually time-consuming and results in intra- and inter-user variability. The aim of this study is to investigate the feasibility of using modern image analysis methods for automated alignment of microscopic images from differently stained adjacent paraffin sections from prostatic tissue specimens. Tissue samples, obtained from biopsy or radical prostatectomy, were sectioned and stained with either hematoxylin & eosin (H&E), immunohistochemistry for p63 and AMACR, or Time Resolved Fluorescence (TRF) for the androgen receptor (AR). Image pairs were aligned allowing for translation, rotation and scaling. The registration was performed automatically by first detecting landmarks in both images using the scale-invariant feature transform (SIFT), followed by the well-known RANSAC protocol for finding point correspondences, and finally aligned by a Procrustes fit. The registration results were evaluated using both visual and quantitative criteria as defined in the text. Three experiments were carried out. First, images of consecutive tissue sections stained with H&E and p63/AMACR were successfully aligned in 85 of 88 cases (96.6%). The failures occurred in 3 out of 13 cores with highly aggressive cancer (Gleason score ≥ 8). Second, TRF and H&E image pairs were aligned correctly in 103 out of 106 cases (97%). The third experiment considered the alignment of image pairs with the same staining (H&E) coming from a stack of 4 sections. The success rate for alignment dropped from 93.8% in adjacent sections to 22% for the sections furthest away. The proposed method is both reliable and fast and therefore well suited for automatic segmentation and analysis of specific areas of interest, combining morphological information with protein expression data from three consecutive tissue sections. Finally, the performance of the algorithm seems to be largely unaffected by the Gleason grade of the prostate tissue samples examined, at least up to Gleason score 7.
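
    A compact OpenCV sketch of the same pipeline idea is shown below: SIFT keypoints, Lowe ratio-test matching, and a RANSAC-estimated similarity transform (translation, rotation, scale). Note that cv2.estimateAffinePartial2D is used here in place of the Procrustes fit described by the authors, and the thresholds are illustrative.

```python
import cv2
import numpy as np

def align_similarity(fixed_gray, moving_gray):
    """SIFT keypoints + Lowe ratio test + RANSAC similarity transform
    (translation, rotation, uniform scale) between two serial-section images."""
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(fixed_gray, None)
    kp_m, des_m = sift.detectAndCompute(moving_gray, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_m, des_f, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    matrix, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=5.0)

    aligned = cv2.warpAffine(moving_gray, matrix,
                             (fixed_gray.shape[1], fixed_gray.shape[0]))
    return matrix, aligned
```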

  7. 4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas

    2016-04-01

    The last decade has witnessed extensive applications of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal, near real-time LiDAR (4D-LiDAR) for environmental monitoring. There is large potential for applying 4D-LiDAR to landscape objects with high and varying rates of change (e.g. plant growth) and also to phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the large number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or movement of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or the scanner). Secondly, we will show our empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and also the limitations of fully automated, near real-time 4D LiDAR monitoring in the geosciences.

  8. A two-step framework for the registration of HE stained and FTIR images

    NASA Astrophysics Data System (ADS)

    Peñaranda, Francisco; Naranjo, Valery; Verdú, Rafaél.; Lloyd, Gavin R.; Nallala, Jayakrupakar; Stone, Nick

    2016-03-01

    FTIR spectroscopy is an emerging technology with high potential for cancer diagnosis but with particular physical phenomena that require special processing. Little work has been done in the field with the aim of registering hyperspectral Fourier-Transform Infrared (FTIR) spectroscopic images and Hematoxylin and Eosin (HE) stained histological images of contiguous slices of tissue. This registration is necessary to transfer the location of relevant structures that the pathologist may identify in the gold-standard HE images. A two-step registration framework is presented where a representative gray image extracted from the FTIR hypercube is used as an input. This representative image, which must have a spatial contrast as similar as possible to a gray image obtained from the HE image, is calculated from the spectral variation in the fingerprint region. In the first step of the registration algorithm a similarity transformation is estimated from interest points, which are automatically detected by the popular SURF algorithm. In the second stage, a variational registration framework defined in the frequency domain compensates for local anatomical variations between both images. After proper tuning of a few parameters, the proposed registration framework works in an automated way. The method was tested on 7 samples of colon tissue in different stages of cancer. Very promising qualitative and quantitative results were obtained (a mean correlation ratio of 92.16% with a standard deviation of 3.10%).

  9. Automatic aortic root segmentation in CTA whole-body dataset

    NASA Astrophysics Data System (ADS)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.

  10. Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features

    PubMed Central

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-01-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159

  11. Experimental comparison of landmark-based methods for 3D elastic registration of pre- and postoperative liver CT data

    NASA Astrophysics Data System (ADS)

    Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.

    2009-02-01

    The qualitative and quantitative comparison of pre- and postoperative image data is an important possibility to validate surgical procedures, in particular, if computer assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach, if high accuracy and reliability is difficult to achieve by automatic registration approaches. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. Concerning pre- and postoperative CT data of oncological liver resections the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along the vessels. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS) and approximating GEBS on landmarks at vessel branchings as well as approximating GEBS on the introduced vessel segment landmarks is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy if combined with branching landmarks. For a low number of landmarks segment landmarks are even superior.

  12. Shape-based diffeomorphic registration on hippocampal surfaces using Beltrami holomorphic flow.

    PubMed

    Lui, Lok Ming; Wong, Tsz Wai; Thompson, Paul; Chan, Tony; Gu, Xianfeng; Yau, Shing-Tung

    2010-01-01

    We develop a new algorithm to automatically register hippocampal (HP) surfaces with complete geometric matching, avoiding the need to manually label landmark features. A good registration depends on a reasonable choice of shape energy that measures the dissimilarity between surfaces. In our work, we first propose a complete shape index using the Beltrami coefficient and curvatures, which measures subtle local differences. The proposed shape energy is zero if and only if two shapes are identical up to a rigid motion. We then seek the best surface registration by minimizing the shape energy. We propose a simple representation of surface diffeomorphisms using Beltrami coefficients, which simplifies the optimization process. We then iteratively minimize the shape energy using the proposed Beltrami Holomorphic flow (BHF) method. Experimental results on 212 HP of normal and diseased (Alzheimer's disease) subjects show our proposed algorithm is effective in registering HP surfaces with complete geometric matching. The proposed shape energy can also capture local shape differences between HP for disease analysis.

  13. Distance-Dependent Multimodal Image Registration for Agriculture Tasks

    PubMed Central

    Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad

    2015-01-01

    Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
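
    The compact DDTM representation described above can be sketched as a per-coefficient regression over distance. The Python snippet below is illustrative only (placeholder calibration matrices and an assumed polynomial degree), not the calibration procedure used in the paper.

      import numpy as np

      # Hypothetical calibration: one 3x3 projective matrix per sensor-to-target
      # distance (metres), e.g. estimated from the detected control points.
      distances = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      H_per_distance = np.stack([np.eye(3) + 0.01 * d * np.random.randn(3, 3)
                                 for d in distances])   # placeholder data

      # Regress each of the 9 matrix entries as a low-order polynomial of distance.
      degree = 2
      coeff_models = [np.polyfit(distances, H_per_distance[:, i, j], degree)
                      for i in range(3) for j in range(3)]

      def ddtm(d):
          """Evaluate the distance-dependent transformation matrix at distance d."""
          entries = [np.polyval(c, d) for c in coeff_models]
          H = np.array(entries).reshape(3, 3)
          return H / H[2, 2]   # normalize the homography

      print(ddtm(2.5))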

  14. MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences

    PubMed Central

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.

    2016-01-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
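
    A minimal Python sketch of the minimum-spanning-tree idea, assuming a simple NCC-based frame dissimilarity and placeholder frames; the anchor is picked here as the tree node with minimal total path length, which is one plausible reading of the automatic anchor selection rather than the authors' exact rule.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path, breadth_first_order

      def dissimilarity(a, b):
          # 1 - normalized cross-correlation as a simple frame dissimilarity.
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          return 1.0 - float((a * b).mean())

      frames = [np.random.rand(64, 64) for _ in range(6)]   # placeholder sequence
      n = len(frames)
      W = np.zeros((n, n))
      for i in range(n):
          for j in range(i + 1, n):
              W[i, j] = W[j, i] = dissimilarity(frames[i], frames[j])

      mst = minimum_spanning_tree(W)            # sparse MST over the frame graph
      tree = mst + mst.T                        # symmetric form for path queries
      D = shortest_path(tree, directed=False)
      anchor = int(np.argmin(D.sum(axis=1)))    # automatically chosen anchor frame
      order, parents = breadth_first_order(tree, anchor, directed=False)
      print("anchor:", anchor, "registration order:", order.tolist())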

  15. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    PubMed

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.

  16. Image registration with auto-mapped control volumes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, Eduard; Xing Lei

    2006-04-15

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric, and a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was employed to optimize the function to find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In our deformable calculation, the mapped control volumes are treated as the nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional B-spline deformable calculation. For deformable registration, the established correspondence by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of inhale and exhale phases of a lung 4D CT. Algorithm convergence was confirmed by starting the registration calculations from a large number of initial transformation parameters. An accuracy of approximately 2 mm was achieved for both deformable and rigid registration. The proposed image registration method greatly reduces the complexity involved in the determination of homologous control points and allows us to minimize the subjectivity and uncertainty associated with the current manual interactive approach. Patient studies have indicated that the two-step registration technique is fast, reliable, and provides a valuable tool to facilitate both rigid and nonrigid image registrations.
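
    A toy Python sketch of auto-mapping a single control region by maximizing NCC with L-BFGS, restricted here to a 2D translation on synthetic data; the actual method optimizes mappings of 3D control volumes, so this is only a stand-in for the idea.

      import numpy as np
      from scipy.ndimage import shift
      from scipy.optimize import minimize

      def ncc(a, b):
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          return float((a * b).mean())

      # Synthetic "reference" image and a control patch from a translated "model" image.
      yy, xx = np.mgrid[0:128, 0:128].astype(float)
      reference = np.exp(-((xx - 64) ** 2 + (yy - 60) ** 2) / (2 * 15.0 ** 2))
      true_shift = np.array([3.4, -2.1])                        # (row, column) offset
      model_patch = shift(reference, true_shift, order=1)[40:80, 40:80]

      def cost(t):
          # Negative NCC between the shifted reference region and the control patch.
          moved = shift(reference, t, order=1)[40:80, 40:80]
          return -ncc(moved, model_patch)

      res = minimize(cost, x0=np.zeros(2), method='L-BFGS-B',
                     bounds=[(-10.0, 10.0), (-10.0, 10.0)])
      print("recovered shift:", res.x)   # expected to be near true_shift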

  17. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), that extends the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user interaction, or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  18. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking in image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters is needed a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at the rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.

  19. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position information of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs. Results: The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one clinical dataset that included US images of extremely low quality, 22 datasets of various liver status were successfully applied in the evaluation. Experimental results showed that the registration error between the anatomical features of US and preoperative MR images is less than 3 mm on average. The lesion tracking error was also found to be less than 5 mm at maximum. Conclusions: A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver and was applied for indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images that had low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system can provide high lesion tracking accuracy as well as high registration accuracy, at performance levels which were acceptable for various clinical applications.

  20. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.

  1. A fully automatic approach to register mobile mapping and airborne imagery to support the correction of platform trajectories in GNSS-denied urban areas

    NASA Astrophysics Data System (ADS)

    Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George

    2018-07-01

    Mobile Mapping (MM) solutions have become a significant extension to traditional data acquisition methods in recent years. Independently of the sensor carried by the platform, be it laser scanners or cameras, high-resolution data are confronted with poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimates are propagated by IMUs, which are furthermore prone to drift. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but they cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to cost and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation by an adjustment solution. Unlike MM as such, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified by employing well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario and requires several strategies to yield reliable and accurate correspondences. Scale, perspective and content vary strongly between the two image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Even though the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment are presented. It is shown that decimetre accuracy is achievable in a real-data test scenario.

  2. Registration of clinical volumes to beams-eye-view images for real-time tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
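
    The HU-to-electron-density conversion and DRR step can be illustrated with the toy Python sketch below, which uses an assumed piecewise-linear calibration curve and a parallel-beam line integral as a simple stand-in for a true divergent-beam DRR.

      import numpy as np

      def hu_to_electron_density(hu):
          # Hypothetical piecewise-linear calibration curve (relative to water).
          pts_hu = np.array([-1000.0, 0.0, 1000.0, 3000.0])
          pts_rho = np.array([0.0, 1.0, 1.5, 2.5])
          return np.interp(hu, pts_hu, pts_rho)

      def parallel_drr(ct_hu, axis=0):
          # Toy parallel-beam DRR: line integral of electron density along one axis.
          return hu_to_electron_density(ct_hu).sum(axis=axis)

      def ncc(a, b):
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          return float((a * b).mean())

      ct = -1000.0 * np.ones((64, 96, 96))            # placeholder exhale-phase CT (HU)
      ct[:, 30:60, 30:60] = 40.0                      # soft-tissue block as a stand-in target
      drr = parallel_drr(ct, axis=0)
      bev = drr + 0.05 * np.random.randn(*drr.shape)  # placeholder EPID cine frame
      print("NCC(DRR, BEV) =", ncc(drr, bev))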

  3. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need to artificially determine a threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the found rigid-body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
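
    A minimal Python sketch of threshold-free hypothesis scoring in the spirit of LMedS, using random three-point samples and a point-to-point residual as a stand-in for the paper's point-to-surface metric; the data are synthetic and the sampling is plain random sampling rather than BaySAC's conditional scheme.

      import numpy as np

      def rigid_from_points(P, Q):
          # Kabsch/SVD: least-squares rigid transform mapping P onto Q.
          cP, cQ = P.mean(0), Q.mean(0)
          U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, cQ - R @ cP

      rng = np.random.default_rng(1)
      src = rng.random((200, 3)) * 10.0
      R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
      dst = src @ R_true.T + np.array([1.0, 2.0, 0.5])
      dst[::5] += rng.normal(0, 2.0, dst[::5].shape)        # 20% gross outliers

      best, best_cost = None, np.inf
      for _ in range(200):                                  # random-sampling hypotheses
          idx = rng.choice(len(src), 3, replace=False)
          R, t = rigid_from_points(src[idx], dst[idx])
          resid = np.sum((src @ R.T + t - dst) ** 2, axis=1)
          cost = np.median(resid)                           # LMedS: threshold-free cost
          if cost < best_cost:
              best, best_cost = (R, t), cost
      print("median squared residual of best hypothesis:", best_cost)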

  4. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
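
    The regression model for |ΔD| can be sketched as an ordinary least-squares fit on the predictors named above; the Python snippet below uses synthetic placeholder data and coefficients, not the study's measurements.

      import numpy as np

      # Placeholder per-landmark data: registration error (mm), planned dose (Gy),
      # local dose standard deviation (Gy), and a 0/1 algorithm indicator.
      rng = np.random.default_rng(2)
      n = 500
      d_e  = rng.gamma(2.0, 3.0, n)             # spatial registration error d_E
      dose = rng.uniform(0.0, 70.0, n)          # planned dose D
      sd   = rng.uniform(0.0, 3.0, n)           # neighborhood dose SD
      algo = rng.integers(0, 2, n)              # which algorithm was used
      dD   = np.abs(0.4 * d_e + 0.05 * dose + 1.4 * sd + 0.8 * algo
                    + rng.normal(0, 1.0, n))    # synthetic |ΔD| response

      X = np.column_stack([np.ones(n), d_e, dose, sd, algo])
      beta, *_ = np.linalg.lstsq(X, dD, rcond=None)
      print("fitted coefficients [intercept, d_E, D, SD_dose, algorithm]:", beta)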

  5. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    NASA Astrophysics Data System (ADS)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  6. Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Jacobson, M. W.; Goerres, J.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2017-06-01

    A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or due to the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired followed by a series of 7 mobile radiographs with increasing degree of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using projection distance error (PDE) and failure rate (PDE  >  20 mm—i.e. label registered outside vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends capability to cases exhibiting strong changes in spinal curvature.

  7. Deformably registering and annotating whole CLARITY brains to an atlas via masked LDDMM

    NASA Astrophysics Data System (ADS)

    Kutten, Kwame S.; Vogelstein, Joshua T.; Charon, Nicolas; Ye, Li; Deisseroth, Karl; Miller, Michael I.

    2016-04-01

    The CLARITY method renders brains optically transparent to enable high-resolution imaging in the structurally intact brain. Anatomically annotating CLARITY brains is necessary for discovering which regions contain signals of interest. Manually annotating whole-brain, terabyte CLARITY images is difficult, time-consuming, subjective, and error-prone. Automatically registering CLARITY images to a pre-annotated brain atlas offers a solution, but is difficult for several reasons. Removal of the brain from the skull and subsequent storage and processing cause variable non-rigid deformations, thus compounding inter-subject anatomical variability. Additionally, the signal in CLARITY images arises from various biochemical contrast agents which only sparsely label brain structures. This sparse labeling challenges the most commonly used registration algorithms that need to match image histogram statistics to the more densely labeled histological brain atlases. The standard method is a multiscale Mutual Information B-spline algorithm that dynamically generates an average template as an intermediate registration target. We determined that this method performs poorly when registering CLARITY brains to the Allen Institute's Mouse Reference Atlas (ARA), because the image histogram statistics are poorly matched. Therefore, we developed a method (Mask-LDDMM) for registering CLARITY images, that automatically finds the brain boundary and learns the optimal deformation between the brain and atlas masks. Using Mask-LDDMM without an average template provided better results than the standard approach when registering CLARITY brains to the ARA. The LDDMM pipelines developed here provide a fast automated way to anatomically annotate CLARITY images; our code is available as open source software at http://NeuroData.io.

  8. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  9. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd

    Neutron Activation Analysis (NAA) has been established at Nuclear Malaysia since the 1980s. Most of the established procedures were carried out manually, including sample registration. The samples were recorded manually in a logbook and given an ID number. Then all samples, standards, SRMs and blanks were recorded on the irradiation vial and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project on ‘Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)’. The objective of the project is to create PC-based data-entry software for the sample preparation stage. This is an effective way to replace the redundant manual data entries that need to be completed by laboratory personnel. The software automatically generates a sample code for each sample in a batch, creates printable registration forms for administration purposes, and stores selected parameters that are passed to the sample analysis program. The software is developed using National Instruments LabVIEW 8.6.
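
    A small Python sketch of the batch sample-code generation and printable-form idea; the actual software was written in LabVIEW, and the field names, code format and CSV output below are assumptions for illustration only.

      import csv
      from datetime import date

      def register_batch(batch_id, samples, out_csv):
          """Assign sequential sample codes for one NAA batch and write a
          printable registration form as CSV (illustrative layout only)."""
          rows = []
          for i, s in enumerate(samples, start=1):
              code = f"{batch_id}-{date.today():%y%m%d}-{i:03d}"
              rows.append({"sample_code": code, "description": s["description"],
                           "type": s["type"], "mass_g": s["mass_g"]})
          with open(out_csv, "w", newline="") as f:
              writer = csv.DictWriter(f, fieldnames=rows[0].keys())
              writer.writeheader()
              writer.writerows(rows)
          return [r["sample_code"] for r in rows]

      codes = register_batch("NAA", [
          {"description": "soil sample A", "type": "sample",   "mass_g": 0.152},
          {"description": "reference SRM", "type": "standard", "mass_g": 0.100},
          {"description": "empty vial",    "type": "blank",    "mass_g": 0.000},
      ], "registration_form.csv")
      print(codes)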

  10. Lung texture in serial thoracic CT scans: Assessment of change introduced by image registration

    PubMed Central

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Labby, Zacariah E.; Pelizzari, Charles A.; Straus, Christopher; Sensakovic, William F.; Ludwig, Michelle; Armato, Samuel G.

    2012-01-01

    Purpose: The aim of this study was to quantify the effect of four image registration methods on lung texture features extracted from serial computed tomography (CT) scans obtained from healthy human subjects. Methods: Two chest CT scans acquired at different time points were collected retrospectively for each of 27 patients. Following automated lung segmentation, each follow-up CT scan was registered to the baseline scan using four algorithms: (1) rigid, (2) affine, (3) B-splines deformable, and (4) demons deformable. The registration accuracy for each scan pair was evaluated by measuring the Euclidean distance between 150 identified landmarks. On average, 1432 spatially matched 32 × 32-pixel region-of-interest (ROI) pairs were automatically extracted from each scan pair. First-order, fractal, Fourier, Laws’ filter, and gray-level co-occurrence matrix texture features were calculated in each ROI, for a total of 140 features. Agreement between baseline and follow-up scan ROI feature values was assessed by Bland–Altman analysis for each feature; the range spanned by the 95% limits of agreement of feature value differences was calculated and normalized by the average feature value to obtain the normalized range of agreement (nRoA). Features with small nRoA were considered “registration-stable.” The normalized bias for each feature was calculated from the feature value differences between baseline and follow-up scans averaged across all ROIs in every patient. Because patients had “normal” chest CT scans, minimal change in texture feature values between scan pairs was anticipated, with the expectation of small bias and narrow limits of agreement. Results: Registration with demons reduced the Euclidean distance between landmarks such that only 9% of landmarks were separated by ≥1 mm, compared with rigid (98%), affine (95%), and B-splines (90%). Ninety-nine of the 140 (71%) features analyzed yielded nRoA > 50% for all registration methods, indicating that the majority of feature values were perturbed following registration. Nineteen of the features (14%) had nRoA < 15% following demons registration, indicating relative feature value stability. Student's t-tests showed that the nRoA of these 19 features was significantly larger when rigid, affine, or B-splines registration methods were used compared with demons registration. Demons registration yielded greater normalized bias in feature value change than B-splines registration, though this difference was not significant (p = 0.15). Conclusions: Demons registration provided higher spatial accuracy between matched anatomic landmarks in serial CT scans than rigid, affine, or B-splines algorithms. Texture feature changes calculated in healthy lung tissue from serial CT scans were smaller following demons registration compared with all other algorithms. Though registration altered the values of the majority of texture features, 19 features remained relatively stable after demons registration, indicating their potential for detecting pathologic change in serial CT scans. Combined use of accurate deformable registration using demons and texture analysis may allow for quantitative evaluation of local changes in lung tissue due to disease progression or treatment response. PMID:22894392
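
    The nRoA statistic can be sketched as follows in Python, computing the width of the Bland-Altman 95% limits of agreement normalized by the mean feature value; the ROI feature values below are synthetic placeholders, not the study's data.

      import numpy as np

      def normalized_range_of_agreement(baseline, followup):
          """Bland-Altman style agreement for one texture feature: width of the
          95% limits of agreement of the paired differences, normalized by the
          mean feature value (nRoA), plus a normalized bias, both in percent."""
          baseline, followup = np.asarray(baseline), np.asarray(followup)
          diff = followup - baseline
          loa_width = 2 * 1.96 * diff.std(ddof=1)       # upper LoA minus lower LoA
          mean_value = np.mean((baseline + followup) / 2.0)
          bias = diff.mean() / mean_value
          return 100.0 * loa_width / mean_value, 100.0 * bias

      # Placeholder feature values from matched baseline/follow-up ROI pairs.
      rng = np.random.default_rng(3)
      base = rng.normal(50.0, 5.0, 1432)
      foll = base + rng.normal(0.0, 2.0, 1432)
      nroa, bias = normalized_range_of_agreement(base, foll)
      print(f"nRoA = {nroa:.1f}%, normalized bias = {bias:.2f}%")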

  11. Registering 2D and 3D imaging data of bone during healing.

    PubMed

    Hoerth, Rebecca M; Baum, Daniel; Knötel, David; Prohaska, Steffen; Willie, Bettina M; Duda, Georg N; Hege, Hans-Christian; Fratzl, Peter; Wagermaier, Wolfgang

    2015-04-01

    PURPOSE/AIMS OF THE STUDY: Bone's hierarchical structure can be visualized using a variety of methods. Many techniques, such as light and electron microscopy generate two-dimensional (2D) images, while micro-computed tomography (µCT) allows a direct representation of the three-dimensional (3D) structure. In addition, different methods provide complementary structural information, such as the arrangement of organic or inorganic compounds. The overall aim of the present study is to answer bone research questions by linking information of different 2D and 3D imaging techniques. A great challenge in combining different methods arises from the fact that they usually reflect different characteristics of the real structure. We investigated bone during healing by means of µCT and a couple of 2D methods. Backscattered electron images were used to qualitatively evaluate the tissue's calcium content and served as a position map for other experimental data. Nanoindentation and X-ray scattering experiments were performed to visualize mechanical and structural properties. We present an approach for the registration of 2D data in a 3D µCT reference frame, where scanning electron microscopies serve as a methodic link. Backscattered electron images are perfectly suited for registration into µCT reference frames, since both show structures based on the same physical principles. We introduce specific registration tools that have been developed to perform the registration process in a semi-automatic way. By applying this routine, we were able to exactly locate structural information (e.g. mineral particle properties) in the 3D bone volume. In bone healing studies this will help to better understand basic formation, remodeling and mineralization processes.

  12. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

    Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to those of an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  13. Wooing patients with technology.

    PubMed

    Myers, Michael

    2013-04-01

    Technologies that can give healthcare organizations a marketing advantage with patients include: Registration kiosks that request payment automatically, in a more comfortable environment for both patients and registration staff. Emails that enable patients to schedule initial visits and follow-up care. Secure online messaging platforms that enable patients to obtain timely answers to questions they have for their providers both before and after receiving services.

  14. Measurement of complex joint trajectories using slice-to-volume 2D/3D registration and cine MR

    NASA Astrophysics Data System (ADS)

    Bloch, C.; Figl, M.; Gendrin, C.; Weber, C.; Unger, E.; Aldrian, S.; Birkfellner, W.

    2010-02-01

    A method for studying the in vivo kinematics of complex joints is presented. It is based on automatic fusion of single-slice cine MR images capturing the dynamics and a static MR volume. With the joint at rest, the 3D scan is taken. In these data the anatomical compartments are identified and segmented, resulting in a 3D volume of each individual part. In each of the cine MR images the joint parts are segmented and their pose and position are derived using a 2D/3D slice-to-volume registration to the volumes. The method is tested on the carpal joint because of its complexity and the small but complex motion of its compartments. For a first study, a human cadaver hand was scanned and the method was evaluated with artificially generated slice images. Starting from random initial positions of about 5 mm translational and 12° rotational deviation, 70-90% of the registrations converged successfully to a deviation better than 0.5 mm and 5°. First evaluations using real data from cine MR were promising. The feasibility of the method was demonstrated. However, we experienced difficulties with the segmentation of the cine MR images. We therefore plan to examine different parameters for the image acquisition in future studies.

  15. Contextual Computing: A Bluetooth based approach for tracking healthcare providers in the emergency room.

    PubMed

    Frisby, Joshua; Smith, Vernon; Traub, Stephen; Patel, Vimla L

    2017-01-01

    Hospital Emergency Departments (EDs) frequently experience crowding. One of the factors that contributes to this crowding is the "door to doctor time", which is the time from a patient's registration to when the patient is first seen by a physician. This is also one of the Meaningful Use (MU) performance measures that emergency departments report to the Center for Medicare and Medicaid Services (CMS). Current documentation methods for this measure are inaccurate due to the imprecision in manual data collection. We describe a method for automatically (in real time) and more accurately documenting the door to physician time. Using sensor-based technology, the distance between the physician and the computer is calculated using single-board computers installed in patient rooms, which log each time a Bluetooth signal is seen from a device that the physicians carry. This distance is automatically compared with the accepted room radius to determine, with greater precision, whether the physician is present in the room at the logged time. The logged times, accurate to the second, were compared with physicians' handwritten times, showing automatic recordings to be more precise. This real-time automatic method frees the physician from the extra cognitive load of manually recording data. This method for evaluation of performance is generic and can be used in any other setting outside the ED, and for purposes other than measuring physician time. Copyright © 2016 Elsevier Inc. All rights reserved.
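
    A minimal Python sketch of the presence test, assuming a log-distance path-loss model to convert RSSI to distance; the paper does not specify its distance model, so the constants, log format and timestamps below are illustrative assumptions.

      from datetime import datetime

      def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
          """Estimate distance (m) from RSSI with a log-distance path-loss model.
          tx_power_dbm is the calibrated RSSI at 1 m; both values are assumptions."""
          return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

      def first_presence(events, room_radius_m=2.5):
          """Return the first timestamp at which the physician's badge was within
          the accepted room radius, given (timestamp, rssi) log entries."""
          for ts, rssi in sorted(events):
              if rssi_to_distance(rssi) <= room_radius_m:
                  return ts
          return None

      log = [(datetime(2016, 5, 1, 9, 14, 2), -78),
             (datetime(2016, 5, 1, 9, 16, 40), -71),
             (datetime(2016, 5, 1, 9, 17, 5), -55)]   # placeholder Bluetooth log
      print("door-to-doctor event at:", first_presence(log))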

  16. Spatio-Temporal Regularization for Longitudinal Registration to Subject-Specific 3d Template

    PubMed Central

    Guizard, Nicolas; Fonov, Vladimir S.; García-Lorenzo, Daniel; Nakamura, Kunio; Aubert-Broche, Bérengère; Collins, D. Louis

    2015-01-01

    Neurodegenerative diseases such as Alzheimer's disease present subtle anatomical brain changes before the appearance of clinical symptoms. Manual structure segmentation is long and tedious and although automatic methods exist, they are often performed in a cross-sectional manner where each time-point is analyzed independently. With such analysis methods, bias, error and longitudinal noise may be introduced. Noise due to MR scanners and other physiological effects may also introduce variability in the measurement. We propose to use 4D non-linear registration with spatio-temporal regularization to correct for potential longitudinal inconsistencies in the context of structure segmentation. The major contribution of this article is the use of individual template creation with spatio-temporal regularization of the deformation fields for each subject. We validate our method with different sets of real MRI data, compare it to available longitudinal methods such as FreeSurfer, SPM12, QUARC, TBM, and KNBSI, and demonstrate that spatially local temporal regularization yields more consistent rates of change of global structures resulting in better statistical power to detect significant changes over time and between populations. PMID:26301716

  17. Patient-Specific Biomechanical Modeling for Guidance During Minimally-Invasive Hepatic Surgery.

    PubMed

    Plantefève, Rosalie; Peterlik, Igor; Haouchine, Nazim; Cotin, Stéphane

    2016-01-01

    During minimally-invasive liver surgery, only a partial surface view of the liver is usually provided to the surgeon via the laparoscopic camera. Therefore, it is necessary to estimate the actual position of internal structures such as tumors and vessels from the pre-operative images. Nevertheless, such a task can be highly challenging since, during the intervention, the abdominal organs undergo significant deformations due to the pneumoperitoneum, respiratory and cardiac motion, and the interaction with the surgical tools. Therefore, a reliable automatic system for intra-operative guidance requires fast and reliable registration of the pre- and intra-operative data. In this paper we present a complete pipeline for the registration of pre-operative patient-specific image data to the sparse and incomplete intra-operative data. While the intra-operative data are represented by a point cloud extracted from the stereo-endoscopic images, the pre-operative data are used to reconstruct a biomechanical model, which is necessary for accurate estimation of the position of the internal structures, considering the actual deformations. This model takes into account the patient-specific liver anatomy composed of parenchyma, vascularization and capsule, and is enriched with anatomical boundary conditions transferred from an atlas. The registration process employs the iterative closest point technique together with a penalty-based method. We perform a quantitative assessment based on the evaluation of the target registration error on synthetic data, as well as a qualitative assessment on real patient data. We demonstrate that the proposed registration method provides good results in terms of both accuracy and robustness with respect to the quality of the intra-operative data.
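
    A compact Python sketch of plain rigid ICP on synthetic point clouds, as a stand-in for the alignment step; the paper's pipeline additionally uses a patient-specific biomechanical model and a penalty-based term, which are not reproduced here.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid(P, Q):
          # Least-squares rigid transform (Kabsch) mapping P onto Q.
          cP, cQ = P.mean(0), Q.mean(0)
          U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, cQ - R @ cP

      def icp(source, target, iterations=30):
          """Rigidly align a sparse intra-operative point cloud (source) to a
          pre-operative surface sampling (target) by iterative closest point."""
          tree = cKDTree(target)
          moved = source.copy()
          for _ in range(iterations):
              _, idx = tree.query(moved)                 # closest-point correspondences
              R, t = best_rigid(moved, target[idx])
              moved = moved @ R.T + t
          return moved

      rng = np.random.default_rng(4)
      target = rng.random((1000, 3))
      R0 = np.array([[0.96, -0.28, 0.0], [0.28, 0.96, 0.0], [0.0, 0.0, 1.0]])
      source = target[::7] @ R0.T + np.array([0.1, -0.05, 0.02])   # partial, displaced view
      aligned = icp(source, target)
      print("mean residual:", np.linalg.norm(aligned - target[::7], axis=1).mean())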

  18. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination of the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  19. WE-AB-BRA-09: Registration of Preoperative MRI to Intraoperative Radiographs for Automatic Vertebral Target Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Silva, T; Uneri, A; Ketcha, M

    Purpose: Accurate localization of target vertebrae is essential to safe, effective spine surgery, but wrong-level surgery occurs with surprisingly high frequency. Recent research yielded the “LevelCheck” method for 3D-2D registration of preoperative CT to intraoperative radiographs, providing decision support for level localization. We report a new method (MR-LevelCheck) to perform 3D-2D registration based on preoperative MRI, presenting a solution for the increasingly common scenario in which MRI (not CT) is used for preoperative planning. Methods: Direct extension of LevelCheck is confounded by large mismatch in image intensity between MRI and radiographs. The proposed method overcomes such challenges with a simple vertebrae segmentation. Using seed points at centroids, vertebrae are segmented using continuous max-flow method and dilated by 1.8 mm to include surrounding cortical bone (inconspicuous in T2w-MRI). MRI projections are computed (analogous to DRR) using segmentation and registered to intraoperative radiographs. The method was tested in a retrospective IRB-approved study involving 11 patients undergoing cervical, thoracic, or lumbar spine surgery following preoperative MRI. Registration accuracy was evaluated in terms of projection-distance-error (PDE) between the true and estimated location of vertebrae in each radiograph. Results: The method successfully registered each preoperative MRI to intraoperative radiographs and maintained desirable properties of robustness against image content mismatch, and large capture range. Segmentation achieved Dice coefficient = 89.2 ± 2.3 and mean-absolute-distance (MAD) = 1.5 ± 0.3 mm. Registration demonstrated robust performance under realistic patient variations, with PDE = 4.0 ± 1.9 mm (median ± iqr) and converged with run-time = 23.3 ± 1.7 s. Conclusion: The MR-LevelCheck algorithm provides an important extension to a previously validated decision support tool in spine surgery by extending its utility to preoperative MRI. With initial studies demonstrating PDE <5 mm and 0% failure rate, the method is now in translation to larger scale prospective clinical studies. S. Vogt and G. Kleinszig are employees of Siemens Healthcare.

  20. Evaluation of MRI and cannabinoid type 1 receptor PET templates constructed using DARTEL for spatial normalization of rat brains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg

    Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goal of this small animal study was (1) the evaluation of a MRI and calculation of several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [18F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms (“linear elastic energy,” “membrane energy,” and “bending energy”). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity. Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.

  1. A simple bubble-flowmeter with quasicontinuous registration.

    PubMed

    Ludt, H; Herrmann, H D

    1976-07-22

    The construction of a simple bubble-flow-meter is described. The instrument has the following features: 1. automatic bubble injection, 2. precise measurement of the bubble passage time by a digital counter, 3. quasicontinuous registration of the flow rate, 4. alternative run with clear fluid (water) and coloured fluid (blood), 5. low volume, 6. closed measuring system for measurements in low and high pressure systems.

  2. Registration of MRI to intraoperative radiographs for target localization in spinal interventions

    NASA Astrophysics Data System (ADS)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Goerres, J.; Jacobson, M. W.; Vogt, S.; Kleinszig, G.; Khanna, A. J.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2017-01-01

    Decision support to assist in target vertebra localization could provide a useful aid to safe and effective spine surgery. Previous solutions have shown 3D-2D registration of preoperative CT to intraoperative radiographs to reliably annotate vertebral labels for assistance during level localization. We present an algorithm (referred to as MR-LevelCheck) to perform 3D-2D registration based on a preoperative MRI to accommodate the increasingly common clinical scenario in which MRI is used instead of CT for preoperative planning. Straightforward adaptation of gradient/intensity-based methods appropriate to CT-to-radiograph registration is confounded by large mismatch and noncorrespondence in image intensity between MRI and radiographs. The proposed method overcomes such challenges with a simple vertebrae segmentation step using vertebra centroids as seed points (automatically defined within existing workflow). Forward projections are computed using the segmented MRI and registered to radiographs via gradient orientation (GO) similarity and the CMA-ES (covariance-matrix-adaptation evolutionary-strategy) optimizer. The method was tested in an IRB-approved study involving 10 patients undergoing cervical, thoracic, or lumbar spine surgery following preoperative MRI. The method successfully registered each preoperative MRI to intraoperative radiographs and maintained desirable properties of robustness against image content mismatch and large capture range. Robust registration performance was achieved with projection distance error (PDE) = 4.3 ± 2.6 mm (median ± IQR) and 0% failure rate. Segmentation accuracy for the continuous max-flow method yielded Dice coefficient = 88.1 ± 5.2, accuracy = 90.6 ± 5.7, RMSE = 1.8 ± 0.6 mm, and contour affinity ratio (CAR) = 0.82 ± 0.08. Registration performance was found to be robust for segmentation methods exhibiting RMSE <3 mm and CAR >0.50. The MR-LevelCheck method provides a potentially valuable extension to a previously developed decision support tool for spine surgery target localization by extending its utility to preoperative MRI while maintaining characteristics of accuracy and robustness.
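
    The record above registers MRI projections to radiographs by maximizing a gradient orientation (GO) similarity with a CMA-ES optimizer. The following is a hedged numpy sketch of one plausible form of such a GO metric (the exact formulation in the paper may differ); in a full pipeline this score would be maximized over the 6-DoF pose, for example with a black-box optimizer such as CMA-ES.

```python
import numpy as np

def gradient_orientation_similarity(drr, radiograph, grad_thresh=1e-3):
    """Mean cosine-squared agreement of gradient orientations.

    Only pixels where both images have non-negligible gradient magnitude
    contribute, which makes the metric tolerant of the intensity mismatch
    between MRI projections and radiographs.
    """
    gr1, gc1 = np.gradient(drr.astype(float))          # row, column gradients
    gr2, gc2 = np.gradient(radiograph.astype(float))
    mag1 = np.hypot(gr1, gc1)
    mag2 = np.hypot(gr2, gc2)
    mask = (mag1 > grad_thresh) & (mag2 > grad_thresh)
    if not np.any(mask):
        return 0.0
    # cos(theta) between gradient vectors; squaring makes opposite
    # orientations (common across modalities) count as agreement too.
    cos_t = (gr1 * gr2 + gc1 * gc2)[mask] / (mag1 * mag2)[mask]
    return float(np.mean(cos_t ** 2))
```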

  3. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, X; Chen, H; Zhou, L

    2014-06-15

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, a random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed using nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as the visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (Nos. 30970866 and 81301940).
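
    As a rough sketch of the final Demons step described above, the snippet below (SimpleITK assumed; not the authors' code) deformably registers two applicator-free HDR CT fractions and warps the fraction dose onto the reference grid for accumulation. The initialization with the B-spline-approximated DVF from TPS-RPM point matching is omitted for brevity.

```python
import SimpleITK as sitk

def demons_register_and_warp_dose(fixed_ct, moving_ct, moving_dose,
                                  iterations=100, smoothing_sigma=1.5):
    """fixed_ct/moving_ct: applicator-free HDR CT fractions (SimpleITK images).
    moving_dose: dose grid of the moving fraction, in the moving CT frame."""
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(smoothing_sigma)   # Gaussian regularization
    displacement = demons.Execute(sitk.Cast(fixed_ct, sitk.sitkFloat32),
                                  sitk.Cast(moving_ct, sitk.sitkFloat32))
    transform = sitk.DisplacementFieldTransform(displacement)
    # Warp the moving-fraction dose onto the fixed (reference) grid so the
    # two fraction doses can be accumulated voxel by voxel.
    warped_dose = sitk.Resample(moving_dose, fixed_ct, transform,
                                sitk.sitkLinear, 0.0, sitk.sitkFloat32)
    return warped_dose, transform
```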

  4. Geometry Processing of Conventionally Produced Mouse Brain Slice Images.

    PubMed

    Agarwal, Nitin; Xu, Xiangmin; Gopi, M

    2018-04-21

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one application, we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge, the presented work is the first that automatically registers both clean and highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience as it provides tools to neuroanatomists for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. SU-F-J-194: Development of Dose-Based Image Guided Proton Therapy Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, R; Sun, B; Zhao, T

    Purpose: To implement image-guided proton therapy (IGPT) based on daily proton dose distribution. Methods: Unlike x-ray therapy, simple alignment based on anatomy cannot ensure proper dose coverage in proton therapy. Anatomy changes along the beam path may lead to underdosing the target, or overdosing the organ-at-risk (OAR). With an in-room mobile computed tomography (CT) system, we are developing a dose-based IGPT software tool that allows patient positioning and treatment adaption based on daily dose distributions. During an IGPT treatment, daily CT images are acquired in treatment position. After initial positioning based on rigid image registration, proton dose distribution is calculated on daily CT images. The target and OARs are automatically delineated via deformable image registration. Dose distributions are evaluated to decide if repositioning or plan adaptation is necessary in order to achieve proper coverage of the target and sparing of OARs. Besides online dose-based image guidance, the software tool can also map daily treatment doses to the treatment planning CT images for offline adaptive treatment. Results: An in-room helical CT system is commissioned for IGPT purposes. It produces accurate CT numbers that allow proton dose calculation. GPU-based deformable image registration algorithms are developed and evaluated for automatic ROI-delineation and dose mapping. The online and offline IGPT functionalities are evaluated with daily CT images of the proton patients. Conclusion: The online and offline IGPT software tool may improve the safety and quality of proton treatment by allowing dose-based IGPT and adaptive proton treatments. Research is partially supported by Mevion Medical Systems.

  6. Automatic three-dimensional registration of intra-vascular optical coherence tomography images for the clinical evaluation of stent implantation over time

    NASA Astrophysics Data System (ADS)

    Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter; Coosemans, Mark; Desmet, Walter; D'hooghe, Jan

    2012-01-01

    In the last decade a large number of new intracoronary devices (i.e., drug-eluting stents, DES) have been developed to reduce the risks related to bare metal stent (BMS) implantation. The use of this new generation of DES has been shown to substantially reduce, compared with BMS, the occurrence of restenosis and recurrent ischemia that would necessitate a second revascularization procedure. Nevertheless, safety issues on the use of DES persist, and full understanding of the mechanisms of adverse clinical events is still a matter of concern and debate. Intravascular Optical Coherence Tomography (IV-OCT) is an imaging technique able to visualize the microstructure of blood vessels with an axial resolution <20 μm. Due to its very high spatial resolution, it enables detailed in-vivo assessment of implanted devices and the vessel wall. Currently, the aim of several major clinical trials is to observe and quantify the vessel response to DES implantation over time. However, image analysis is currently performed manually, and corresponding images, belonging to different IV-OCT acquisitions, can only be matched through a very labor-intensive and subjective procedure. The aim of this study is to develop and validate a new methodology for the automatic registration of IV-OCT datasets on an image level. To this end, we propose a landmark-based rigid registration method exploiting the metallic stent framework as a feature. Such a tool would provide a better understanding of the behavior of different intracoronary devices in-vivo, giving unique insights into vessel pathophysiology and the performance of new generations of intracoronary devices and different drugs.
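
    A landmark-based rigid registration of the kind proposed above can be illustrated with a standard Kabsch/Procrustes alignment: given corresponding stent-strut landmarks detected in two IV-OCT pullbacks, estimate the least-squares rotation and translation mapping one set onto the other. This sketch is generic and not the study's implementation.

```python
import numpy as np

def rigid_align(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 3) arrays of corresponding landmark coordinates."""
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t                                   # dst ≈ R @ src + t
```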

  7. Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration

    NASA Astrophysics Data System (ADS)

    Goerres, J.; Uneri, A.; Jacobson, M.; Ramsay, B.; De Silva, T.; Ketcha, M.; Han, R.; Manbachi, A.; Vogt, S.; Kleinszig, G.; Wolinsky, J.-P.; Osgood, G.; Siewerdsen, J. H.

    2017-12-01

    Percutaneous pelvic screw placement is challenging due to narrow bone corridors surrounded by vulnerable structures and difficult visual interpretation of complex anatomical shapes in 2D x-ray projection images. To address these challenges, a system for planning, guidance, and quality assurance (QA) is presented, providing functionality analogous to surgical navigation, but based on robust 3D-2D image registration techniques using fluoroscopy images already acquired in routine workflow. Two novel aspects of the system are investigated: automatic planning of pelvic screw trajectories and the ability to account for deformation of surgical devices (K-wire deflection). Atlas-based registration is used to calculate a patient-specific plan of screw trajectories in preoperative CT. 3D-2D registration aligns the patient to CT within the projective geometry of intraoperative fluoroscopy. Deformable known-component registration (dKC-Reg) localizes the surgical device, and the combination of plan and device location is used to provide guidance and QA. A leave-one-out analysis evaluated the accuracy of automatic planning, and a cadaver experiment compared the accuracy of dKC-Reg to rigid approaches (e.g. optical tracking). Surgical plans conformed within the bone cortex by 3-4 mm for the narrowest corridor (superior pubic ramus) and  >5 mm for the widest corridor (tear drop). The dKC-Reg algorithm localized the K-wire tip within 1.1 mm and 1.4° and was consistently more accurate than rigid-body tracking (errors up to 9 mm). The system was shown to automatically compute reliable screw trajectories and accurately localize deformed surgical devices (K-wires). Such capability could improve guidance and QA in orthopaedic surgery, where workflow is impeded by manual planning, conventional tool trackers add complexity and cost, rigid tool assumptions are often inaccurate, and qualitative interpretation of complex anatomy from 2D projections is prone to trial-and-error with extended fluoroscopy time.

  8. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

    Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  9. Feasibility of Extracting Key Elements from ClinicalTrials.gov to Support Clinicians’ Patient Care Decisions

    PubMed Central

    Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2016-01-01

    Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians. PMID:28269867

  10. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard - door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2, launched in 2011, by means of ORB-SLAM. Compared with lidar, this is cheaper and more convenient, but the point clouds also suffer from noise, registration error and large data volume. Hence, we adopt a shape descriptor - the histogram of distances between two randomly chosen points, proposed by Osada - merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
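
    The shape descriptor named above (Osada's histogram of distances between randomly chosen point pairs, often called D2) together with a random forest classifier can be sketched as follows; the pair count, bin count and distance range are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def d2_descriptor(points, n_pairs=5000, n_bins=32, max_dist=3.0, seed=None):
    """points: (N, 3) segment of a Kinect point cloud (coordinates in metres)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, max_dist))
    return hist / hist.sum()                      # normalized D2 histogram

# Illustrative usage: one descriptor per segmented candidate region, then a
# random forest assigns the class (door / stairway / wall).
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# labels = clf.predict(np.vstack([d2_descriptor(seg) for seg in segments]))
```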

  11. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours

    NASA Astrophysics Data System (ADS)

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presented a new concept for atlas-based segmentation. Instead of using the complete volume of the target organs, only information along the organ contours from the atlas images was used for guiding segmentation of the new image. In setting up an atlas-based library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. The nonlinear narrow shell was generated alongside the object contours of registered images. Matching voxels were selected inside common narrow shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level set based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting liver cases from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method are quantitatively validated by comparing automatic segmentation results with the manual delineation results. The Jaccard similarity metric between the automatically generated liver contours obtained by the proposed method and the physician delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method can achieve efficient automatic liver propagation for CT, 4D-CT and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.
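
    One step of the pipeline above, propagating a library contour onto the new image with a thin-plate-spline (TPS) deformation estimated from matched narrow-shell feature points, can be sketched with SciPy's RBF interpolator (SciPy >= 1.7 assumed; the names are illustrative and this is not the authors' code).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_propagate_contour(atlas_pts, new_pts, atlas_contour):
    """atlas_pts, new_pts: (N, 3) matched feature points (e.g., from SURF).
    atlas_contour: (M, 3) contour vertices in the atlas image."""
    # Fit a thin-plate-spline mapping from atlas space to the new image.
    tps = RBFInterpolator(atlas_pts, new_pts, kernel='thin_plate_spline')
    # Apply the mapping to the library contour to obtain the propagated contour.
    return tps(atlas_contour)
```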

  12. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours.

    PubMed

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng; Yin, Yong; Ibragimov, Bulat; Xing, Lei

    2017-01-07

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presented a new concept for atlas-based segmentation. Instead of using the complete volume of the target organs, only information along the organ contours from the atlas images was used for guiding segmentation of the new image. In setting up an atlas-based library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. The nonlinear narrow shell was generated alongside the object contours of registered images. Matching voxels were selected inside common narrow shell image features of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level set based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features through the introduction of a narrow shell surrounding the target objects was established. Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting liver cases from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method are quantitatively validated by comparing automatic segmentation results with the manual delineation results. The Jaccard similarity metric between the automatically generated liver contours obtained by the proposed method and the physician delineated results is on average 90%-96% for planning images. Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method can achieve efficient automatic liver propagation for CT, 4D-CT and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.

  13. Multisensor Fusion for Change Detection

    NASA Astrophysics Data System (ADS)

    Schenk, T.; Csatho, B.

    2005-12-01

    Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss ways to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable to deal with linear and area features. In contrast to traditional, point-based registration methods, linear and areal features lend themselves to a more robust and more accurate registration. More importantly, the chances to automate the registration process increase significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning of extracted information more versatile; reasoning can be performed in sensor space or in 3-D space where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach by detecting surface elevation changes on the Byrd Glacier, Antarctica, with aerial imagery from the 1980s and ICESat laser altimetry data from 2003-05. Change detection from such disparate data sets is an intricate fusion problem, beginning with sensor alignment, and continuing to reasoning with spatial information as to where changes occurred and to what extent.

  14. Volume measurements of individual muscles in human quadriceps femoris using atlas-based segmentation approaches.

    PubMed

    Le Troter, Arnaud; Fouré, Alexandre; Guye, Maxime; Confort-Gouny, Sylviane; Mattei, Jean-Pierre; Gondin, Julien; Salort-Campana, Emmanuelle; Bendahan, David

    2016-04-01

    Atlas-based segmentation is a powerful method for automatic structural segmentation of several sub-structures in many organs. However, such an approach has been very scarcely used in the context of muscle segmentation, and so far no study has assessed such a method for the automatic delineation of individual muscles of the quadriceps femoris (QF). In the present study, we have evaluated a fully automated multi-atlas method and a semi-automated single-atlas method for the segmentation and volume quantification of the four muscles of the QF and for the QF as a whole. The study was conducted in 32 young healthy males, using high-resolution magnetic resonance images (MRI) of the thigh. The multi-atlas-based segmentation method was conducted in 25 subjects. Different non-linear registration approaches based on free-form deformable (FFD) and symmetric diffeomorphic normalization algorithms (SyN) were assessed. Optimal parameters of two fusion methods, i.e., STAPLE and STEPS, were determined on the basis of the highest Dice similarity index (DSI) considering manual segmentation (MSeg) as the ground truth. Validation and reproducibility of this pipeline were determined using another MRI dataset recorded in seven healthy male subjects on the basis of additional metrics such as the muscle volume similarity values, intraclass coefficient, and coefficient of variation. Both non-linear registration methods (FFD and SyN) were also evaluated as part of a single-atlas strategy in order to assess longitudinal muscle volume measurements. The multi- and the single-atlas approaches were compared for the segmentation and the volume quantification of the four muscles of the QF and for the QF as a whole. Considering each muscle of the QF, the DSI of the multi-atlas-based approach was high (0.87 ± 0.11) and the best results were obtained with the combination of two deformation fields resulting from the SyN registration method and the STEPS fusion algorithm. The optimal variables for FFD and SyN registration methods were four templates and a kernel standard deviation ranging between 5 and 8. The segmentation process using a single-atlas-based method was more robust with DSI values higher than 0.9. From the vantage of muscle volume measurements, the multi-atlas-based strategy provided acceptable results regarding the QF muscle as a whole but highly variable results regarding individual muscles. On the contrary, the performance of the single-atlas-based pipeline for individual muscles was highly comparable to the MSeg, thereby indicating that this method would be adequate for longitudinal tracking of muscle volume changes in healthy subjects. In the present study, we demonstrated that both multi-atlas and single-atlas approaches were relevant for the segmentation of individual muscles of the QF in healthy subjects. Considering muscle volume measurements, the single-atlas method provided promising perspectives regarding longitudinal quantification of individual muscle volumes.
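
    A hedged sketch of the evaluation metric used throughout this record, the Dice similarity index (DSI), together with a simple majority-vote label fusion that stands in for the STAPLE/STEPS fusion actually used in the study; all names and thresholds are illustrative.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity index between two binary segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def majority_vote(label_maps):
    """label_maps: list of binary masks propagated from different atlases.
    A voxel is foreground when at least half of the atlases agree."""
    stack = np.stack([m.astype(np.uint8) for m in label_maps])
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)
```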

  15. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy

    NASA Astrophysics Data System (ADS)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.

    2017-10-01

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a semi-automatic workflow facilitating the introduction of an MR-only workflow.
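
    The template-matching idea described above can be illustrated with a minimal FFT-based cross-correlation of a simulated complex-valued marker template against the complex MR image; the peak of the correlation magnitude gives a candidate fiducial location. The template library, detection thresholds and per-template handling of the published method are omitted, and all names are illustrative.

```python
import numpy as np

def match_complex_template(mr_complex, template):
    """mr_complex: complex MR image (2D or 3D array).
    template: smaller complex-valued simulated marker patch."""
    # Zero-pad the template to the image size (template must fit in the image).
    pad = [(0, s - t) for s, t in zip(mr_complex.shape, template.shape)]
    tpl = np.pad(template, pad)
    # Cross-correlation via FFT: corr = IFFT( FFT(img) * conj(FFT(tpl)) ).
    corr = np.fft.ifftn(np.fft.fftn(mr_complex) * np.conj(np.fft.fftn(tpl)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    return peak, float(np.abs(corr[peak]))   # candidate location and score
```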

  16. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration.

    PubMed

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L

    2016-01-01

    We created a metastasis imaging, analysis platform consisting of software and multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumor. We analyzed CREKA-Gd in MRI, followed by cryo-imaging which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence, enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register MRI volumes to the cryo bright field reference, we used our standard mutual information, non-rigid registration which proceeded: preprocess → affine → B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask, where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately and combined results. Though sliding organ required manual annotation, it provided the best result as a standard to measure other registration methods. Regularization parameters for standard and mask methods were optimized in a grid search. Evaluations consisted of DICE, and visual scoring of a checkerboard display. Standard had accuracy of 2 voxels in all regions except near the kidney, where there were 5 voxels sliding. After mask and sliding organ correction, kidneys sliding were within 2 voxels, and Dice overlap increased 4%-10% in mask compared to standard. Mask generated comparable results with sliding organ and allowed a semi-automatic process.
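
    The mask variant described above (registering locally over a smaller region) can be sketched with a SimpleITK B-spline registration whose metric is evaluated only inside a fixed-image mask; the optimizer, metric and mesh size are illustrative choices and this is not the authors' pipeline.

```python
import SimpleITK as sitk

def masked_bspline_register(fixed, moving, fixed_mask, mesh_size=(8, 8, 8)):
    """fixed/moving: 3D images; fixed_mask: binary (UInt8) mask image defining
    the local region (e.g., a rectangular solid around the kidney)."""
    tx0 = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricFixedMask(fixed_mask)        # restrict metric to the mask
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB()
    reg.SetInitialTransform(tx0, inPlace=True)
    return reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                       sitk.Cast(moving, sitk.sitkFloat32))
```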

  17. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration

    NASA Astrophysics Data System (ADS)

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L.

    2016-03-01

    We created a metastasis imaging, analysis platform consisting of software and multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumor. We analyzed CREKA-Gd in MRI, followed by cryo-imaging which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence, enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register MRI volumes to the cryo bright field reference, we used our standard mutual information, non-rigid registration which proceeded: preprocess --> affine --> B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately and combined results. Though sliding organ required manual annotation, it provided the best result as a standard to measure other registration methods. Regularization parameters for standard and mask methods were optimized in a grid search. Evaluations consisted of DICE, and visual scoring of a checkerboard display. Standard had accuracy of 2 voxels in all regions except near the kidney, where there were 5 voxels sliding. After mask and sliding organ correction, kidneys sliding were within 2 voxels, and Dice overlap increased 4%-10% in mask compared to standard. Mask generated comparable results with sliding organ and allowed a semi-automatic process.

  18. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA

    Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in 30% of cases only with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contour. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.

  19. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  20. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  1. Model based rib-cage unfolding for trauma CT

    NASA Astrophysics Data System (ADS)

    von Berg, Jens; Klinder, Tobias; Lorenz, Cristian

    2018-03-01

    A CT rib-cage unfolding method is proposed that does not require determination of rib centerlines but instead determines the visceral cavity surface by model-based segmentation. Image intensities are sampled across this surface, which is flattened using a model-based 3D thin-plate-spline registration. An average rib centerline model projected onto this surface serves as a reference system for registration. The flattening registration is designed so that ribs similar to the centerline model are mapped onto parallel lines preserving their relative length. Ribs deviating from this model appear deviating from straight parallel ribs in the unfolded view, accordingly. As the mapping is continuous, the details in intercostal space and those adjacent to the ribs are also rendered well. The most beneficial application area is trauma CT, where fast detection of rib fractures is a crucial task. Specifically in trauma, automatic rib centerline detection may not be guaranteed due to fractures and dislocations. Application by visual assessment on the large public LIDC database of lung CT proved the general feasibility of this early work.

  2. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowling, Jason A., E-mail: jason.dowling@csiro.au; University of Newcastle, Callaghan, New South Wales; Sun, Jidi

    Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic sCT generation methods using standard MR sequences generate realistic contours and electron densities for prostate cancer radiation therapy dose planning and digitally reconstructed radiograph generation.
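
    A hedged sketch of multi-atlas local weighted voting for substitute-CT generation as named above: each co-registered atlas contributes its CT number at a voxel with a weight derived from how well its registered MR matches the patient MR locally. The patch size and Gaussian weighting kernel are assumptions for illustration, not the published settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_weighted_voting(patient_mr, atlas_mrs, atlas_cts,
                          patch=5, sigma=50.0):
    """patient_mr: 3D float array; atlas_mrs/atlas_cts: lists of atlas MR and CT
    volumes already deformably registered to the patient MR grid."""
    weights, weighted_ct = [], 0.0
    for mr, ct in zip(atlas_mrs, atlas_cts):
        # Local mean squared intensity difference over a small patch.
        mse = uniform_filter((patient_mr - mr) ** 2, size=patch)
        w = np.exp(-mse / (2.0 * sigma ** 2))     # higher weight = better match
        weights.append(w)
        weighted_ct = weighted_ct + w * ct
    # Normalize by the total weight to obtain the substitute CT (in HU).
    return weighted_ct / (np.sum(weights, axis=0) + 1e-9)
```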

  3. Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Chen, H. R.; Tseng, Y. H.

    2016-06-01

    Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist orientation. In our research, we developed an automatic process for matching historical aerial images by SIFT (Scale Invariant Feature Transform) to handle the great quantity of images by computer vision. SIFT is one of the most popular methods for image feature extraction and matching. This algorithm extracts extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equations. In the future, we can use image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If feature points of a query image match features in the database, the query image probably overlaps with the control images. With the updating of the database, more and more query images can be matched and aligned automatically. Other research on multi-temporal environmental changes can then be investigated with those geo-referenced temporal spatial data.
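
    The SIFT-plus-RANSAC matching described above can be sketched with OpenCV (version >= 4.4 assumed, where SIFT is in the main package); the ratio-test and reprojection thresholds are illustrative choices, not values from the study.

```python
import cv2
import numpy as np

def match_historical_pair(img_query, img_ref, ratio=0.75, ransac_thresh=5.0):
    """img_query, img_ref: 8-bit grayscale images of the two photographs."""
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(img_query, None)
    kr, dr = sift.detectAndCompute(img_ref, None)
    matches = cv2.BFMatcher().knnMatch(dq, dr, k=2)
    good = []
    for pair in matches:                      # Lowe's ratio test
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return None, 0                        # not enough matches for RANSAC
    src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kr[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    n_inliers = int(inliers.sum()) if inliers is not None else 0
    return H, n_inliers                       # transform and inlier count
```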

  4. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Ting; Kim, Sung; Goyal, Sharad

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also, during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, the prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purposes. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a displacement map was generated. Segmented volumes in the CT images deformed using the displacement field were compared against the manual segmentations in the CBCT images to quantitatively measure the convergence of the shape and the volume. Other image features were also used to evaluate the overall performance of the registration. Results: The algorithm was able to complete the segmentation and registration process within 1 min, and the superimposed clinical objects achieved a volumetric similarity measure of over 90% between the reference and the registered data. Validation results also showed that the proposed registration could accurately trace the deformation inside the target volume with average errors of less than 1 mm. The method had a solid performance in registering the simulated images with up to 20 Hounsfield units of white noise added. Also, the side by side comparison with the original demons algorithm demonstrated its improved registration performance over local pixel-based registration approaches. Conclusions: Given the strength and efficiency of the algorithm, the proposed method has significant clinical potential to accelerate and to improve the CBCT delineation and target tracking in online IGRT applications.

  5. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, and that is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients between the images generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients are 0.946, 0.911, and 0.913, respectively, based on the GRAM technique. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  6. GLISTR: Glioma Image Segmentation and Registration

    PubMed Central

    Pohl, Kilian M.; Bilello, Michel; Cirillo, Luigi; Biros, George; Melhem, Elias R.; Davatzikos, Christos

    2015-01-01

    We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given set of patient’s images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods, and achieve accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of patient scans into a common space. PMID:22907965

  7. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  8. Validity of registration of ICD codes and prescriptions in a research database in Swedish primary care: a cross-sectional study in Skaraborg primary care database

    PubMed Central

    2010-01-01

    Background In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. Methods SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as the median odds ratio (MOR). Results For diabetes mellitus and hypertension, ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered more seldom (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3). Conclusions Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians, where the greatest amount of variation was found. PMID:20416069
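
    For reference, median odds ratio (MOR) values such as those reported above are commonly derived from the between-cluster variance of a multilevel logistic model using the formula of Merlo et al.; a small sketch follows, with the variance value chosen only to illustrate the relationship to the reported MOR.

```python
from math import exp, sqrt
from scipy.stats import norm

def median_odds_ratio(cluster_variance):
    """MOR = exp( sqrt(2 * variance) * Phi^-1(0.75) ), Merlo et al. formula."""
    return exp(sqrt(2.0 * cluster_variance) * norm.ppf(0.75))

# Example: a physician-level variance of about 2.26 reproduces the
# MOR_physician of roughly 4.2 reported above.
# median_odds_ratio(2.26)  ->  ~4.2
```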

  9. Registration of central paths and colonic polyps between supine and prone scans in computed tomography colonography: Pilot study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Ping; Napel, Sandy; Acar, Burak

    2004-10-01

    Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.

  10. Chest wall segmentation in automated 3D breast ultrasound scans.

    PubMed

    Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico

    2013-12-01

    In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall, and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
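    As a rough illustration of the cylinder idea only: if, for simplicity, the cylinder axis is assumed known and aligned with the x-axis, fitting reduces to an algebraic least-squares circle fit of the rib-surface points projected onto the y-z plane. The paper fits a full cylinder surface to classifier-detected points; the function below is a simplified, hypothetical sketch.

```python
import numpy as np

def fit_circle(y, z):
    """Algebraic (Kasa) least-squares circle fit: model y^2 + z^2 = D*y + E*z + F,
    so the center is (D/2, E/2) and the radius is sqrt(F + D^2/4 + E^2/4)."""
    A = np.column_stack([y, z, np.ones_like(y)])
    b = y**2 + z**2
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cy, cz = D / 2.0, E / 2.0
    return (cy, cz), np.sqrt(F + cy**2 + cz**2)
```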

  11. Manual limbal markings versus iris-registration software for correction of myopic astigmatism by laser in situ keratomileusis.

    PubMed

    Shen, Elizabeth P; Chen, Wei-Li; Hu, Fung-Rong

    2010-03-01

    To compare the efficacy and safety of manual limbal markings and wavefront-guided treatment with iris-registration software in laser in situ keratomileusis (LASIK) for myopic astigmatism. National Taiwan University Hospital, Taipei, Taiwan. Eyes with myopic astigmatism had LASIK with a Technolas 217z laser. Eyes in the limbal-marking group had conventional LASIK (PlanoScan or Zyoptix tissue-saving algorithm) with manual cyclotorsional-error adjustments according to 2 limbal marks. Eyes in the iris-registration group had wavefront-guided ablation (Zyoptix) in which cyclotorsional errors were automatically detected and adjusted. Refraction, corneal topography, and visual acuity data were compared between groups. Vector analysis was by the Alpins method. The mean preoperative spherical equivalent (SE) was -6.64 diopters (D) +/- 1.99 (SD) in the limbal-marking group and -6.72 +/- 1.86 D in the iris-registration group (P = .92). At 6 months, the mean SE was -0.42 +/- 0.63 D and -0.47 +/- 0.62 D, respectively (P = .08). There was no statistically significant difference between groups in the astigmatism correction, success, or flattening index values using 6-month postoperative refractive data. The angle of error was within +/-10 degrees in 73% of eyes in the limbal-marking group and 75% of eyes in the iris-registration group. Manual limbal markings and iris-registration software were equally effective and safe in LASIK for myopic astigmatism, showing that checking cyclotorsion by manual limbal markings is a safe alternative when automated systems are not available. Copyright 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  12. Evaluation of an Automatic Registration-Based Algorithm for Direct Measurement of Volume Change in Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing

    2012-07-01

    Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using retrospective, in vivo, very-short-interval diffusion magnetic resonance imaging scans where true zero tumor volume change is unequivocally known and synthetic data, respectively. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69% to 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with increase in tumor volume and decreased with the increase in the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.
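    As background only (the record does not give the exact estimator used), a registration-based volume change is often obtained by integrating the Jacobian determinant of the recovered deformation \(\varphi\) over the baseline tumor region \(\Omega\):

```latex
\Delta V \;\approx\; \int_{\Omega} \bigl(\det \nabla \varphi(\mathbf{x}) - 1\bigr)\, d\mathbf{x},
\qquad
\text{percentage change} \;=\; 100 \cdot \frac{\Delta V}{\int_{\Omega} d\mathbf{x}} .
```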

  13. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prosthesis. Therefore we developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly the surface data of the prostheses models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly an initial preconfiguration of the implants by the user is still necessary for the following step: The user has to perform a rough preconfiguration of both remaining prostheses models, so that the fine matching process gets a reasonable starting point. After that an automated gradient-based fine matching process determines the best absolute position and orientation: This iterative process changes all 6 parameters (3 rotational and 3 translational parameters) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated by the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).
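    The iterative parameter update described above is essentially a greedy coordinate-wise search over the six pose parameters. A minimal sketch of that pattern is shown below; `score_fn` stands in for the gradient-based matching function, and every name and step size is an assumption rather than the published implementation.

```python
import numpy as np

def refine_pose(score_fn, params0, step=1.0, min_step=1e-3):
    """Greedy coordinate-wise hill climbing over the six pose parameters
    (3 rotations, 3 translations); score_fn returns the matching value."""
    params = np.asarray(params0, dtype=float)
    best = score_fn(params)
    while step > min_step:
        improved = False
        for i in range(params.size):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                s = score_fn(trial)
                if s > best:
                    best, params, improved = s, trial, True
        if not improved:
            step *= 0.5  # shrink the step once no single-parameter move helps
    return params, best
```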

  14. Contribution to the automatic inspection of flexible parts in the free state without a conformation fixture (Contribution à l'inspection automatique des pièces flexibles à l'état libre sans gabarit de conformation)

    NASA Astrophysics Data System (ADS)

    Sattarpanah Karganroudi, Sasan

    The competitive industrial market demands that manufacturing companies provide the market with higher-quality products. The quality control department in industrial sectors verifies geometrical requirements of products with consistent tolerances. These requirements are presented in Geometric Dimensioning and Tolerancing (GD&T) standards. However, conventional measuring and dimensioning methods for manufactured parts are time-consuming and costly. Nowadays manual and tactile measuring methods have been replaced by Computer-Aided Inspection (CAI) methods. The CAI methods apply improvements in computational calculations and 3-D data acquisition devices (scanners) to compare the scan mesh of manufactured parts with the Computer-Aided Design (CAD) model. Metrology standards, such as ASME-Y14.5 and ISO-GPS, require implementing the inspection in free-state, wherein the part is only under its weight. Non-rigid parts are exempted from the free-state inspection rule because of their significant geometrical deviation in a free-state with respect to the tolerances. Despite the developments in CAI methods, inspection of non-rigid parts still remains a serious challenge.
    Conventional inspection methods apply complex fixtures for non-rigid parts to retrieve the functional shape of these parts on physical fixtures; however, the fabrication and setup of these fixtures are sophisticated and expensive. The cost of fixtures has doubled since the client and manufacturing sectors require repetitive and independent inspection fixtures. To eliminate the need for costly and time-consuming inspection fixtures, fixtureless inspection methods of non-rigid parts based on CAI methods have been developed. These methods aim at distinguishing flexible deformations of parts in a free-state from defects. Fixtureless inspection methods are required to be automatic, reliable, reasonably accurate and repeatable for non-rigid parts with complex shapes. The scan model, which is acquired as point clouds, represents the shape of a part in a free-state. Afterward, the inspection of defects is performed by comparing the scan and CAD models, but these models are presented in different coordinate systems. Indeed, the scan model is presented in the measurement coordinate system whereas the CAD model is introduced in the design coordinate system. To accomplish the inspection and facilitate an accurate comparison between the models, the registration process is required to align the scan and CAD models in a common coordinate system. The registration includes a virtual compensation for the flexible deformation of the parts in a free-state. Then, the inspection is implemented as a geometrical comparison between the CAD and scan models.
    This thesis focuses on developing automatic and accurate fixtureless CAI methods for non-rigid parts along with assessing the robustness of the methods. To this end, an automatic fixtureless CAI method for non-rigid parts based on filtering registration points is developed to identify and quantify defects more accurately on the surface of scan models. The flexible deformation of parts in a free-state in our developed automatic fixtureless CAI method is compensated by applying FE non-rigid Registration (FENR) to deform the CAD model towards the scan mesh. The displacement boundary conditions (BCs) for FENR are determined based on the corresponding sample points, which are generated by the Generalized Numerical Inspection Fixture (GNIF) method on the CAD and scan models. These corresponding sample points are evenly distributed on the surface of the models. The comparison between this deformed CAD model and the scan mesh is intended to evaluate and quantify the defects on the scan model. However, some sample points can be located close to or on defect areas, which results in an inaccurate estimation of defects. These sample points are automatically filtered out in our CAI method based on curvature and von Mises stress criteria. Once these are filtered out, the remaining sample points are used in a new FENR, which allows an accurate evaluation of defects with respect to the tolerances.
    The performance and robustness of all CAI methods are generally required to be assessed with respect to the actual measurements. This thesis also introduces a new validation metric for Verification and Validation (V&V) of CAI methods based on ASME recommendations. The developed V&V approach uses a nonparametric statistical hypothesis test, namely the Kolmogorov-Smirnov (K-S) test. In addition to validating defect size, the K-S test allows a deeper evaluation based on the distance distribution of defects. The robustness of the CAI method with respect to uncertainties such as scanning noise is quantitatively assessed using the developed validation metric.
    Due to the compliance of non-rigid parts, a geometrically deviated part can still be assembled in the assembly-state. This thesis also presents a fixtureless CAI method for geometrically deviated (presenting defects) non-rigid parts to evaluate the feasibility of mounting these parts in the functional assembly-state. Our developed Virtual Mounting Assembly-State Inspection (VMASI) method performs a non-rigid registration to virtually mount the scan mesh in assembly-state. To this end, the point cloud of the scan model representing the part in a free-state is deformed to meet the assembly constraints such as fixation position (e.g. mounting holes). In some cases, the functional shape of a deviated part can be retrieved by applying assembly loads, which are limited to permissible loads, on the surface of the part. The required assembly loads are estimated through our developed Restraining Pressures Optimization (RPO), aiming at displacing the deviated scan model to achieve the tolerance for mounting holes. Therefore, the deviated scan model can be assembled if the mounting holes on the predicted functional shape of the scan model attain the tolerance range.
    Different industrial parts are used to evaluate the performance of our developed methods in this thesis. The automatic inspection for identifying different types of small (local) and big (global) defects on the parts results in an accurate evaluation of defects. The robustness of this inspection method is also validated with respect to different levels of scanning noise, which shows promising results. Meanwhile, the VMASI method is performed on various parts with different types of defects, which shows that in some cases the functional shape of deviated parts can be retrieved by mounting them on a virtual fixture in assembly-state under restraining loads.

  15. Automatic frequency and phase alignment of in vivo J-difference-edited MR spectra by frequency domain correlation.

    PubMed

    Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette

    2017-12-01

    J-difference editing is often used to select resonances of compounds with coupled spins in 1H MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In-vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and by alignment of the highest point in two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In-vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
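    A minimal sketch of the underlying idea is given below: a grid search over frequency shifts and zero-order phases that maximizes the normalized scalar product with a reference spectrum. The function and parameter names are assumptions, and the published method may search or optimize over these parameters differently.

```python
import numpy as np

def align_spectrum(ref, mov, max_shift=50, phase_step_deg=2.0):
    """Grid-search the frequency shift (in points) and zero-order phase that
    maximize the normalized scalar product with the reference spectrum."""
    best = (-np.inf, 0, 0.0)            # (correlation, shift_points, phase_deg)
    ref_r = ref.real
    ref_norm = np.linalg.norm(ref_r)
    phases = np.arange(-180.0, 180.0, phase_step_deg)
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(mov, shift)
        for ph in phases:
            cand = (shifted * np.exp(1j * np.deg2rad(ph))).real
            score = np.dot(ref_r, cand) / (ref_norm * np.linalg.norm(cand) + 1e-12)
            if score > best[0]:
                best = (score, shift, ph)
    return best
```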

  16. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsager, Anne Sofie, E-mail: asko@hst.aau.dk; Østergaard, Lasse Riis; Fortunati, Valerio

    2015-04-15

    Purpose: An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. Methods: A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and with multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. Results: A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. Conclusions: This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.
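    As background on the framework, a graph cut segmentation of this kind typically minimizes an energy of the following generic form, with a data term mixing the organ intensity likelihood and the registered atlas prior and a pairwise smoothness term; the exact weighting used in the study is not given in the record:

```latex
E(L) \;=\; \sum_{p} \Bigl[-\log P(I_p \mid L_p) \;-\; \beta \log P_{\text{atlas}}(L_p \mid p)\Bigr]
\;+\; \lambda \sum_{(p,q)\in\mathcal{N}} V_{pq}(L_p, L_q),
```

    where \(L_p\) is the label of voxel \(p\), \(I_p\) its intensity, \(\mathcal{N}\) the neighborhood system, and \(\beta, \lambda\) trade-off weights.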

  17. Fully Automatic Segmentation of Fluorescein Leakage in Subjects With Diabetic Macular Edema

    PubMed Central

    Rabbani, Hossein; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Farsiu, Sina

    2015-01-01

    Purpose. To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Methods. Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. Results. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Conclusions. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. PMID:25634978

  18. Cortical surface registration using spherical thin-plate spline with sulcal lines and mean curvature as features.

    PubMed

    Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min

    2012-04-30

    Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Super resolution for astronomical observations

    NASA Astrophysics Data System (ADS)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds have higher details; thus these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most spatial or ground-based telescopes.
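    Automatic star centering of the kind mentioned above is commonly done with an intensity-weighted centroid on a background-subtracted patch around each detected star, which already yields sub-pixel positions; the snippet below is a generic illustration under that assumption, not the authors' implementation.

```python
import numpy as np

def star_centroid(patch):
    """Sub-pixel (row, col) center of a star from the intensity-weighted
    centroid of a small background-subtracted image patch."""
    patch = np.clip(patch - np.median(patch), 0.0, None)  # crude background removal
    total = patch.sum()
    rows, cols = np.indices(patch.shape)
    return (rows * patch).sum() / total, (cols * patch).sum() / total
```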

  20. SU-C-BRA-03: An Automated and Quick Contour Error Detection for Auto Segmentation in Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Ates, O; Li, X

    Purpose: To develop a tool that can quickly and automatically assess the quality of contours generated by auto segmentation during online adaptive replanning. Methods: Due to the strict time requirement of online replanning and the lack of ‘ground truth’ contours in daily images, our method starts with assessing image registration accuracy, focusing on the surface of the organ in question. Several metrics tightly related to registration accuracy, including Jacobian maps, contour shell deformation, and voxel-based root mean square (RMS) analysis, were computed. To identify correct contours, additional metrics and an adaptive decision tree are introduced. As a proof of principle, tests were performed with CT sets (planning and daily CTs) acquired using a CT-on-rails system during routine CT-guided RT delivery for 20 prostate cancer patients. The contours generated on daily CTs using an auto-segmentation tool (ADMIRE, Elekta, MIM) based on deformable image registration of the planning CT and daily CT were tested. Results: The deformed contours of 20 patients, with a total of 60 structures, were manually checked as baselines; overall, 49% of the contours were incorrect. To evaluate the quality of local deformation, the Jacobian determinant on the contours (1.047 ± 0.045) was analyzed. In an analysis of the deformed rectum contour shell, a higher contour-error detection rate (0.41) was obtained compared with 0.32 for the manual check. All automated detections took less than 5 seconds. Conclusion: The proposed method can effectively detect contour errors at both micro and macro scales by evaluating multiple deformable registration metrics in a parallel computing process. Future work will focus on improving practicability and optimizing the calculation algorithms and metric selection.

  1. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidian distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation for achieving real-time adaptive radiotherapy.

  2. Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial

    NASA Astrophysics Data System (ADS)

    Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.

    2011-03-01

    Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.

  3. Simultaneous automatic scoring and co-registration of hormone receptors in tumor areas in whole slide images of breast cancer tissue slides.

    PubMed

    Trahearn, Nicholas; Tsang, Yee Wah; Cree, Ian A; Snead, David; Epstein, David; Rajpoot, Nasir

    2017-06-01

    Automation of downstream analysis may offer many potential benefits to routine histopathology. One area of interest for automation is in the scoring of multiple immunohistochemical markers to predict the patient's response to targeted therapies. Automated serial slide analysis of this kind requires robust registration to identify common tissue regions across sections. We present an automated method for co-localized scoring of Estrogen Receptor and Progesterone Receptor (ER/PR) in breast cancer core biopsies using whole slide images. Regions of tumor in a series of fifty consecutive breast core biopsies were identified by annotation on H&E whole slide images. Sequentially cut immunohistochemical stained sections were scored manually, before being digitally scanned and then exported into JPEG 2000 format. A two-stage registration process was performed to identify the annotated regions of interest in the immunohistochemistry sections, which were then scored using the Allred system. Overall correlation between manual and automated scoring for ER and PR was 0.944 and 0.883, respectively, with 90% of ER and 80% of PR scores within one point of agreement. This proof of principle study indicates slide registration can be used as a basis for automation of the downstream analysis for clinically relevant biomarkers in the majority of cases. The approach is likely to be improved by implementation of safeguarding analysis steps post registration. © 2016 International Society for Advancement of Cytometry.

  4. Automatic identification of the reference system based on the fourth ventricular landmarks in T1-weighted MR images.

    PubMed

    Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo

    2010-01-01

    The reference system based on the fourth ventricular landmarks (including the fastigial point and ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.

  5. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequence is preprocessed for registration before further analysis such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that will reduce the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. Thermal image sequencing will then be automatically registered using the two-stage genetic algorithm proposed. The deviation before and after image registration will be demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  6. Orthogonal Rings, Fiducial Markers, and Overlay Accuracy When Image Fusion is Used for EVAR Guidance.

    PubMed

    Koutouzi, G; Sandström, C; Roos, H; Henrikson, O; Leonhardt, H; Falkenberg, M

    2016-11-01

    Evaluation of orthogonal rings, fiducial markers, and overlay accuracy when image fusion is used for endovascular aortic repair (EVAR). This was a prospective single centre study. In 19 patients undergoing standard EVAR, 3D image fusion was used for intra-operative guidance. Renal arteries and targeted stent graft positions were marked with rings orthogonal to the respective centre lines from pre-operative computed tomography (CT). Radiopaque reference objects attached to the back of the patient were used as fiducial markers to detect patient movement intra-operatively. Automatic 3D-3D registration of the pre-operative CT with an intra-operative cone beam computed tomography (CBCT) as well as 3D-3D registration after manual alignment of nearby vertebrae were evaluated. Registration was defined as being sufficient for EVAR guidance if the deviation of the origin of the lower renal artery was less than 3 mm. For final overlay registration, the renal arteries were manually aligned using aortic calcification and vessel outlines. The accuracy of the overlay before stent graft deployment was evaluated using digital subtraction angiography (DSA) as direct comparison. Fiducial markers helped in detecting misalignment caused by patient movement during the procedure. Use of automatic intensity based registration alone was insufficient for EVAR guidance. Manual registration based on vertebrae L1-L2 was sufficient in 7/19 patients (37%). Using the final adjusted registration as overlay, the median alignment error of the lower renal artery marking at pre-deployment DSA was 2 mm (0-5) sideways and 2 mm (0-9) longitudinally, mostly in a caudal direction. 3D image fusion can facilitate intra-operative guidance during EVAR. Orthogonal rings and fiducial markers are useful for visualization and overlay correction. However, the accuracy of the overlaid 3D image is not always ideal and further technical development is needed. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  7. Automatic localization of landmark sets in head CT images with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2016-03-01

    Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.

  8. Robust, Globally Consistent, and Fully-automatic Multi-image Registration and Montage Synthesis for 3-D Multi-channel Images

    PubMed Central

    Tsai, Chia-Ling; Lister, James P.; Bjornsson, Christopher J; Smith, Karen; Shain, William; Barnes, Carol A.; Roysam, Badrinath

    2013-01-01

    The need to map regions of brain tissue that are much wider than the field of view of the microscope arises frequently. One common approach is to collect a series of overlapping partial views, and align them to synthesize a montage covering the entire region of interest. We present a method that advances this approach in multiple ways. Our method (1) produces a globally consistent joint registration of an unorganized collection of 3-D multi-channel images with or without stage micrometer data; (2) produces accurate registrations withstanding changes in scale, rotation, translation and shear by using a 3-D affine transformation model; (3) achieves complete automation, and does not require any parameter settings; (4) handles low and variable overlaps (5 – 15%) between adjacent images, minimizing the number of images required to cover a tissue region; (5) has the self-diagnostic ability to recognize registration failures instead of delivering incorrect results; (6) can handle a broad range of biological images by exploiting generic alignment cues from multiple fluorescence channels without requiring segmentation; and (7) is computationally efficient enough to run on desktop computers regardless of the number of images. The algorithm was tested with several tissue samples of at least 50 image tiles, involving over 5,000 image pairs. It correctly registered all image pairs with an overlap greater than 7%, correctly recognized all failures, and successfully joint-registered all images for all tissue samples studied. This algorithm is disseminated freely to the community as included with the FARSIGHT toolkit for microscopy (www.farsight-toolkit.org). PMID:21361958

  9. Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.

    2010-08-01

    Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.

  10. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Viswanathan, A; Cormack, R

    2015-06-15

    Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥ 5, with standard deviation <2 mm for N ≥ 6, and worst-case scenario error <2 mm for N ≥ 11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45 × 10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT and only requiring user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.

  11. Integration of retinal image sequences

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia

    1998-10-01

    In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network can be observed, along with the arterial and venous circulation, using a non-invasive technique. The study of the retinal vessels is very important both for the study of the local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, such as hypertension, arteriosclerosis, and diabetes. In this paper a method for image integration of ocular fundus image sequences is described. The procedure can be divided into two steps: registration and fusion. First we describe an automatic alignment algorithm for registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performances of the alignment method have been estimated by simulating shifts between image pairs and by using a cross-validation approach. Then we propose a temporal integration technique of image sequences so as to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kinds of additive Gaussian noise as well as on real fundus images are reported.
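    A bare-bones illustration of the register-then-average idea follows (FFT-based circular cross-correlation for the interframe shift, then temporal averaging). It ignores the oriented vessel filtering and any sub-pixel handling in the actual method, and all names are illustrative.

```python
import numpy as np

def shift_by_xcorr(ref, frame):
    """Integer-pixel translation of `frame` onto `ref` taken from the peak of
    the FFT-based (circular) cross-correlation."""
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy   # wrap to [-N/2, N/2)
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return np.roll(frame, (dy, dx), axis=(0, 1))

def integrate(frames):
    """Register every frame to the first one and average to suppress noise."""
    ref = frames[0]
    aligned = [ref] + [shift_by_xcorr(ref, f) for f in frames[1:]]
    return np.mean(aligned, axis=0)
```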

  12. Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema.

    PubMed

    Rabbani, Hossein; Allingham, Michael J; Mettu, Priyatham S; Cousins, Scott W; Farsiu, Sina

    2015-01-29

    To create and validate software to automatically segment leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted early and late FA frames. Finally, after postprocessing steps, including detection and inpainting of the vessels, a robust active contour method was utilized to obtain leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of manual interobserver, manual intraobserver, and automatic methods were calculated. The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation methods. Our fully automated algorithm can reproducibly and accurately quantify the area of leakage of clinical-grade FA video and is congruent with expert manual segmentation. The performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  13. Segmentation of radiographic images under topological constraints: application to the femur.

    PubMed

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.

  14. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult to use for novel users. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  15. Landmark-guided diffeomorphic demons algorithm and its application to automatic segmentation of the whole spine and pelvis in CT images.

    PubMed

    Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni; Shimizu, Akinobu

    2017-03-01

    A fully automatic multiatlas-based method for segmentation of the spine and pelvis in a torso CT volume is proposed. A novel landmark-guided diffeomorphic demons algorithm is used to register a given CT image to multiple atlas volumes. This algorithm can utilize both grayscale image information and given landmark coordinate information optimally. The segmentation has four steps. Firstly, 170 bony landmarks are detected in the given volume. Using these landmark positions, an atlas selection procedure is performed to reduce the computational cost of the following registration. Then the chosen atlas volumes are registered to the given CT image. Finally, voxelwise label voting is performed to determine the final segmentation result. The proposed method was evaluated using 50 torso CT datasets as well as the public SpineWeb dataset. As a result, a mean distance error of [Formula: see text] and a mean Dice coefficient of [Formula: see text] were achieved for the whole spine and the pelvic bones, which are competitive with other state-of-the-art methods. From the experimental results, the usefulness of the proposed segmentation method was validated.

  16. An improved method for precise automatic co-registration of moderate and high-resolution spacecraft imagery

    NASA Technical Reports Server (NTRS)

    Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.

    2006-01-01

    Improvements to the automated co-registration and change detection software package AFIDS (Automatic Fusion of Image Data System) have recently completed development for, and validation by, NGA/GIAT. The improvements involve the integration of the AFIDS ultra-fine gridding technique for horizontal displacement compensation with the recently evolved use of Rational Polynomial Functions/Coefficients (RPFs/RPCs) for indexing image raster pixel positions to latitude/longitude. Mapping and orthorectification (correction for elevation effects) of satellite imagery defies exact projective solutions because the data are not obtained from a single point (like a camera), but as a continuous process from the orbital path. Standard image processing techniques can apply approximate solutions, but advances in the state of the art had to be made for precision change-detection and time-series applications where relief offsets become a controlling factor. The earlier AFIDS procedure required the availability of a camera model and knowledge of the satellite platform ephemerides. The recent design advances connect the spacecraft sensor Rational Polynomial Function, a deductively developed model, with the AFIDS ultra-fine grid, an inductively developed representation of the relationship between raster pixel position and latitude/longitude. As a result, RPCs can be updated by AFIDS, a situation often necessary due to the accuracy limits of spacecraft navigation systems. An example of precision change detection will be presented from QuickBird imagery.
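    For context, the Rational Polynomial Coefficient model referred to here maps normalized ground coordinates (latitude, longitude, height) to normalized image line and sample coordinates as ratios of cubic polynomials; this is the standard community definition rather than anything specific to AFIDS:

```latex
r_n \;=\; \frac{P_1(\varphi_n, \lambda_n, h_n)}{P_2(\varphi_n, \lambda_n, h_n)},
\qquad
c_n \;=\; \frac{P_3(\varphi_n, \lambda_n, h_n)}{P_4(\varphi_n, \lambda_n, h_n)},
```

    where each \(P_i\) is a 20-term cubic polynomial and the subscript \(n\) denotes offset-and-scale normalized quantities.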

  17. Multi-atlas-based segmentation of the parotid glands of MR images in patients following head-and-neck cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Cheng, Guanghui; Yang, Xiaofeng; Wu, Ning; Xu, Zhijian; Zhao, Hongfu; Wang, Yuefeng; Liu, Tian

    2013-02-01

    Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume reduction of the parotid glands is an important indicator of radiation damage and xerostomia. In the clinic, parotid-volume evaluation is based exclusively on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train a support vector machine (SVM) that reaches a consensus segmentation of the parotid gland of the target subject. This segmentation algorithm was tested on head-and-neck MRIs of 5 patients following radiotherapy for nasopharyngeal cancer. The average volume overlap between the automatic segmentations and the physicians' manual contours was 85%. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas-based segmentation algorithm for segmenting the parotid glands in head-and-neck MR images.

  18. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.

  19. Self-recalibration of a robot-assisted structured-light-based measurement system.

    PubMed

    Xu, Jing; Chen, Rui; Liu, Shuntao; Guan, Yong

    2017-11-10

    The structured-light-based measurement method is widely employed in numerous fields. However, for industrial inspection, to achieve complete scanning of a work piece and overcome occlusion, the measurement system needs to be moved to different viewpoints. Moreover, frequent reconfiguration of the measurement system may be needed based on the size of the measured object, making the self-recalibration of extrinsic parameters indispensable. To this end, this paper proposes an automatic self-recalibration and reconstruction method, wherein a robot arm is employed to move the measurement system for complete scanning; the self-recalibration is achieved using fundamental matrix calculations and point cloud registration without the need for an accurate calibration gauge. Experimental results demonstrate the feasibility and accuracy of our method.
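
    The fundamental/essential-matrix step of such a self-recalibration can be roughly illustrated with OpenCV (a generic sketch assuming calibrated cameras and already-matched feature points, not the authors' pipeline):

        import cv2
        import numpy as np

        def relative_pose(pts1, pts2, K):
            """Recover the relative rotation and unit-norm translation between two
            viewpoints from matched image points pts1, pts2 ((N, 2) float arrays)
            and a camera intrinsic matrix K assumed known from prior calibration."""
            # Robustly estimate the essential matrix with RANSAC.
            E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
            # Decompose it into a rotation and translation using the inlier set.
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, inliers)
            return R, t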

  20. Image segmentation and registration for the analysis of joint motion from 3D MRI

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Fassbind, Michael; Rohr, Eric; Ledoux, William

    2006-03-01

    We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions including maximal flexion, rotation and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including 3 normal and 3 pathological feet. For validation our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.

  1. Primal/dual linear programming and statistical atlases for cartilage segmentation.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas construction is considered from registered training examples. Segmentation is then solved through registration, which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. This task is reformulated using a discrete set of deformations, and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84, and a sensitivity and specificity of 94.06% and 99.92%, respectively.
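
    For reference, the reported overlap ratio, sensitivity, and specificity can be computed from binary segmentation masks as in this generic evaluation sketch (standard definitions assumed; not code from the paper):

        import numpy as np

        def overlap_metrics(auto_mask, manual_mask):
            """auto_mask, manual_mask: boolean volumes of the automatic and manual segmentations."""
            tp = np.logical_and(auto_mask, manual_mask).sum()
            fp = np.logical_and(auto_mask, ~manual_mask).sum()
            fn = np.logical_and(~auto_mask, manual_mask).sum()
            tn = np.logical_and(~auto_mask, ~manual_mask).sum()
            overlap = tp / (tp + fp + fn)      # Jaccard-style overlap ratio
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return overlap, sensitivity, specificity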

  2. Semantic Registration and Discovery System of Subsystems and Services within an Interoperable Coordination Platform in Smart Cities

    PubMed Central

    Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin

    2016-01-01

    Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a “system of systems” could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services could be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method. PMID:27347965

  3. Semantic Registration and Discovery System of Subsystems and Services within an Interoperable Coordination Platform in Smart Cities.

    PubMed

    Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin

    2016-06-24

    Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a "system of systems" could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services could be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method.

  4. Workflow oriented software support for image guided radiofrequency ablation of focal liver malignancies

    NASA Astrophysics Data System (ADS)

    Weihusen, Andreas; Ritter, Felix; Kröger, Tim; Preusser, Tobias; Zidowitz, Stephan; Peitgen, Heinz-Otto

    2007-03-01

    Image guided radiofrequency (RF) ablation has become a significant part of clinical routine as a minimally invasive method for the treatment of focal liver malignancies. Medical imaging is used in all parts of the clinical workflow of an RF ablation, incorporating treatment planning, interventional targeting, and result assessment. This paper describes a software application designed to support the RF ablation workflow while taking into account the requirements of clinical routine, such as easy user interaction and a high degree of robust and fast automatic procedures, in order to keep the physician from spending too much time at the computer. The application therefore provides a collection of specialized image processing and visualization methods for treatment planning and result assessment. The algorithms are adapted to CT as well as to MR imaging. The planning support contains semi-automatic methods for the segmentation of liver tumors and the surrounding vascular system, as well as interactive virtual positioning of RF applicators and a concluding numerical estimation of the achievable heat distribution. The assessment of the ablation result is supported by segmentation of the coagulative necrosis and an interactive registration of pre- and post-interventional image data for comparison of the tumor and necrosis segmentation masks. An automatic quantification of surface distances is performed to verify the embedding of the tumor area within the thermal lesion area. The visualization methods support representations in the commonly used orthogonal 2D views as well as in 3D scenes.

  5. SU-E-J-109: Accurate Contour Transfer Between Different Image Modalities Using a Hybrid Deformable Image Registration and Fuzzy Connected Image Segmentation Method.

    PubMed

    Yang, C; Paulson, E; Li, X

    2012-06-01

    To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under the challenging conditions of low image contrast and large image deformation, compared with a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input of images of different modalities; (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the target image (e.g., CT) intensity distribution for an enhanced similarity metric; (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours; (4) mapping the deformed volumes onto the target images and calculating mean, variance, and center of mass as the initialization parameters for the consecutive fuzzy connectedness (FC) image segmentation on the target images; (5) generating an affinity map from the FC segmentation; (6) achieving final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer, and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration leads to up to 10% accuracy improvement over the rigid transfer. The two extra proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map further improve the transfer accuracy to 14% on average. Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.

  6. Automated three-dimensional quantification of myocardial perfusion and brain SPECT.

    PubMed

    Slomka, P J; Radau, P; Hurwitz, G A; Dey, D

    2001-01-01

    To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for comparing individual patients to composite datasets on a voxel-by-voxel basis in order to automatically determine statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of external activity before the final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images and can aid physicians in the daily interpretation of tomographic nuclear medicine images.
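
    The voxel-by-voxel comparison against a composite normal model can be illustrated with a simple z-score map (a sketch that assumes the patient volume is already registered and count-normalized to the template; it is not the PERFIT/BRASS implementation, and the threshold value is hypothetical):

        import numpy as np

        def abnormality_map(patient, normal_mean, normal_std, z_thresh=2.5):
            """Return a boolean map of voxels deviating from the normal model.
            patient, normal_mean, normal_std: co-registered volumes of identical shape.
            z_thresh: hypothetical significance threshold in standard deviations."""
            z = (patient - normal_mean) / np.maximum(normal_std, 1e-6)
            return np.abs(z) > z_thresh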

  7. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.

  8. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
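
    The final step, selecting a parameter combination from its ROC point, can be sketched as follows (one common optimality criterion, distance to the ideal ROC corner, is assumed here; the paper's exact criterion may differ):

        import numpy as np

        def best_parameter_set(roc_points, param_sets):
            """Pick the parameter set whose ROC point is closest to the ideal (FPR=0, TPR=1).
            roc_points: list of (false_positive_rate, true_positive_rate) tuples,
            one per parameter combination; param_sets: the corresponding combinations."""
            dists = [np.hypot(fpr - 0.0, tpr - 1.0) for fpr, tpr in roc_points]
            return param_sets[int(np.argmin(dists))]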

  9. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra

    NASA Astrophysics Data System (ADS)

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between the mean edited (ON) and non-edited (OFF) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that simultaneously and iteratively optimizes the relative frequencies and phases between the mean ON and OFF 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, e.g. those arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least-squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF and ON spectra were compared by applying (a) no correction, (b) difference optimization, or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold-standard reference (d). Automatically corrected data applying either method (b) or method (c) showed distinct improvements in spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF and ON. Method (c) revealed a slightly lower correlation coefficient of Rcd = 0.972 ± 0.028 compared to Rbd, which can be ascribed to small remaining subtraction artefacts in the final DIFF spectrum. In conclusion, difference optimization performs robustly with no restrictions regarding the input data range or user intervention and represents a complementary tool to optimize the final DIFF spectrum following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging.
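
    The idea of minimizing the L1 norm of the difference spectrum over relative frequency and phase shifts can be sketched with a brute-force search (an illustration only; the authors use an iterative optimizer rather than the exhaustive grid assumed here):

        import numpy as np

        def align_on_to_off(on_fid, off_fid, t, freq_range, phase_range):
            """Search frequency (Hz) and phase (rad) shifts of the ON FID that minimize
            the L1 norm of the complex difference spectrum against the OFF FID.
            on_fid, off_fid: complex time-domain signals; t: time axis in seconds;
            freq_range, phase_range: 1D arrays of candidate shifts (assumed grids)."""
            off_spec = np.fft.fft(off_fid)
            best = (0.0, 0.0, np.inf)
            for df in freq_range:
                for dphi in phase_range:
                    shifted = on_fid * np.exp(1j * (2 * np.pi * df * t + dphi))
                    cost = np.sum(np.abs(np.fft.fft(shifted) - off_spec))
                    if cost < best[2]:
                        best = (df, dphi, cost)
            return best[:2]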

  10. Difference optimization: Automatic correction of relative frequency and phase for mean non-edited and edited GABA 1H MEGA-PRESS spectra.

    PubMed

    Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R

    2017-06-01

    Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance to obtain reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between the mean edited (ON) and non-edited (OFF) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts compromising reliable GABA quantitation. We present a fully automatic routine that simultaneously and iteratively optimizes the relative frequencies and phases between the mean ON and OFF 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, e.g. those arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least-squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3T. The results of the alignment between the mean OFF and ON spectra were compared by applying (a) no correction, (b) difference optimization, or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold-standard reference (d). Automatically corrected data applying either method (b) or method (c) showed distinct improvements in spectral quality, as revealed by the mean Pearson correlation coefficient between corresponding real-part mean DIFF spectra of Rbd = 0.997 ± 0.003 (method (b) vs. (d)), compared to Rad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF and ON. Method (c) revealed a slightly lower correlation coefficient of Rcd = 0.972 ± 0.028 compared to Rbd, which can be ascribed to small remaining subtraction artefacts in the final DIFF spectrum. In conclusion, difference optimization performs robustly with no restrictions regarding the input data range or user intervention and represents a complementary tool to optimize the final DIFF spectrum following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  12. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera.

    PubMed

    Wang, Hongkai; Stout, David B; Taschereau, Richard; Gu, Zheng; Vu, Nam T; Prout, David L; Chatziioannou, Arion F

    2012-10-07

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  13. Intraoperative laser speckle contrast imaging with retrospective motion correction for quantitative assessment of cerebral blood flow

    PubMed Central

    Richards, Lisa M.; Towle, Erica L.; Fox, Douglas J.; Dunn, Andrew K.

    2014-01-01

    Although multiple intraoperative cerebral blood flow (CBF) monitoring techniques are currently available, a quantitative method that allows for continuous monitoring and that can be easily integrated into the surgical workflow is still needed. Laser speckle contrast imaging (LSCI) is an optical imaging technique with a high spatiotemporal resolution that has been recently demonstrated as feasible and effective for intraoperative monitoring of CBF during neurosurgical procedures. This study demonstrates the impact of retrospective motion correction on the quantitative analysis of intraoperatively acquired LSCI images. LSCI images were acquired through a surgical microscope during brain tumor resection procedures from 10 patients under baseline conditions and after a cortical stimulation in three of those patients. The patient’s electrocardiogram (ECG) was recorded during acquisition for postprocess correction of pulsatile artifacts. Automatic image registration was retrospectively performed to correct for tissue motion artifacts, and the performance of rigid and nonrigid transformations was compared. In baseline cases, the original images had 25% ± 27% noise across 16 regions of interest (ROIs). ECG filtering moderately reduced the noise to 20% ± 21%, while image registration resulted in a further noise reduction of 15% ± 4%. Combined ECG filtering and image registration significantly reduced the noise to 6.2% ± 2.6% (p < 0.05). Using the combined motion correction, accuracy and sensitivity to small changes in CBF were improved in cortical stimulation cases. There was also excellent agreement between rigid and nonrigid registration methods (15/16 ROIs with <3% difference). Results from this study demonstrate the importance of motion correction for improved visualization of CBF changes in clinical LSCI images. PMID:26157974

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rüegsegger, Michael B.; Steiner, Patrick; Kowal, Jens H., E-mail: jens.kowal@artorg.unibe.ch

    Purpose: External beam radiation therapy is currently considered the most common treatment modality for intraocular tumors. Localization of the tumor and efficient compensation of tumor misalignment with respect to the radiation beam are crucial. According to the state-of-the-art procedure, localization of the target volume is performed indirectly by the invasive surgical implantation of radiopaque clips or is limited to positioning the head using stereoscopic radiographies. This work represents a proof of concept for direct and noninvasive tumor referencing based on anterior eye topography acquired using optical coherence tomography (OCT). Methods: A prototype of a head-mounted device has been developed for automatic monitoring of tumor position and orientation in the isocentric reference frame for LINAC-based treatment of intraocular tumors. Noninvasive tumor referencing is performed with six degrees of freedom based on anterior eye topography acquired using OCT and registration of a statistical eye model. The proposed prototype was tested on enucleated pig eyes, and registration accuracy was measured by comparison of the resulting transformation with tilt and torsion angles manually induced using a custom-made test bench. Results: Validation based on 12 enucleated pig eyes revealed an overall average registration error of 0.26 ± 0.08° in 87 ± 0.7 ms for tilting and 0.52 ± 0.03° in 94 ± 1.4 ms for torsion. Furthermore, the dependency of the mean registration error on sampling density was quantitatively assessed. Conclusions: The tumor referencing method presented, in combination with the statistical eye model introduced in the past, has the potential to enable noninvasive treatment and may improve the quality, efficacy, and flexibility of external beam radiotherapy of intraocular tumors.

  15. INVITED REVIEW--IMAGE REGISTRATION IN VETERINARY RADIATION ONCOLOGY: INDICATIONS, IMPLICATIONS, AND FUTURE ADVANCES.

    PubMed

    Feng, Yang; Lawrence, Jessica; Cheng, Kun; Montgomery, Dean; Forrest, Lisa; Mclaren, Duncan B; McLaughlin, Stephen; Argyle, David J; Nailon, William H

    2016-01-01

    The field of veterinary radiation therapy (RT) has gained substantial momentum in recent decades with significant advances in conformal treatment planning, image-guided radiation therapy (IGRT), and intensity-modulated (IMRT) techniques. At the root of these advancements lie improvements in tumor imaging, image alignment (registration), target volume delineation, and identification of critical structures. Image registration has been widely used to combine information from multimodality images such as computerized tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) to improve the accuracy of radiation delivery and reliably identify tumor-bearing areas. Many different techniques have been applied in image registration. This review provides an overview of medical image registration in RT and its applications in veterinary oncology. A summary of the most commonly used approaches in human and veterinary medicine is presented along with their current use in IGRT and adaptive radiation therapy (ART). It is important to realize that registration does not guarantee that target volumes, such as the gross tumor volume (GTV), are correctly identified on the image being registered, as limitations unique to registration algorithms exist. Research involving novel registration frameworks for automatic segmentation of tumor volumes is ongoing and comparative oncology programs offer a unique opportunity to test the efficacy of proposed algorithms. © 2016 American College of Veterinary Radiology.

  16. Subcortical structure segmentation using probabilistic atlas priors

    NASA Astrophysics Data System (ADS)

    Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido

    2007-03-01

    The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as the lateral ventricles, putamen, caudate, hippocampus, pallidus, and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity-inhomogeneity corrected, skull stripped, and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray-level intensity. Finally, the registration transformation is applied to the probabilistic map of each structure, which is then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variation of less than 2 percent over the whole dataset. Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, whereas they still show appropriate Dice overlap coefficients.
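
    The last two steps, warping a structure's probabilistic map into subject space and thresholding it at 0.5, can be sketched with SimpleITK (file names and the precomputed transform are hypothetical; the original work used its own registration tooling):

        import SimpleITK as sitk

        # Hypothetical inputs: the subject image, one structure's probabilistic map in
        # atlas space, and the atlas-to-subject transform from the deformable registration.
        subject = sitk.ReadImage("subject_mr.nii.gz")
        prob_map = sitk.ReadImage("atlas_prob_hippocampus.nii.gz")
        atlas_to_subject = sitk.ReadTransform("atlas_to_subject.tfm")

        # Resample the probability map into subject space with linear interpolation,
        # then threshold at 0.5 to obtain the binary segmentation.
        warped = sitk.Resample(prob_map, subject, atlas_to_subject,
                               sitk.sitkLinear, 0.0, prob_map.GetPixelID())
        segmentation = sitk.BinaryThreshold(warped, lowerThreshold=0.5, upperThreshold=1.0,
                                            insideValue=1, outsideValue=0)
        sitk.WriteImage(segmentation, "hippocampus_seg.nii.gz")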

  17. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.

  18. Optimization of real-time rigid registration motion compensation for prostate biopsies using 2D/3D ultrasound

    NASA Astrophysics Data System (ADS)

    Gillies, Derek J.; Gardi, Lori; Zhao, Ren; Fenster, Aaron

    2017-03-01

    During image-guided prostate biopsy, needles are targeted at suspicious tissues to obtain specimens that are later examined histologically for cancer. Patient motion causes inaccuracies when using MR-transrectal ultrasound (TRUS) image fusion approaches used to augment the conventional biopsy procedure. Motion compensation using a single, user-initiated correction can be performed to temporarily compensate for prostate motion, but a real-time continuous registration offers an improvement to clinical workflow by reducing user interaction and procedure time. An automatic motion compensation method, approaching the frame rate of a TRUS-guided system, has been developed for use during fusion-based prostate biopsy to improve image guidance. 2D and 3D TRUS images of a prostate phantom were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization, with user-initiated and continuous registration techniques. The user-initiated correction was performed with observed computation times of 78 ± 35 ms, 74 ± 28 ms, and 113 ± 49 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.5 ± 0.5 mm, 1.5 ± 1.4 mm, and 1.5 ± 1.6°. The continuous correction performed significantly faster (p < 0.05) than the user-initiated method, with observed computation times of 31 ± 4 ms, 32 ± 4 ms, and 31 ± 6 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.2 ± 0.2 mm, 0.6 ± 0.5 mm, and 0.8 ± 0.4°.
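
    The intensity-based core of such a method, normalized cross-correlation optimized with Powell's method, can be sketched as follows (a simplified illustration in which extract_plane is a hypothetical placeholder for the scanner-specific 2D/3D TRUS resampling):

        import numpy as np
        from scipy.optimize import minimize

        def ncc(a, b):
            """Normalized cross-correlation between two equally sized images."""
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return np.mean(a * b)

        def register(live_2d, volume_3d, extract_plane, x0=np.zeros(6)):
            """Maximize NCC between the live 2D frame and a plane resampled from the
            3D volume. extract_plane(volume, params) -> 2D image; params hold three
            translations and three rotations (extract_plane is a placeholder)."""
            cost = lambda p: -ncc(live_2d, extract_plane(volume_3d, p))
            res = minimize(cost, x0, method="Powell")
            return res.x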

  19. TU-G-BRA-05: Predicting Volume Change of the Tumor and Critical Structures Throughout Radiation Therapy by CT-CBCT Registration with Local Intensity Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; Kiess, A

    2015-06-15

    Purpose: The purpose of this study is to develop an accurate and effective technique to predict and monitor volume changes of the tumor and organs at risk (OARs) from daily cone-beam CTs (CBCTs). Methods: While CBCT is typically used to minimize the patient setup error, its poor image quality impedes accurate monitoring of daily anatomical changes in radiotherapy. Reconstruction artifacts in CBCT often cause undesirable errors in registration-based contour propagation from the planning CT, a conventional way to estimate anatomical changes. To improve the registration and segmentation accuracy, we developed a new deformable image registration (DIR) that iteratively corrects CBCT intensities using slice-based histogram matching during the registration process. Three popular DIR algorithms (hierarchical B-spline, demons, optical flow) augmented by the intensity correction were implemented on a graphics processing unit for efficient computation, and their performances were evaluated on six head and neck (HN) cancer cases. Four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction's CBCT for each case, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial software, VelocityAI (Varian Medical Systems Inc.). Results: Manual contouring showed significant variations, [-76, +141]% from the mean of all four sets of contours. The volume differences (mean ± std in cc) between the average manual segmentation and the four automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). In comparison to the average volume of the manual segmentations, the proposed approach significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the conventional mutual-information-based method (VelocityAI). Conclusion: The proposed CT-CBCT registration with local CBCT intensity correction can accurately predict tumor volume changes with reduced errors. Although demonstrated only on HN nodal GTVs, the results imply improved accuracy for other critical structures. This work was supported by NIH/NCI under grant R42CA137886.
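
    The slice-based intensity correction can be illustrated by matching each axial CBCT slice's histogram to the corresponding planning-CT slice (a generic scikit-image sketch; in the described method the correction is interleaved with the registration iterations rather than applied once):

        import numpy as np
        from skimage.exposure import match_histograms

        def correct_cbct_intensities(cbct, ct):
            """Match each axial CBCT slice's histogram to the corresponding CT slice.
            cbct, ct: 3D arrays of identical shape (assumed already roughly aligned)."""
            corrected = np.empty_like(cbct, dtype=np.float64)
            for k in range(cbct.shape[0]):
                corrected[k] = match_histograms(cbct[k].astype(np.float64),
                                                ct[k].astype(np.float64))
            return corrected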

  20. Validation of Imaging With Pathology in Laryngeal Cancer: Accuracy of the Registration Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caldas-Magalhaes, Joana, E-mail: J.CaldasMagalhaes@umcutrecht.nl; Kasperts, Nicolien; Kooij, Nina

    2012-02-01

    Purpose: To investigate the feasibility and accuracy of an automated method to validate gross tumor volume (GTV) delineations with pathology in laryngeal and hypopharyngeal cancer. Methods and Materials: High-resolution computed tomography (CTHR), magnetic resonance imaging (MRI), and positron emission tomography (PET) scans were obtained from 10 patients before total laryngectomy. The GTV was delineated separately in each imaging modality. The laryngectomy specimen was sliced transversely in 3-mm-thick slices, and whole-mount hematoxylin-eosin stained (H&E) sections were obtained. A pathologist delineated tumor tissue in the H&E sections (GTVPATH). An automatic three-dimensional (3D) reconstruction of the specimen was performed, and the CTHR, MRI, and PET were semiautomatically and rigidly registered to the 3D specimen. The accuracy of the pathology-imaging registration and the specimen deformation and shrinkage were assessed. The tumor delineation inaccuracies were compared with the registration errors. Results: Good agreement was observed between anatomical landmarks in the 3D specimen and in the in vivo images. Limited deformation and shrinkage (3% ± 1%) were found inside the cartilage skeleton. The root mean squared error of the registration between the 3D specimen and the CT, MRI, and PET was on average 1.5, 3.0, and 3.3 mm, respectively, in the cartilage skeleton. The GTVPATH volume was 7.2 mL on average. The GTVs based on CT, MRI, and PET generated a mean volume of 14.9, 18.3, and 9.8 mL and covered the GTVPATH by 85%, 88%, and 77%, respectively. The tumor delineation inaccuracies exceeded the registration error in all the imaging modalities. Conclusions: Validation of GTV delineations with pathology is feasible with an average overall accuracy below 3.5 mm inside the laryngeal skeleton. The tumor delineation inaccuracies were larger than the registration error. Therefore, an accurate histological validation of anatomical and functional imaging techniques for GTV delineation is possible in laryngeal cancer patients.

  1. Understanding bone responses in B-mode ultrasound images and automatic bone surface extraction using a Bayesian probabilistic framework

    NASA Astrophysics Data System (ADS)

    Jain, Ameet K.; Taylor, Russell H.

    2004-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). Although US has many advantages over others, tracked US for Orthopedic Surgery has been researched by only a few authors. An important factor limiting the accuracy of tracked US to CT registration (1-3mm) has been the difficulty in determining the exact location of the bone surfaces in the US images (the response could range from 2-4mm). Thus it is crucial to localize the bone surface accurately from these images. Moreover conventional US imaging systems are known to have certain inherent inaccuracies, mainly due to the fact that the imaging model is assumed planar. This creates the need to develop a bone segmentation framework that can couple information from various post-processed spatially separated US images (of the bone) to enhance the localization of the bone surface. In this paper we discuss the various reasons that cause inherent uncertainties in the bone surface localization (in B-mode US images) and suggest methods to account for these. We also develop a method for automatic bone surface detection. To do so, we account objectively for the high-level understanding of the various bone surface features visible in typical US images. A combination of these features would finally decide the surface position. We use a Bayesian probabilistic framework, which strikes a fair balance between high level understanding from features in an image and the low level number crunching of standard image processing techniques. It also provides us with a mathematical approach that facilitates combining multiple images to augment the bone surface estimate.

  2. A prostate CAD system based on multiparametric analysis of DCE T1-w, and DW automatically registered images

    NASA Astrophysics Data System (ADS)

    Giannini, Valentina; Vignati, Anna; Mazzetti, Simone; De Luca, Massimo; Bracco, Christian; Stasi, Michele; Russo, Filippo; Armando, Enrico; Regge, Daniele

    2013-02-01

    Prostate-specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this benefit is associated with a high risk of overdiagnosis and overtreatment. As prostate transrectal ultrasound-guided biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% with a considerable false-negative rate, more accurate methods need to be found to detect or rule out significant disease. Prostate magnetic resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive detection tool, in particular by exploiting the combination of anatomical and functional information in a multiparametric framework. The purpose of this study was to describe a computer aided diagnosis (CAD) method that automatically produces a malignancy likelihood map by combining information from dynamic contrast-enhanced MR images and diffusion-weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of images of different sequences, in order to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC = 0.87) should be validated on a larger dataset, but they suggest that discrimination on a voxel basis between benign and malignant tissues is feasible with good performance. This method can help improve the diagnostic accuracy of the radiologist, reduce reader variability, and speed up the reading time by automatically highlighting probable cancer-suspicious regions.
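
    A voxelwise Bayesian fusion of multiparametric features into a malignancy probability map can be sketched with a naive-Bayes model using Gaussian class likelihoods (feature names, parameters, and the prior below are hypothetical and are not those of the described CAD system):

        import numpy as np
        from scipy.stats import norm

        def malignancy_map(features, benign_params, malignant_params, prior_malignant=0.05):
            """features: dict name -> co-registered 3D array (e.g. a DCE or DW-derived map).
            benign_params / malignant_params: dict name -> (mean, std) of the Gaussian
            class likelihood for each feature. Returns a voxelwise posterior probability."""
            log_b = np.log(1.0 - prior_malignant)
            log_m = np.log(prior_malignant)
            for name, vol in features.items():
                mb, sb = benign_params[name]
                mm, sm = malignant_params[name]
                log_b = log_b + norm.logpdf(vol, mb, sb)
                log_m = log_m + norm.logpdf(vol, mm, sm)
            return 1.0 / (1.0 + np.exp(log_b - log_m))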

  3. Comparison of carina-based versus bony anatomy-based registration for setup verification in esophageal cancer radiotherapy.

    PubMed

    Machiels, Mélanie; Jin, Peng; van Gurp, Christianne H; van Hooft, Jeanin E; Alderliesten, Tanja; Hulshof, Maarten C C M

    2018-03-21

    To investigate the feasibility and geometric accuracy of carina-based registration for CBCT-guided setup verification in esophageal cancer IGRT, compared with the current practice of bony anatomy-based registration. Included were 24 esophageal cancer patients with 65 implanted fiducial markers visible on planning CTs and follow-up CBCTs. All available CBCT scans (n = 236) were rigidly registered to the planning CT with respect to the bony anatomy and the carina. Target coverage was visually inspected and marker position variation was quantified relative to both registration approaches; the variation of systematic (Σ) and random errors (σ) was estimated. Automatic carina-based registration was feasible in 94.9% of the CBCT scans, with adequate target coverage in 91.1%, compared to 100% after bony anatomy-based registration. Overall, Σ (σ) in the LR/CC/AP directions was 2.9 (2.4)/4.1 (2.4)/2.2 (1.8) mm using the bony anatomy registration, compared to 3.3 (3.0)/3.6 (2.6)/3.9 (3.1) mm for the carina. Markers placed in the mid-thoracic region showed a smaller, though non-significant, Σ in the CC and AP directions when using carina-based registration. Compared with bony anatomy-based registration, carina-based registration for esophageal cancer IGRT results in inadequate target coverage in 8.9% of cases. Furthermore, larger Σ and σ, requiring larger anisotropic margins, were seen after carina-based registration. Only for tumors entirely confined to the mid-thoracic region might carina-based registration be slightly favorable.
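
    The systematic (Σ) and random (σ) error components are conventionally estimated from the per-patient setup errors as in the following sketch (the standard population statistics are assumed here; the authors' exact computation is not given in the abstract):

        import numpy as np

        def setup_error_components(errors_per_patient):
            """errors_per_patient: list of 1D arrays, one array of per-fraction errors (mm)
            along a single axis for each patient.
            Sigma: SD of the per-patient mean errors (systematic component).
            sigma: root-mean-square of the per-patient SDs (random component)."""
            means = np.array([np.mean(e) for e in errors_per_patient])
            sds = np.array([np.std(e, ddof=1) for e in errors_per_patient])
            Sigma = np.std(means, ddof=1)
            sigma = np.sqrt(np.mean(sds ** 2))
            return Sigma, sigma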

  4. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Liu, Li; Chen, Jinhu

    2014-06-01

    Purpose: The aim of this study was to automatically extract liver structures from daily cone-beam CT (CBCT) images. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were regarded as the training dataset for probabilistic atlas and shape prior model construction. Firstly, a probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with an L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy; the initial liver region was then converted to a surface mesh, which was registered with the shape model in which the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, the manually segmented contours were first converted into meshes, and then an arbitrary patient dataset was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with the other patient data in an iterative construction that removes the bias caused by arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparison with the manual ones. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%-95% for CBCT images. Conclusion: The experiment demonstrated that liver structures can be extracted accurately from CBCT images with artifacts for subsequent adaptive radiation therapy. This work is supported by National Natural Science Foundation of China (No. 61201441), Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), Jinan youth science and technology star (No. 20120109).

  5. Endoluminal surface registration for CT colonography using haustral fold matching

    PubMed Central

    Hampshire, Thomas; Roth, Holger R.; Helbren, Emma; Plumb, Andrew; Boone, Darren; Slabaugh, Greg; Halligan, Steve; Hawkes, David J.

    2013-01-01

    Computed Tomographic (CT) colonography is a technique used for the detection of bowel cancer or potentially precancerous polyps. The procedure is performed routinely with the patient both prone and supine to differentiate fixed colonic pathology from mobile faecal residue. Matching corresponding locations is difficult and time consuming for radiologists due to colonic deformations that occur during patient repositioning. We propose a novel method to establish correspondence between the two acquisitions automatically. The problem is first simplified by detecting haustral folds using a graph cut method applied to a curvature-based metric applied to a surface mesh generated from segmentation of the colonic lumen. A virtual camera is used to create a set of images that provide a metric for matching pairs of folds between the prone and supine acquisitions. Image patches are generated at the fold positions using depth map renderings of the endoluminal surface and optimised by performing a virtual camera registration over a restricted set of degrees of freedom. The intensity difference between image pairs, along with additional neighbourhood information to enforce geometric constraints over a 2D parameterisation of the 3D space, are used as unary and pair-wise costs respectively, and included in a Markov Random Field (MRF) model to estimate the maximum a posteriori fold labelling assignment. The method achieved fold matching accuracy of 96.0% and 96.1% in patient cases with and without local colonic collapse. Moreover, it improved upon an existing surface-based registration algorithm by providing an initialisation. The set of landmark correspondences is used to non-rigidly transform a 2D source image derived from a conformal mapping process on the 3D endoluminal surface mesh. This achieves full surface correspondence between prone and supine views and can be further refined with an intensity based registration showing a statistically significant improvement (p < 0.001), and decreasing mean error from 11.9 mm to 6.0 mm measured at 1743 reference points from 17 CTC datasets. PMID:23845949

  6. Correction of patient motion in cone-beam CT using 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.

    2017-12-01

    Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was  >0.995, with significant improvement (p  <  0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.

  7. A registration-based segmentation method with application to adiposity analysis of mice microCT images

    NASA Astrophysics Data System (ADS)

    Bai, Bing; Joshi, Anand; Brandhorst, Sebastian; Longo, Valter D.; Conti, Peter S.; Leahy, Richard M.

    2014-04-01

    Obesity is a global health problem, particularly in the U.S. where one third of adults are obese. A reliable and accurate method of quantifying obesity is necessary. Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) are two measures of obesity that reflect different associated health risks, but accurate measurements in humans or rodent models are difficult. In this paper we present an automatic, registration-based segmentation method for mouse adiposity studies using microCT images. We co-register the subject CT image and a mouse CT atlas. Our method is based on surface matching of the microCT image and an atlas. Surface-based elastic volume warping is used to match the internal anatomy. We acquired a whole body scan of a C57BL6/J mouse injected with contrast agent using microCT and created a whole body mouse atlas by manually delineating the boundaries of the mouse and major organs. For method verification we scanned a C57BL6/J mouse from the base of the skull to the distal tibia. We registered the obtained mouse CT image to our atlas. Preliminary results show that we can warp the atlas image to match the posture and shape of the subject CT image, which has significant differences from the atlas. We plan to use this software tool in longitudinal obesity studies using mouse models.

  8. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from its expected alignment, TRAC may give incorrect feedback for the control of the robot. As a simple example, if the robot operator thinks the camera is right side up while it is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot by arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.

  9. SimITK: rapid ITK prototyping using the Simulink visual programming environment

    NASA Astrophysics Data System (ADS)

    Dickinson, A. W. L.; Mousavi, P.; Gobbi, D. G.; Abolmaesumi, P.

    2011-03-01

    The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. This package is a collection of C++ libraries that can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily templated C++ nature of ITK, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as this data is used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial-CT data as well as a 3D MR-to-CT registration workflow are presented as a proof-of-concept.

  10. An automated A-value measurement tool for accurate cochlear duct length estimation.

    PubMed

    Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K

    2018-01-22

    There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochleae specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High resolution micro-CT images of the same 19 specimens were used to measure the gold standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold standard micro-CT A-values (r = 0.70, p < 0.01 and r = 0.69, p < 0.01, respectively). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual measurements by experts. This open-source tool has the potential to benefit cochlear implant recipients in the future.
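
    Once the registration maps the atlas A-value fiducials into a patient image, the A-value itself is just the distance between the two mapped points. The sketch below illustrates this with a plain 4x4 affine and made-up coordinates; the non-rigid B-spline part of the transform described above is omitted for brevity, and all numbers are hypothetical.

      import numpy as np

      def transform_points(points, affine):
          """Apply a 4x4 affine transform to an Nx3 array of points (mm)."""
          homogeneous = np.c_[points, np.ones(len(points))]
          return (homogeneous @ affine.T)[:, :3]

      def a_value(round_window, basal_turn_far_point):
          """A-value: distance from the round window to the furthest basal-turn point."""
          return float(np.linalg.norm(basal_turn_far_point - round_window))

      atlas_fiducials = np.array([[0.0, 0.0, 0.0],    # round window (illustrative)
                                  [8.9, 1.2, 0.5]])   # far point on the basal turn
      registration = np.eye(4)                        # placeholder for the estimated transform
      rw, bt = transform_points(atlas_fiducials, registration)
      print(f"Estimated A-value: {a_value(rw, bt):.2f} mm")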

  11. Accuracy of Automatic Cephalometric Software on Landmark Identification

    NASA Astrophysics Data System (ADS)

    Anuwongnukroh, N.; Dechkunakorn, S.; Damrongsri, S.; Nilwarat, C.; Pudpong, N.; Radomsutthisarn, W.; Kangern, S.

    2017-11-01

    The aim of this study was to assess the accuracy of an automatic cephalometric analysis software in the identification of cephalometric landmarks. Thirty randomly selected digital lateral cephalograms of patients undergoing orthodontic treatment were used in this study. Thirteen landmarks (S, N, Or, A-point, U1T, U1A, B-point, Gn, Pog, Me, Go, L1T, and L1A) were identified on the digital image by the automatic cephalometric software and on cephalometric tracing by the manual method. Superimposition of the printed image and the manual tracing was done by registration at the soft tissue profiles. The accuracy of landmarks located by the automatic method was compared with that of the manually identified landmarks by measuring the mean differences of distances of each landmark on the Cartesian plane, where the X and Y coordinate axes passed through the center of the ear rod. A one-sample t-test was used to evaluate the mean differences. Statistically significant mean differences (p<0.05) were found in 5 landmarks (Or, A-point, Me, L1T, and L1A) in the horizontal direction and 7 landmarks (Or, A-point, U1T, U1A, B-point, Me, and L1A) in the vertical direction. Four landmarks (Or, A-point, Me, and L1A) showed significant (p<0.05) mean differences in both horizontal and vertical directions. Small mean differences (<0.5 mm) were found for S, N, B-point, Gn, and Pog in the horizontal direction and N, Gn, Me, and L1T in the vertical direction. Large mean differences were found for A-point (3.0-3.5 mm) in the horizontal direction and L1A (>4 mm) in the vertical direction. Only 5 of 13 landmarks (38.46%; S, N, Gn, Pog, and Go) showed no significant mean difference between the automatic and manual landmarking methods. It is concluded that if this automatic cephalometric analysis software is used for orthodontic diagnosis, the orthodontist must correct or modify the position of landmarks in order to increase the accuracy of the cephalometric analysis.
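
    The statistical comparison reported above amounts to a one-sample t-test of the automatic-minus-manual coordinate differences against a population mean of zero, applied separately to the horizontal and vertical components of each landmark. A small sketch with synthetic differences is shown below; the numbers are invented purely to demonstrate the test call.

      import numpy as np
      from scipy import stats

      # Hypothetical horizontal differences (automatic minus manual, in mm)
      # for one landmark across 30 cephalograms.
      dx = np.random.default_rng(0).normal(loc=0.8, scale=1.5, size=30)

      t, p = stats.ttest_1samp(dx, popmean=0.0)
      print(f"mean difference = {dx.mean():.2f} mm, t = {t:.2f}, p = {p:.3f}")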

  12. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  13. Radiotherapy treatment planning: benefits of CT-MR image registration and fusion in tumor volume delineation.

    PubMed

    Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija

    2013-08-01

    The development of imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) has made a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and also minimizes geographical misses. Complementary information from both modalities is obtained by registration and fusion of images, performed either manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and compare delineation obtained on CT alone versus CT-MRI image fusion. The image fusion software (XIO CMS 4.50.0) was used to delineate target volumes in 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated consecutively on CT alone and on CT+MRI images, and image fusion was obtained. Image fusion showed that the CTV delineated on a CT image study set is mainly inadequate for treatment planning in comparison with the CTV delineated on the CT-MRI fused image study set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV, and local disease control.

  14. Semi-automated location identification of catheters in digital chest radiographs

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Reeves, Anthony P.; Cham, Matthew D.; Henschke, Claudia I.; Yankelevitz, David F.

    2007-03-01

    Localization of catheter tips is the most common task in intensive care unit imaging. In this work, catheters appearing in digital chest radiographs acquired by portable chest x-ray were tracked using a semi-automatic method. Because catheters are synthetic objects, their profile does not vary drastically along their length. Therefore, we use forward-looking registration with normalized cross-correlation in order to take advantage of a priori information about the catheter profile. The registration is accomplished with a two-dimensional template representative of the catheter to be tracked, generated using two seed points given by the user. To validate catheter tracking with this method, we look at two metrics: accuracy and precision. The algorithm's results are compared to a ground truth established by catheter midlines marked by expert radiologists. Using 12 objects of interest comprising naso-gastric and endo-tracheal tubes, chest tubes, and PICC and central venous catheters, we find that our algorithm can fully track 75% of the objects of interest, with average tracking accuracy and precision of 85.0% and 93.6%, respectively, using the above metrics. Such a technique would be useful for physicians wishing to verify the positioning of catheter tips using chest radiographs.

  15. Fast DRR generation for 2D to 3D registration on GPUs.

    PubMed

    Tornai, Gábor János; Cserey, György; Pappas, Ion

    2012-08-01

    The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity-based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares the performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean execution time increased linearly with the number of pixels when sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line: in less than a second, or, depending on the application and hardware, within a few seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
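
    For orientation, the sketch below computes a very small CPU stand-in for a DRR: a parallel-beam line integral through a rotated volume. The paper's renderer is a perspective ray-caster executed per detector pixel on the GPU, so this is only a conceptual illustration, and the synthetic volume is invented.

      import numpy as np
      from scipy.ndimage import rotate

      def drr_parallel(volume, angle_deg):
          """Parallel-beam DRR: rotate the (z, y, x) volume about the z axis and
          integrate attenuation along y, i.e. one line integral per detector pixel."""
          rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
          return rotated.sum(axis=1)

      # Example with the paper's 512 x 512 x 72 volume size and a synthetic block.
      vol = np.zeros((72, 512, 512), dtype=np.float32)
      vol[30:40, 200:300, 200:300] = 1.0
      image = drr_parallel(vol, angle_deg=30.0)   # resulting DRR has shape (72, 512)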

  16. Vision based tunnel inspection using non-rigid registration

    NASA Astrophysics Data System (ADS)

    Badshah, Amir; Ullah, Shan; Shahzad, Danish

    2015-04-01

    The growing number of long tunnels across the globe has increased the need for safety measurements and inspections of tunnels. To avoid serious damage, tunnel inspection is recommended at regular intervals so that any deformations or cracks are found in time. While they must follow stringent safety and tunnel accessibility standards, conventional geodetic surveying using civil engineering techniques and other manual and mechanical methods is time consuming and disruptive to routine operation. An automatic tunnel inspection approach based on image processing techniques using non-rigid registration is therefore proposed. Many image processing methods are used for image registration. Most of them operate on images in the spatial domain, for example finding edges and corners with the Harris detector. These methods are quite time consuming and can fail, for instance on blurred or noisy images. Because they use image features directly, such methods are grouped together as feature-based correlation. The alternative is featureless correlation, in which the images are converted into the frequency domain and then correlated with each other. A translation in the spatial domain appears as a phase shift in the frequency domain, so the same shift can be recovered, but the processing is an order of magnitude faster than in the spatial domain. In the proposed method, a modified normalized phase correlation is used to find the shift between two images. As pre-processing, the tunnel images, i.e. reference and template, are divided into small patches, and all corresponding patches are registered by the proposed modified normalized phase correlation. The algorithm yields the pixel displacement between the images, and these pixel shifts are then converted into measurement units such as mm or cm. After the complete process, any shift of the tunnel at the inspected points is located.
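
    The core of the featureless approach is standard phase correlation: the normalized cross-power spectrum of two patches has an inverse FFT that peaks at their relative shift. A plain version is sketched below (the paper uses a modified normalized variant, and sub-pixel refinement and the unit conversion are omitted).

      import numpy as np

      def phase_correlation_shift(reference, template, eps=1e-9):
          """Estimate the integer (dy, dx) translation between two same-sized patches."""
          F1 = np.fft.fft2(reference)
          F2 = np.fft.fft2(template)
          cross_power = F1 * np.conj(F2)
          cross_power /= np.abs(cross_power) + eps      # normalize to pure phase
          corr = np.fft.ifft2(cross_power).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          if dy > corr.shape[0] // 2:                   # wrap to signed shifts
              dy -= corr.shape[0]
          if dx > corr.shape[1] // 2:
              dx -= corr.shape[1]
          return dy, dx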

  17. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT segmentation based on 15 patients.

  18. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.

  19. Building generic anatomical models using virtual model cutting and iterative registration.

    PubMed

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created in the final step. Our method is very flexible and easy to use, such that anyone can use image stacks to create models and retrieve sub-regions from them with ease. The Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.

  20. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
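
    A compact sketch of the feature-based stitching pipeline is shown below using OpenCV. ORB features are used as a freely available stand-in for SURF (which requires the non-free opencv-contrib build), and the overlap is simply overwritten rather than blended, so this only illustrates the registration-and-warp structure of the method.

      import cv2
      import numpy as np

      def stitch_pair(img_ref, img_mov):
          """Warp img_mov into the frame of img_ref via a RANSAC-estimated homography."""
          orb = cv2.ORB_create(2000)
          k1, d1 = orb.detectAndCompute(img_ref, None)
          k2, d2 = orb.detectAndCompute(img_mov, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
          src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          h, w = img_ref.shape[:2]
          canvas = cv2.warpPerspective(img_mov, H, (2 * w, h))  # simple side-by-side canvas
          canvas[0:h, 0:w] = img_ref                            # naive overwrite, no blending
          return canvas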

  2. An Automatic Procedure for Combining Digital Images and Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Moussa, W.; Abdel-Wahab, M.; Fritsch, D.

    2012-07-01

    Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution, and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts with a marker-free registration of digital images based on a point-based environment model (PEM) of a scene, which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Prior to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Applying the determined transformation parameters then yields absolutely oriented images in relation to the laser scanner data. With the resulting absolute orientations we have employed robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete, detailed representation of a scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.

  3. Fast radioactive seed localization in intraoperative cone beam CT for low-dose-rate prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Hu, Yu-chi; Xiong, Jian-ping; Cohan, Gilad; Zaider, Marco; Mageras, Gig; Zelefsky, Michael

    2013-03-01

    A fast knowledge-based radioactive seed localization method for brachytherapy was developed to automatically localize radioactive seeds in an intraoperative volumetric cone beam CT (CBCT) so that corrections, if needed, can be made during prostate implant surgery. A transrectal ultrasound (TRUS) scan is acquired for intraoperative treatment planning. Planned seed positions are transferred to the intraoperative CBCT following TRUS-to-CBCT registration, using a reference CBCT scan of the TRUS probe as a template in which the probe and its external fiducial markers are pre-segmented and their positions in TRUS are known. The transferred planned seeds and probe serve as an atlas to reduce the search space in CBCT. Candidate seed voxels are identified based on image intensity. Regions are grown from candidate voxels and overlapping regions are merged. Region volume and intensity variance are checked against the known seed volume and intensity profile. Regions meeting the above criteria are flagged as detected seeds; otherwise they are flagged as likely seeds and sorted by a score based on volume, intensity profile, and distance to the closest planned seed. A graphical interface allows users to review and accept or reject likely seeds. Likely seeds with approximately twice the seed volume are automatically split. Five clinical cases were tested. Without any manual correction in seed detection, the method performed the localization in 5 seconds (excluding registration time) for a CBCT scan with 512×512×192 voxels. The average precision rate per case is 99% and the recall rate is 96% for a total of 416 seeds. All false negative seeds were found, with 15 among the likely seeds and 1 included in a detected seed. With the new method, the dose distribution can be updated during the procedure, facilitating evaluation and improvement of treatment quality.
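
    The intensity-and-volume screening of candidate regions can be pictured with the sketch below, which thresholds the CBCT, labels connected components, and keeps regions whose volume is close to the nominal seed volume. The threshold, seed volume, and tolerances are illustrative placeholders, not the calibrated profile used in the paper.

      import numpy as np
      from scipy import ndimage

      def detect_seed_candidates(cbct, intensity_thresh, voxel_volume_mm3,
                                 seed_volume_mm3=4.0, tol=0.5):
          """Return labels of detected seeds, labels of 'likely' seeds, and detected centroids."""
          mask = cbct > intensity_thresh
          labels, n = ndimage.label(mask)
          sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))   # voxels per region
          volumes = sizes * voxel_volume_mm3
          detected, likely = [], []
          for lab, vol in zip(range(1, n + 1), volumes):
              if abs(vol - seed_volume_mm3) <= tol * seed_volume_mm3:
                  detected.append(lab)
              elif vol > 0.25 * seed_volume_mm3:
                  likely.append(lab)          # left for the user to review, as in the paper
          centroids = ndimage.center_of_mass(cbct, labels, detected) if detected else []
          return detected, likely, centroids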

  4. Automatic localization of the nipple in mammograms using Gabor filters and the Radon transform

    NASA Astrophysics Data System (ADS)

    Chakraborty, Jayasree; Mukhopadhyay, Sudipta; Rangayyan, Rangaraj M.; Sadhu, Anup; Azevedo-Marques, P. M.

    2013-02-01

    The nipple is an important landmark in mammograms. Detection of the nipple is useful for alignment and registration of mammograms in computer-aided diagnosis of breast cancer. In this paper, a novel approach is proposed for automatic detection of the nipple based on the oriented patterns of the breast tissues present in mammograms. The Radon transform is applied to the oriented patterns obtained by a bank of Gabor filters to detect the linear structures related to the tissue patterns. The detected linear structures are then used to locate the nipple position using the characteristics of convergence of the tissue patterns towards the nipple. The performance of the method was evaluated with 200 scanned-film images from the mini-MIAS database and 150 digital radiography (DR) images from a local database. Average errors of 5.84 mm and 6.36 mm were obtained with respect to the reference nipple location marked by a radiologist for the mini-MIAS and the DR images, respectively.
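
    The first two stages, Gabor-based enhancement of oriented tissue patterns followed by a Radon transform to pick out strong linear structures, can be sketched with scikit-image as below. The filter frequency and angle sampling are arbitrary choices for illustration, and the final convergence analysis that actually yields the nipple position is not shown.

      import numpy as np
      from skimage.filters import gabor
      from skimage.transform import radon

      def dominant_linear_structure(image, n_orientations=8):
          """Return the Gabor orientation-strength map and the strongest Radon peak."""
          thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
          responses = [np.abs(gabor(image, frequency=0.1, theta=t)[0]) for t in thetas]
          oriented = np.max(np.stack(responses), axis=0)   # strongest response per pixel
          angles = np.linspace(0.0, 180.0, 90, endpoint=False)
          sinogram = radon(oriented, theta=angles, circle=False)
          rho_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
          return oriented, angles[angle_idx], rho_idx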

  5. Automatic categorization of anatomical landmark-local appearances based on diffeomorphic demons and spectral clustering for constructing detector ensembles.

    PubMed

    Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni

    2012-01-01

    A method for categorizing landmark-local appearances extracted from computed tomography (CT) datasets is presented. Anatomical landmarks in the human body inevitably have inter-individual variations that cause difficulty in automatic landmark detection processes. The goal of this study is to categorize subjects (i.e., training datasets) according to local shape variations of such a landmark so that each subgroup has less shape variation and thus the machine learning of each landmark detector is much easier. The similarity between each subject pair is measured based on the non-rigid registration result between them. These similarities are used by the spectral clustering process. After the clustering, all training datasets in each cluster, as well as synthesized intermediate images calculated from all subject-pairs in the cluster, are used to train the corresponding subgroup detector. All of these trained detectors compose a detector ensemble to detect the target landmark. Evaluation with clinical CT datasets showed great improvement in the detection performance.

  6. Automatic co-registration of 3D multi-sensor point clouds

    NASA Astrophysics Data System (ADS)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.

  7. Benefit Analyses of Technologies for Automatic Identification to Be Implemented in the Healthcare Sector

    NASA Astrophysics Data System (ADS)

    Krey, Mike; Schlatter, Ueli

    The tasks and objectives of automatic identification (Auto-ID) are to provide information on goods and products. Auto-ID has been established for years in the areas of logistics and trading and can no longer be ignored by the German healthcare sector. Some German hospitals have already discovered the capabilities of Auto-ID. Improvements in quality and safety, and reductions in risk, cost, and time, are areas where gains are achievable. Privacy protection, legal restraints, and the personal rights of patients and staff members are just a few aspects which make the healthcare sector a sensitive field for the implementation of Auto-ID. Auto-ID in this context comprises the different technologies, methods, and products for the registration, provision, and storage of relevant data. With the help of a quantifiable and science-based evaluation, an answer is sought as to which Auto-ID has the highest capability to be implemented in the healthcare business.

  8. Automated robust registration of grossly misregistered whole-slide images with varying stains

    NASA Astrophysics Data System (ADS)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slides to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.

  9. Clinical evaluation of multi-atlas based segmentation of lymph node regions in head and neck and prostate cancer patients.

    PubMed

    Sjöberg, Carl; Lundmark, Martin; Granberg, Christoffer; Johansson, Silvia; Ahnesjö, Anders; Montelius, Anders

    2013-10-03

    Semi-automated segmentation using deformable registration of selected atlas cases consisting of expert segmented patient images has been proposed to facilitate the delineation of lymph node regions for three-dimensional conformal and intensity-modulated radiotherapy planning of head and neck and prostate tumours. Our aim is to investigate if fusion of multiple atlases will lead to clinical workload reductions and more accurate segmentation proposals compared to the use of a single atlas segmentation, due to a more complete representation of the anatomical variations. Atlases for lymph node regions were constructed using 11 head and neck patients and 15 prostate patients based on published recommendations for segmentations. A commercial registration software (Velocity AI) was used to create individual segmentations through deformable registration. Ten head and neck patients, and ten prostate patients, all different from the atlas patients, were randomly chosen for the study from retrospective data. Each patient was first delineated three times, (a) manually by a radiation oncologist, (b) automatically using a single atlas segmentation proposal from a chosen atlas and (c) automatically by fusing the atlas proposals from all cases in the database using the probabilistic weighting fusion algorithm. In a subsequent step a radiation oncologist corrected the segmentation proposals achieved from step (b) and (c) without using the result from method (a) as reference. The time spent for editing the segmentations was recorded separately for each method and for each individual structure. Finally, the Dice Similarity Coefficient and the volume of the structures were used to evaluate the similarity between the structures delineated with the different methods. For the single atlas method, the time reduction compared to manual segmentation was 29% and 23% for head and neck and pelvis lymph nodes, respectively, while editing the fused atlas proposal resulted in time reductions of 49% and 34%. The average volume of the fused atlas proposals was only 74% of the manual segmentation for the head and neck cases and 82% for the prostate cases due to a blurring effect from the fusion process. After editing of the proposals the resulting volume differences were no longer statistically significant, although a slight influence by the proposals could be noticed since the average edited volume was still slightly smaller than the manual segmentation, 9% and 5%, respectively. Segmentation based on fusion of multiple atlases reduces the time needed for delineation of lymph node regions compared to the use of a single atlas segmentation. Even though the time saving is large, the quality of the segmentation is maintained compared to manual segmentation.
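
    The fusion and evaluation steps can be pictured with the short sketch below: propagated atlas label masks are fused by a per-voxel majority vote (a simple stand-in for the probabilistic weighting fusion used in the study), and agreement with the manual segmentation is scored with the Dice similarity coefficient.

      import numpy as np

      def fuse_labels_majority(atlas_masks):
          """Fuse binary masks from several registered atlases by per-voxel majority vote."""
          stack = np.stack(atlas_masks).astype(np.float32)
          return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

      def dice(a, b):
          """Dice similarity coefficient between two binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0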

  10. Effect of Non-rigid Registration Algorithms on Deformation Based Morphometry: A Comparative Study with Control and Williams Syndrome Subjects

    PubMed Central

    Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.

    2014-01-01

    Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439

  11. Evaluation of a deformable registration algorithm for subsequent lung computed tomography imaging during radiochemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stützer, Kristin; Haase, Robert; Exner, Florian

    2016-09-15

    Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.

  12. Automatic segmentation of cortical vessels in pre- and post-tumor resection laser range scan images

    NASA Astrophysics Data System (ADS)

    Ding, Siyi; Miga, Michael I.; Thompson, Reid C.; Garg, Ishita; Dawant, Benoit M.

    2009-02-01

    Measurement of intra-operative cortical brain movement is necessary to drive mechanical models developed to predict sub-cortical shift. At our institution, this is done with a tracked laser range scanner. This device acquires both 3D range data and 2D photographic images. 3D cortical brain movement can be estimated if 2D photographic images acquired over time can be registered. Previously, we developed a method which permits this registration using vessels visible in the images. However, vessel segmentation required the localization of starting and ending points for each vessel segment. Here, we propose a method which automates the segmentation process further. This method involves several steps: (1) correction of lighting artifacts, (2) vessel enhancement, and (3) extraction of vessel centerlines. Results obtained on 5 images acquired in the operating room suggest that our method is robust and able to segment vessels reliably.

  13. Augmented reality in laparoscopic surgical oncology.

    PubMed

    Nicolau, Stéphane; Soler, Luc; Mutter, Didier; Marescaux, Jacques

    2011-09-01

    Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty since depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted by an instrument. These drawbacks can currently be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can increase the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image without the need for a pre-processing step thanks to volume rendering. But better results are obtained with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology. They also show the currently limited interactivity due to soft organ movements and interactions between surgical instruments and organs. Although current automatic AR systems show the feasibility of such an approach, they still rely on specific and expensive equipment which is not available in clinical routine. Moreover, they are not robust enough due to the high complexity of developing a real-time registration taking organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that AR will become a standard requirement for future computer-assisted surgical oncology. In this article, we will explain the concept of AR and its principles. Then, we will review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we will discuss the future evolutions and the issues that still have to be tackled so that this technology can be seamlessly integrated in the operating room. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Mutual-information-based image to patient re-registration using intraoperative ultrasound in image-guided neurosurgery

    PubMed Central

    Ji, Songbai; Wu, Ziji; Hartov, Alex; Roberts, David W.; Paulsen, Keith D.

    2008-01-01

    An image-based re-registration scheme has been developed and evaluated that uses fiducial registration as a starting point to maximize the normalized mutual information (nMI) between intraoperative ultrasound (iUS) and preoperative magnetic resonance images (pMR). We show that this scheme significantly (p ≪ 0.001) reduces tumor boundary misalignment between iUS pre-durotomy and pMR from an average of 2.5 mm to 1.0 mm in six resection surgeries. The corrected tumor alignment before dural opening provides a more accurate reference for assessing subsequent intraoperative tumor displacement, which is important for brain shift compensation as surgery progresses. In addition, we report the translational and rotational capture ranges necessary for successful convergence of the nMI registration technique (5.9 mm and 5.2 deg, respectively). The proposed scheme is automatic, sufficiently robust, and computationally efficient (<2 min), and holds promise for routine clinical use in the operating room during image-guided neurosurgical procedures. PMID:18975707
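
    The similarity measure being maximized here is the normalized mutual information, NMI = (H(iUS) + H(pMR)) / H(iUS, pMR), estimated from a joint intensity histogram of the overlapping voxels. A minimal sketch of that computation is shown below (histogram size and inputs are arbitrary, and the optimization over rigid transforms is not included).

      import numpy as np

      def normalized_mutual_information(a, b, bins=64):
          """Studholme's NMI of two aligned image arrays of equal shape."""
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          def entropy(p):
              p = p[p > 0]
              return -np.sum(p * np.log(p))
          return (entropy(px) + entropy(py)) / entropy(pxy.ravel())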

  15. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis including CT imaging can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening: sensitivity improved from 42.19% to 98.46%, while the accuracy of the feature-based mapping is only slightly higher (84.38% versus 76.19%).
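
    One way to picture the PCA-based mapping is sketched below: each thickening is reduced to a pose-robust signature (its centroid plus principal-axis lengths from a PCA of its voxel coordinates), and thickenings from the two time points are paired greedily by nearest signature. This is an illustrative reading of the approach, not the authors' exact implementation.

      import numpy as np

      def pca_signature(voxel_coords_mm):
          """Centroid plus principal-axis lengths of one thickening's voxel coordinates."""
          centroid = voxel_coords_mm.mean(axis=0)
          cov = np.cov((voxel_coords_mm - centroid).T)
          eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
          return np.concatenate([centroid, np.sqrt(np.maximum(eigvals, 0))])

      def match_thickenings(signatures_t0, signatures_t1):
          """Greedily pair each baseline thickening with its nearest follow-up signature."""
          matches = []
          for i, s0 in enumerate(signatures_t0):
              distances = [np.linalg.norm(s0 - s1) for s1 in signatures_t1]
              matches.append((i, int(np.argmin(distances))))
          return matches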

  16. Evaluating characteristics of PROSPERO records as predictors of eventual publication of non-Cochrane systematic reviews: a meta-epidemiological study protocol.

    PubMed

    Ruano, Juan; Gómez-García, Francisco; Gay-Mimbrera, Jesús; Aguilar-Luque, Macarena; Fernández-Rueda, José Luis; Fernández-Chaichio, Jesús; Alcalde-Mellado, Patricia; Carmona-Fernandez, Pedro J; Sanz-Cabanillas, Juan Luis; Viguera-Guerra, Isabel; Franco-García, Francisco; Cárdenas-Aranzana, Manuel; Romero, José Luis Hernández; Gonzalez-Padilla, Marcelino; Isla-Tejera, Beatriz; Garcia-Nieto, Antonio Velez

    2018-03-09

    Epidemiology and the reporting characteristics of systematic reviews (SRs) and meta-analyses (MAs) are well known. However, no study has analyzed the influence of protocol features on the probability that a study's results will be finally reported, thereby indirectly assessing the reporting bias of International Prospective Register of Systematic Reviews (PROSPERO) registration records. The objective of this study is to explore which factors are associated with a higher probability that results derived from a non-Cochrane PROSPERO registration record for a systematic review will be finally reported as an original article in a scientific journal. The PROSPERO repository will be web scraped to automatically and iteratively obtain all completed non-Cochrane registration records stored from February 2011 to December 2017. Downloaded records will be screened, and those with less than 90% fulfilled or are duplicated (i.e., those sharing titles and reviewers) will be excluded. Manual and human-supervised automatic methods will be used for data extraction, depending on the data source (fields of PROSPERO registration records, bibliometric databases, etc.). Records will be classified into published, discontinued, and abandoned review subgroups. All articles derived from published reviews will be obtained through multiple parallel searches using the full protocol "title" and/or "list reviewers" in MEDLINE/PubMed databases and Google Scholar. Reviewer, author, article, and journal metadata will be obtained using different sources. R and Python programming and analysis languages will be used to describe the datasets; perform text mining, machine learning, and deep learning analyses; and visualize the data. We will report the study according to the recommendations for meta-epidemiological studies adapted from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement for SRs and MAs. This meta-epidemiological study will explore, for the first time, characteristics of PROSPERO records that may be associated with the publication of a completed systematic review. The evidence may help to improve review workflow performance in terms of research topic selection, decision-making regarding team selection, planning relationships with funding sources, implementing literature search strategies, and efficient data extraction and analysis. We expect to make our results, datasets, and R and Python code scripts publicly available during the third quarter of 2018.

  17. Automatic pattern localization across layout database and photolithography mask

    NASA Astrophysics Data System (ADS)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, it is mandatory to create the complete metrology job quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.

  18. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two types of data from different sensor sources. We use iPhone camera images, which are taken by the application user in front of the urban structure of interest, and high resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we apply a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh, which is generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancement purposes.

  19. Adaptive Registration of Varying Contrast-Weighted Images for Improved Tissue Characterization (ARCTIC): Application to T1 Mapping

    PubMed Central

    Roujol, Sébastien; Foppa, Murilo; Weingartner, Sebastian; Manning, Warren J.; Nezafat, Reza

    2014-01-01

    Purpose To propose and evaluate a novel non-rigid image registration approach for improved myocardial T1 mapping. Methods Myocardial motion is estimated as global affine motion refined by a novel local non-rigid motion estimation algorithm. A variational framework is proposed, which simultaneously estimates motion field and intensity variations, and uses an additional regularization term to constrain the deformation field using automatic feature tracking. The method was evaluated in 29 patients by measuring the DICE similarity coefficient (DSC) and the myocardial boundary error (MBE) in short axis and four chamber data. Each image series was visually assessed as “no motion” or “with motion”. Overall T1 map quality and motion artifacts were assessed in the 85 T1 maps acquired in short axis view using a 4-point scale (1-non diagnostic/severe motion artifact, 4-excellent/no motion artifact). Results Increased DSC (0.78±0.14 to 0.87±0.03, p<0.001), reduced MBE (1.29±0.72mm to 0.84±0.20mm, p<0.001), improved overall T1 map quality (2.86±1.04 to 3.49±0.77, p<0.001), and reduced T1 map motion artifacts (2.51±0.84 to 3.61±0.64, p<0.001) were obtained after motion correction of “with motion” data (~56% of data). Conclusion The proposed non-rigid registration approach reduces the respiratory-induced motion that occurs during breath-hold T1 mapping, and significantly improves T1 map quality. PMID:24798588

  20. Intermediate Templates Guided Groupwise Registration of Diffusion Tensor Images

    PubMed Central

    Jia, Hongjun; Yap, Pew-Thian; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2010-01-01

    Registration of a population of diffusion tensor images (DTIs) is one of the key steps in medical image analysis, and it plays an important role in the statistical analysis of white matter related neurological diseases. However, pairwise registration with respect to a pre-selected template may not give precise results if the selected template deviates significantly from the distribution of images. To cater for more accurate and consistent registration, a novel framework is proposed for groupwise registration with guidance from one or more intermediate templates determined from the population of images. Specifically, we first use a Euclidean distance, defined as a combinative measure based on the FA map and ADC map, for gauging the similarity of each pair of DTIs. A fully connected graph is then built with each node denoting an image and each edge denoting the distance between a pair of images. The root template image is determined automatically as the image with the overall shortest path length to all other images on the minimum spanning tree (MST) of the graph. Finally, a sequence of registration steps is applied to progressively warp each image towards the root template image with the help of intermediate templates distributed along its path to the root node on the MST. Extensive experimental results using diffusion tensor images of real subjects indicate that registration accuracy and fiber tract alignment are significantly improved, compared with the direct registration from each image to the root template image. PMID:20851197
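
    The root-template selection can be written compactly with scipy's graph routines, assuming the pairwise FA/ADC-based distances have already been assembled into a symmetric matrix (the original implementation is not specified in the abstract):

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

    def select_root_template(dist_matrix):
        """Index of the image with the shortest total path length to all others on the MST."""
        mst = minimum_spanning_tree(dist_matrix)     # tree edges stored in one direction
        mst = mst + mst.T                            # symmetrize for path queries
        path_len = shortest_path(mst, method="D")    # all-pairs distances along the tree
        return int(np.argmin(path_len.sum(axis=1)))

    # toy example with four images
    D = np.array([[0, 2, 9, 4],
                  [2, 0, 6, 3],
                  [9, 6, 0, 5],
                  [4, 3, 5, 0]], dtype=float)
    print("root template index:", select_root_template(D))
    ```

    Each remaining image is then warped step by step along its MST path, using the images on that path as intermediate templates.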

  1. Retinal slit lamp video mosaicking.

    PubMed

    De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael

    2016-06-01

    To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. For ophthalmologists, the use of slit lamp images for documentation and analysis purposes remains extremely challenging due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results of our method and of state-of-the-art methods were compared and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
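
    The pairwise registration step can be illustrated with OpenCV; the paper uses SURF features, which are patent-encumbered in stock OpenCV builds, so ORB is substituted here purely for illustration, and only the translation estimate (one graph edge of the mosaic) is shown:

    ```python
    import cv2
    import numpy as np

    def pairwise_translation(frame_a, frame_b, min_matches=10):
        """Estimate the dominant translation between two grayscale video frames."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(frame_a, None)
        kp_b, des_b = orb.detectAndCompute(frame_b, None)
        if des_a is None or des_b is None:
            return None
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
        if len(matches) < min_matches:
            return None
        src = np.float32([kp_a[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
        # RANSAC on a similarity model; only the translation column is kept
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                           ransacReprojThreshold=3.0)
        return None if M is None else M[:, 2]        # (dx, dy) in pixels
    ```

    In the full pipeline these pairwise shifts become edges of a pose graph that is globally optimized before feathering-based blending.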

  2. Geometry-aware multiscale image registration via OBBTree-based polyaffine log-demons.

    PubMed

    Seiler, Christof; Pennec, Xavier; Reyes, Mauricio

    2011-01-01

    Non-linear image registration is an important tool in many areas of image analysis. For instance, in morphometric studies of a population of brains, free-form deformations between images are analyzed to describe the structural anatomical variability. Such a simple deformation model is justified by the absence of an easily expressible prior about the shape changes. Applying the same algorithms used in brain imaging to orthopedic images might not be optimal due to the difference in the underlying prior on the inter-subject deformations. In particular, using an uninformed deformation prior often leads to local minima far from the expected solution. To improve robustness and promote anatomically meaningful deformations, we propose a locally affine and geometry-aware registration algorithm that automatically adapts to the data. We build upon the log-domain demons algorithm and introduce a new type of OBBTree-based regularization in the registration with a natural multiscale structure. The regularization model is composed of a hierarchy of locally affine transformations via their logarithms. Experiments on mandibles show improved accuracy and robustness when used to initialize the demons, and similar performance in direct comparison to the demons, with significantly fewer degrees of freedom. This closes the gap between polyaffine and non-rigid registration and opens new ways to statistically analyze the registration results.

  3. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara; Peroni, Marta; Baroni, Guido

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT, providing a motion description comparable to expert manual identification, as confirmed by DIR. Conclusions: The application of the method to a 4D lung CT patient dataset demonstrated adaptive-SIFT potential as an automatic tool to detect landmarks for DIR regularization and internal motion quantification. Future works should include the optimization of the computational cost and the application of the method to other anatomical sites and image modalities.

  4. SU-E-J-119: What Effect Have the Volume Defined in the Alignment Clipbox for Cervical Cancer Using Automatic Registration Methods for Cone-Beam CT Verification?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, W; Yang, H; Wang, Y

    2014-06-01

    Purpose: To investigate the impact of different clipbox volumes with automated registration techniques using commercially available software with on-board volumetric imaging (OBI) for treatment verification in cervical cancer patients. Methods: Fifty cervical cancer patients who received daily CBCT scans (on-board imaging v1.5 system, Varian Medical Systems) during the first treatment week and weekly thereafter were included in this analysis. A total of 450 CBCT scans were registered to the planning CT scan using a pelvic clipbox (clipbox-Pelvic) and a clipbox around the PTV (clipbox-PTV). The translation (anterior-posterior, left-right, superior-inferior) and rotation (yaw, pitch and roll) errors for each match were recorded. The setup errors and the systematic and random errors for both of the clipboxes were calculated. A paired-samples t test was used to analyze the differences between clipbox-Pelvic and clipbox-PTV. Results: The SD of the systematic error (σ) was 1.0 mm, 2.0 mm, 3.2 mm and 1.9 mm, 2.3 mm, 3.0 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. The average random error (Σ) was 1.7 mm, 2.0 mm, 4.2 mm and 1.7 mm, 3.4 mm, 4.4 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. However, only in the SI direction were significant differences found between the two image registration volumes (p=0.002 and p=0.01 for mean and SD). For rotations, significant differences between clipbox-Pelvic and clipbox-PTV were found for the yaw mean/SD and the pitch SD. Conclusion: The volume defined for image registration is important for cervical cancer when a 3D/3D match is used. The alignment clipbox can affect the setup errors obtained. Further analysis is needed to determine the optimal volume to use for image registration in cervical cancer. Conflict of interest: none.

  5. Automatic Coregistration for Multiview SAR Images in Urban Areas

    NASA Astrophysics Data System (ADS)

    Xiang, Y.; Kang, W.; Wang, F.; You, H.

    2017-09-01

    Due to the high resolution property and the side-looking mechanism of SAR sensors, complex building structures make the registration of SAR images in urban areas very hard. In order to solve the problem, an automatic and robust coregistration approach for multiview high resolution SAR images is proposed in this paper, which consists of three main modules. First, both the reference image and the sensed image are segmented into two parts, urban areas and non-urban areas. Urban areas caused by double or multiple scattering in a SAR image tend to show higher local mean and local variance values compared with general homogeneous regions due to the complex structural information. Based on this criterion, building areas are extracted. After obtaining the target regions, L-shape structures are detected using the SAR phase congruency model and the Hough transform. The double bounce scatterings formed by wall and ground are shown as strong L- or T-shapes, which are usually taken as the most reliable indicator for building detection. Under the assumption that buildings are rectangular and flat models, planimetric buildings are delineated using the L-shapes, and the reconstructed target areas are obtained. For the original areas and the reconstructed target areas, the SAR-SIFT matching algorithm is implemented. Finally, correct corresponding points are extracted by the fast sample consensus (FSC) and the transformation model is derived. The experimental results on a pair of multiview TerraSAR images with 1-m resolution show that the proposed approach gives a robust and precise registration performance compared with the original SAR-SIFT method.

  6. WE-AB-BRA-12: Post-Implant Dosimetry in Prostate Brachytherapy by X-Ray and MRI Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Song, D; Lee, J

    Purpose: For post-implant dosimetric assessment after prostate brachytherapy, a CT-MR fusion approach has been advocated due to its superior accuracy in both seed localization and soft tissue delineation. However, CT deposits additional radiation to the patient, and seed identification in CT requires manual review and correction. In this study, we propose an accurate, low-dose, and cost-effective post-implant dosimetry approach based on X-ray and MRI. Methods: Implanted seeds are reconstructed using only three X-ray fluoroscopy images by solving a combinatorial optimization problem. The reconstructed seeds are then registered to MR images using an intensity-based points-to-volume registration. MR images are first pre-processed by geometric and Gaussian filtering, yielding smooth candidate seed-only images. To accommodate potential soft tissue deformation, our registration is performed in two steps, an initial affine registration followed by local deformable registration. An evolutionary optimizer in conjunction with a points-to-volume similarity metric is used for the affine registration. Local prostate deformation and seed migration are then adjusted by the deformable registration step with external and internal force constraints. Results: We tested our algorithm on twenty patient data sets. For quantitative evaluation, we obtained ground truth seed positions by fusing the post-implant CT-MR images. Seeds were semi-automatically extracted from CT, manually corrected, and then registered to the MR images. Target registration error (TRE) was computed by measuring the Euclidean distances from the ground truth to the closest registered X-ray seeds. The overall TREs (mean±standard deviation in mm) are 1.6±1.1 (affine) and 1.3±0.8 (affine+deformable). The overall computation takes less than 1 minute. Conclusion: It has been reported that the CT-based seed localization error is ∼1.6 mm and that a seed localization uncertainty of 2 mm results in less than 5% deviation of prostate D90. The average error of 1.3 mm with our system outperforms the CT-based approach and is considered well within the clinically acceptable limit. Supported in part by NIH/NCI grant 5R01CA151395. The X-ray-based implant reconstruction method (US patent No. 8,233,686) was licensed to Acoustic MedSystems Inc.

  7. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies and captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras that is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
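
    The mutual information measure driving the automatic alignment of the ranging pole can be sketched in a few lines of numpy; the bin count is an illustrative choice rather than the authors' configuration:

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Mutual information between two equally sized grayscale images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    ```

    Registration then amounts to searching over candidate placements of the pole template in the camera image and keeping the placement that maximizes this score.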

  8. Automatic Extraction of Small Spatial Plots from Geo-Registered UAS Imagery

    NASA Astrophysics Data System (ADS)

    Cherkauer, Keith; Hearst, Anthony

    2015-04-01

    Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS) is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before the flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.
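
    Assuming a north-up affine geotransform for each mosaic, the plot extraction and checkpoint accuracy described above reduce to a few lines of numpy (the coordinate conventions below are illustrative, not the authors' exact implementation):

    ```python
    import numpy as np

    def extract_plot(mosaic, geotransform, xmin, ymin, xmax, ymax):
        """Slice a plot from a geo-registered mosaic using map coordinates.

        geotransform = (x_origin, pixel_width, y_origin, pixel_height), north-up image.
        """
        x0, dx, y0, dy = geotransform
        col0, col1 = int((xmin - x0) / dx), int((xmax - x0) / dx)
        row0, row1 = int((y0 - ymax) / dy), int((y0 - ymin) / dy)
        return mosaic[row0:row1, col0:col1]

    def horizontal_rmse(measured_xy, true_xy):
        """Horizontal RMSE of checkpoint targets; inputs are (N, 2) arrays of map coordinates."""
        err = np.asarray(measured_xy) - np.asarray(true_xy)
        return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
    ```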

  9. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, Y; Sharp, G

    Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of the PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 (mm) across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 (mm). The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated. Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.
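
    The distal WEPL itself is a line integral of relative stopping power along each ray; a simplified numpy sketch is shown below, with a placeholder HU-to-stopping-power calibration rather than a clinical one:

    ```python
    import numpy as np

    # placeholder HU -> relative stopping power calibration (piecewise linear)
    HU_SAMPLES = np.array([-1000.0, 0.0, 1000.0, 3000.0])
    RSP_SAMPLES = np.array([0.001, 1.0, 1.5, 2.5])

    def wepl_along_ray(hu_profile, step_mm):
        """WEPL of a ray sampled at regular steps from the beam entrance to the distal PTV edge.

        hu_profile : 1D array of HU values along the ray
        step_mm    : geometric spacing between consecutive samples in mm
        """
        rsp = np.interp(hu_profile, HU_SAMPLES, RSP_SAMPLES)
        return float(np.sum(rsp) * step_mm)

    # per-fraction range variation: difference of WEPLs along matched rays
    # after the 6-DOF rigid setup correction, e.g. wepl_cbct - wepl_plan
    ```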

  10. Plane-Based Registration of Several Thousand Laser Scans on Standard Hardware

    NASA Astrophysics Data System (ADS)

    Wujanz, D.; Schaller, S.; Gielsdorf, F.; Gründig, L.

    2018-05-01

    The automatic registration of terrestrial laser scans appears to be a solved problem in science as well as in practice. However, this assumption is questionable especially in the context of large projects where an object of interest is described by several thousand scans. A critical issue inherently linked to this task is memory management especially if cloud-based registration approaches such as the ICP are being deployed. In order to process even thousands of scans on standard hardware a plane-based registration approach is applied. As a first step planar features are detected within the unregistered scans. This step drastically reduces the amount of data that has to be handled by the hardware. After determination of corresponding planar features a pairwise registration procedure is initiated based on a graph that represents topological relations among all scans. For every feature individual stochastic characteristics are computed that are consequently carried through the algorithm. Finally, a block adjustment is carried out that minimises the residuals between redundantly captured areas. The algorithm is demonstrated on a practical survey campaign featuring a historic town hall. In total, 4853 scans were registered on a standard PC with four processors (3.07 GHz) and 12 GB of RAM.

  11. Automatic lesion tracking for a PET/CT based computer aided cancer therapy monitoring system

    NASA Astrophysics Data System (ADS)

    Opfer, Roland; Brenner, Winfried; Carlsen, Ingwer; Renisch, Steffen; Sabczynski, Jörg; Wiemker, Rafael

    2008-03-01

    Response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem. It can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the base-line PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We have validated our method on a data collection from 7 patients. Each patient underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. As a result, the automatic detection of the corresponding lesions produced SUV measurements which are nearly identical to the manually measured SUVs. Between the 38 maximum SUVs derived from manually and automatically detected lesions we observed a correlation of 0.9994 and an average error of 0.4 SUV units.

  12. SU-F-J-34: Automatic Target-Based Patient Positioning Framework for Image-Guided Radiotherapy in Prostate Cancer Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasahara, M; Arimura, H; Hirose, T

    Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure might cause misalignment of the patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework started from the generation of probabilistic atlases of bone and prostate from 24 planning CT images and prostate contours, which were made in the treatment planning. Next, the gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to CBCT images with the probabilistic atlases and the CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance of errors between two centroids of prostate regions determined by our method and ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for 10 patients. Results: The average Euclidean distance between the centroids of extracted prostate regions determined by our proposed method and ground truths was 4.4 mm. The average errors for each direction were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction and 2.1 mm in the craniocaudal direction. Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference might be feasible to automatically determine prostate regions on CBCT images.
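
    The Bayesian step amounts to multiplying a voxel-wise atlas prior by an intensity likelihood learned from the planning-CT histograms; the toy numpy sketch below uses placeholder bins and threshold, not the authors' settings:

    ```python
    import numpy as np

    def estimate_prostate_mask(cbct, atlas_prob, value_bins, value_prob, threshold=0.5):
        """Voxel-wise posterior of 'prostate' from an intensity likelihood and an atlas prior.

        cbct       : 3D array of CBCT values, already rigidly registered to the atlas
        atlas_prob : 3D array, prior probability of prostate at each voxel
        value_bins : 1D array of bin edges for CBCT values
        value_prob : 1D array, P(CBCT value | prostate) per bin
        """
        bin_idx = np.clip(np.digitize(cbct, value_bins) - 1, 0, len(value_prob) - 1)
        likelihood = value_prob[bin_idx]
        posterior = likelihood * atlas_prob            # unnormalized Bayes rule per voxel
        posterior /= posterior.max() + 1e-12
        return posterior > threshold
    ```

    The centroid of the resulting mask could then be compared with the planned target position to drive a target-based setup correction.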

  13. Tumor growth model for atlas based registration of pathological brain MR images

    NASA Astrophysics Data System (ADS)

    Moualhi, Wafa; Ezzeddine, Zagrouba

    2015-02-01

    The motivation of this work is to register a tumor brain magnetic resonance (MR) image with a normal brain atlas. A normal brain atlas is deformed in order to take into account the presence of a large space-occupying tumor. The method uses an a priori model of tumor growth assuming that the tumor grows in a radial way from a starting point. First, an affine transformation is used in order to bring the patient image and the brain atlas into global correspondence. Second, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. Finally, the seeded atlas is deformed combining a method derived from optical flow principles and a model for tumor growth (MTG). Results show that an automatic segmentation of brain structures can be provided even in the presence of large deformations.

  14. Automatic Co-Registration of Multi-Temporal Landsat-8/OLI and Sentinel-2A/MSI Images

    NASA Technical Reports Server (NTRS)

    Skakun, S.; Roger, J.-C.; Vermote, E.; Justice, C.; Masek, J.

    2017-01-01

    Many applications in climate change and environmental and agricultural monitoring rely heavily on the exploitation of multi-temporal satellite imagery. Combined use of freely available Landsat-8 and Sentinel-2 images can offer high temporal frequency of about 1 image every 3-5 days globally.

  15. 75 FR 10206 - Codex Alimentarius Commission: Meeting of the Codex Committee on Contaminants in Food

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-05

    .... Early registration is encouraged because it will expedite entry into the building and its parking area. If you require parking, please include the vehicle make and tag number, if known, when you register... service which provides automatic and customized access to selected food safety news and information. This...

  16. 75 FR 4523 - Codex Alimentarius Commission: Meeting of the Codex Committee on Food Additives

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-28

    ... registration is encouraged because it will expedite entry into the building and its parking area. If you require parking, please include the vehicle make and tag number when you register. Because the meeting... provides automatic and customized access to selected food safety news and information. This service is...

  17. SAR/LANDSAT image registration study

    NASA Technical Reports Server (NTRS)

    Murphrey, S. W. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Temporal registration of synthetic aperture radar data with LANDSAT-MSS data is both feasible (from a technical standpoint) and useful (from an information-content viewpoint). The greatest difficulty in registering aircraft SAR data to corrected LANDSAT-MSS data is control-point location. The differences in SAR and MSS data affect the selection of features that will serve as good control points. The SAR and MSS data are unsuitable for automatic computer correlation of digital control-point data. The gray-level data cannot be compared by the computer because of the different response characteristics of the MSS and SAR images.

  18. Co-Registration of DSMs Generated by Uav and Terrestrial Laser Scanning Systems

    NASA Astrophysics Data System (ADS)

    Ancil Persad, Ravi; Armenakis, Costas

    2016-06-01

    An approach for the co-registration of Digital Surface Models (DSMs) derived from Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) is proposed. Specifically, a wavelet-based feature descriptor for matching surface keypoints on the 2.5D DSMs is developed. DSMs are useful in a wide scope of applications such as 3D building modelling and reconstruction, cultural heritage, urban and environmental planning, aircraft navigation/path routing, accident and crime scene reconstruction, mining, as well as topographic map revision and change detection. For these applications, it is not uncommon that there will be a need for automatically aligning multi-temporal DSMs which may have been acquired from multiple sensors, with different specifications, over a period of time, and may have various overlaps. Terrestrial laser scanners usually capture urban facades in an accurate manner; however, this is not the case for building roof structures. On the other hand, vertical photography from UAVs can capture the roofs. Therefore, the automatic fusion of UAV and laser-scanning based DSMs is addressed here as it serves various geospatial applications.

  19. An accuracy assessment of different rigid body image registration methods and robotic couch positional corrections using a novel phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugam, Sankar; Xing Aitang; Jameson, Michael G.

    2013-03-15

    Purpose: Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software including a 6 degrees of freedom patient positioning system and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top. Methods: The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system. Results: The phantom positioning was found to be within 0.04 (σ = 0.12)°, 0.02 (σ = 0.13)°, and -0.03 (σ = 0.06)° in the X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets, with a maximum mean difference of -0.2 (σ = 0.4) mm in translational and -0.1 (σ = 0.1)° in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution. Seed registration was found to be most sensitive, demonstrating a maximum mean error of -0.3 (σ = 0.9) mm and -1.4 (σ = 1.7)° in translational and rotational directions over low resolution images, reduced to -0.1 (σ = 0.2) mm and -0.1 (σ = 0.79)° using high resolution images. Conclusions: The phantom, capable of rotating independently about three orthogonal axes, was successfully used to assess the accuracy of an IGRT system considering 6 degrees of freedom. The overall residual error in the image guidance process of XVI in combination with the HexaPOD couch was demonstrated to be less than 0.3 mm and 0.3° in translational and rotational directions when using the gray value registration with high resolution CBCT images. However, the residual error, especially in rotational directions, may increase when the seed registration is used with low resolution images.

  20. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use, the proposed method can either improve image registration accuracy or reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.

  1. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation.

    PubMed

    Daisne, Jean-François; Blumhofer, Andreas

    2013-06-26

    Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for "manual to automatic" and "manual to corrected" volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time saving but still necessitates review and corrections by an expert.

  2. 32 CFR 1615.1 - Registration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... registration card or other method of registration prescribed by the Director of Selective Service by a person... method of registration prescribed by the Director, he shall advise in writing the Selective Service System, P.O. Box 94638, Palatine, IL 60094-4638. (c) The methods of registration prescribed by the...

  3. Tools and Methods for the Registration and Fusion of Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline

    2010-01-01

    Tools and methods for image registration were reviewed. Methods for the registration of remotely sensed data at NASA were discussed. Image fusion techniques were reviewed. Challenges in registration of remotely sensed data were discussed. Examples of image registration and image fusion were given.

  4. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  5. SU-E-J-131: Augmenting Atlas-Based Segmentation by Incorporating Image Features Proximal to the Atlas Contours

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Liu, Li; Kapp, Daniel S.

    2015-06-15

    Purpose: To facilitate current automatic segmentation, in this work we propose a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. Methods: In setting up an atlas-based library, we include not only the coordinates of contour points, but also the image features adjacent to the contour. 139 planning CT scans with normal appearing livers obtained during radiotherapy treatment planning were used to construct the library. The CT images within the library were registered to each other using affine registration. A nonlinear narrow shell, with regional thickness determined by the distance between two vertices alongside the contour, was automatically constructed both inside and outside of the liver contours. The common image features within the narrow shell between a new case and a library case were first selected by a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the images of the new patient by exploiting the deformation field vectors. The liver contour was finally obtained by employing a level set based energy function within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by a physician. Results: Application of the technique to 30 liver cases suggested that the technique was capable of reliably segmenting organs such as the liver with little human intervention. Compared with the manual segmentation results by a physician, the average volumetric overlap percentage (VOP) was found to be 92.43% ± 2.14%. Conclusion: Incorporation of image features into the library contours improves the currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), Research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and Research funding from Jinan City (No.201401221 and No.20120109)

  6. SU-F-J-171: Robust Atlas Based Segmentation of the Prostate and Peripheral Zone Regions On MRI Utilizing Multiple MRI System Vendors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padgett, K; Pollack, A; Stoyanova, R

    Purpose: Automatically generated prostate MRI contours can be used to aid in image registration with CT or ultrasound and to reduce the burden of contouring for radiation treatment planning. In addition, prostate and zonal contours can assist in automating quantitative imaging feature extraction and the analysis of longitudinal MRI studies. These potential gains are limited if the solutions are not compatible across different MRI vendors. The goal of this study is to characterize an atlas based automatic segmentation procedure of the prostate collected on MRI systems from multiple vendors. Methods: The prostate and peripheral zone (PZ) were manually contoured by an expert radiation oncologist on T2-weighted scans acquired on both GE (n=31) and Siemens (n=33) 3T MRI systems. A leave-one-out approach was utilized in which the target subject is removed from the atlas before the segmentation algorithm is initiated. The atlas-segmentation method finds the best nine matched atlas subjects and then performs a normalized intensity-based free-form deformable registration of these subjects to the target subject. These nine contours are then merged into a single contour using Simultaneous Truth and Performance Level Estimation (STAPLE). Contour comparisons were made using Dice similarity coefficients (DSC) and Hausdorff distances. Results: Using the T2 FatSat (FS) GE datasets, the atlas generated contours resulted in an average DSC of 0.83±0.06 for prostate, 0.57±0.12 for PZ and 0.75±0.09 for CG. Similar results were found when using the Siemens data, with a DSC of 0.79±0.14 for prostate, 0.54±0.16 for PZ and 0.70±0.9 for CG. For both vendors, the contrast between the prostate and surrounding anatomy and between the PZ and CG contours demonstrated superior contrast separation; significance was found for all comparisons (p-value < 0.0001). Conclusion: Atlas-based segmentation yielded promising results for all contours compared to expertly defined contours in both Siemens and GE 3T systems, providing fast and automatic segmentation of the prostate. Funding Support, Disclosures, and Conflict of Interest: AS Nelson is a partial owner of MIM Software, Inc. AS Nelson and A Swallen are current employees at MIM Software, Inc.

  7. [Automatic registration of patients in digital radiology facilities: dosimetric record].

    PubMed

    Ten Morón, J I; Vañó Carruana, E; Arrazola García, J

    2013-12-01

    There is a consensus in the international community regarding both the need for and benefits of systematic registration and planning of the dosage indicators in patients exposed to ionizing radiation. The main interest is in the registration and follow-up of the techniques and procedures that can involve the greatest risk from exposure to radiation. This register should be planned to include the structure and tools necessary to take the radiological safety of the patients into account, enabling the physicians requesting the studies to access the most important information in the register so they can appropriately justify the request for additional studies. Likewise, it should be considered a priority to establish diagnostic reference levels for the different magnitudes that are defined in function of the modality and techniques used; this information is useful for the staff involved in procedures that use ionizing radiation. Copyright © 2013 SERAM. Published by Elsevier Espana. All rights reserved.

  8. Segmentation propagation for the automated quantification of ventricle volume from serial MRI

    NASA Astrophysics Data System (ADS)

    Linguraru, Marius George; Butman, John A.

    2009-02-01

    Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations; we (i) combined serial images to increase SNR, (ii) automatically segmented this image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.

  9. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
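
    A minimal PyTorch sketch of a patch-based 3D CNN error regressor of this kind is given below; the architecture, patch size, and two-channel input are illustrative and do not reproduce the published network or its training details:

    ```python
    import torch
    import torch.nn as nn

    class RegistrationErrorCNN(nn.Module):
        """Predicts a scalar registration error (mm) from a pair of 3D patches."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.regressor = nn.Linear(64, 1)

        def forward(self, fixed_patch, moving_patch):
            x = torch.cat([fixed_patch, moving_patch], dim=1)   # (N, 2, D, H, W)
            x = self.features(x).flatten(1)
            return self.regressor(x).squeeze(1)

    # toy forward pass on random 33x33x33 patches
    model = RegistrationErrorCNN()
    fixed, moving = torch.randn(4, 1, 33, 33, 33), torch.randn(4, 1, 33, 33, 33)
    print(model(fixed, moving).shape)   # torch.Size([4])
    ```

    Applying such a network to patches centered on every voxel yields the dense registration error map described in the abstract.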

  10. Automatic bone segmentation in knee MR images using a coarse-to-fine strategy

    NASA Astrophysics Data System (ADS)

    Park, Sang Hyun; Lee, Soochahn; Yun, Il Dong; Lee, Sang Uk

    2012-02-01

    Segmentation of bone and cartilage from a three dimensional knee magnetic resonance (MR) image is a crucial element in monitoring and understanding the development and progress of osteoarthritis. Until now, various segmentation methods have been proposed to separate the bone from other tissues, but it remains a challenging problem due to the different modalities of MR images, low contrast between bone and surrounding tissues, and shape irregularity. In this paper, we present a new fully automatic segmentation method for bone compartments using relevant bone atlases from a training set. To find the relevant bone atlases and obtain the segmentation, a coarse-to-fine strategy is proposed. In the coarse step, the best atlas among the training set and an initial segmentation are simultaneously detected using a branch and bound tree search. Since the best atlas in the coarse step is not accurately aligned, all atlases from the training set are aligned to the initial segmentation, and the best aligned atlas is selected in the middle step. Finally, in the fine step, segmentation is conducted by adaptively integrating the shape of the best aligned atlas and an appearance prior based on characteristics of local regions. For the experiment, femur and tibia bones of forty test MR images are segmented by the proposed method using sixty training MR images. Experimental results show that the performance of the segmentation and the registration improves from the coarse to the fine step, and that the proposed method obtains performance comparable with state-of-the-art methods.

  11. Full automatic fiducial marker detection on coil arrays for accurate instrumentation placement during MRI guided breast interventions

    NASA Astrophysics Data System (ADS)

    Filippatos, Konstantinos; Boehler, Tobias; Geisler, Benjamin; Zachmann, Harald; Twellmann, Thorsten

    2010-02-01

    With its high sensitivity, dynamic contrast-enhanced MR imaging (DCE-MRI) of the breast is today one of the first-line tools for early detection and diagnosis of breast cancer, particularly in the dense breast of young women. However, many relevant findings are very small or occult on targeted ultrasound images or mammography, so that MRI guided biopsy is the only option for a precise histological work-up [1]. State-of-the-art software tools for computer-aided diagnosis of breast cancer in DCE-MRI data also offer means for image-based planning of biopsy interventions. One step in the MRI guided biopsy workflow is the alignment of the patient position with the preoperative MR images. In these images, the location and orientation of the coil localization unit can be inferred from a number of fiducial markers, which for this purpose have to be manually or semi-automatically detected by the user. In this study, we propose a method for precise, fully automatic localization of fiducial markers, on the basis of which a virtual localization unit can subsequently be placed in the image volume for the purpose of determining the parameters for needle navigation. The method is based on adaptive thresholding for separating breast tissue from background, followed by rigid registration of marker templates. In an evaluation of 25 clinical cases comprising 4 different commercial coil array models and 3 different MR imaging protocols, the method yielded a sensitivity of 0.96 at a false positive rate of 0.44 markers per case. The mean distance deviation between detected fiducial centers and ground truth information provided by a radiologist was 0.94 mm.

  12. Integrating personalized medical test contents with XML and XSL-FO.

    PubMed

    Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas

    2011-03-01

    In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how the organizational challenges of realizing faculty-wide personalized tests were addressed by implementation of a specialized software module that automatically generates test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. The software module, using an open source print formatter, consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly, the module permitted individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale.
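
    A small Python sketch of steps (1) to (3) follows; the database schema, stylesheet name, and output paths are hypothetical, and Apache FOP is used only as an example of an open source XSL-FO formatter:

    ```python
    import random
    import sqlite3
    import subprocess
    import xml.etree.ElementTree as ET

    def build_test_sheet(db_path, exam_id, student_id):
        # (1) compile item contents from a relational database (hypothetical schema)
        con = sqlite3.connect(db_path)
        items = con.execute(
            "SELECT id, stem FROM mcq_items WHERE exam_id = ?", (exam_id,)
        ).fetchall()
        con.close()

        # (2) create an XML intermediate with an individually randomized item order
        random.shuffle(items)
        root = ET.Element("testsheet", student=str(student_id), exam=str(exam_id))
        for item_id, stem in items:
            ET.SubElement(root, "question", id=str(item_id)).text = stem
        xml_path = f"sheet_{student_id}.xml"
        ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

        # (3) transform to a paginated PDF with an XSL-FO processor such as Apache FOP
        subprocess.run(["fop", "-xml", xml_path, "-xsl", "sheet.xsl",
                        "-pdf", f"sheet_{student_id}.pdf"], check=True)
    ```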

  13. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  14. Modeling patterns of anatomical deformations in prostate patients undergoing radiation therapy with an endorectal balloon

    NASA Astrophysics Data System (ADS)

    Brion, Eliott; Richter, Christian; Macq, Benoit; Stützer, Kristin; Exner, Florian; Troost, Esther; Hölscher, Tobias; Bondar, Luiza

    2017-03-01

    External beam radiation therapy (EBRT) treats cancer by delivering daily fractions of radiation to a target volume. For prostate cancer, the target undergoes day-to-day variations in position, volume, and shape. For stereotactic photon and for proton EBRT, endorectal balloons (ERBs) can be used to limit these variations. To date, patterns of non-rigid variation for patients with an ERB have not been modeled. We extracted and modeled the patient-specific patterns of variation using regularly acquired CT images, non-rigid point cloud registration, and principal component analysis (PCA). For each patient, a non-rigid point-set registration method called Coherent Point Drift (CPD) was used to automatically generate landmark correspondences between all target shapes. To ensure accurate registrations, we tested and validated CPD by identifying parameter values leading to the smallest registration errors (surface matching error 0.13+/-0.09 mm). PCA demonstrated that 88+/-3.2% of the target motion could be explained using only 4 principal modes. The most dominant component of target motion is a squeezing and stretching in the anterior-posterior and superior-inferior directions. A PCA model of daily landmark displacements, generated using 6 to 10 CT scans, explained the target motion well for CT scans not included in the model (the modeling error decreased from 1.83+/-0.8 mm for 6 CT scans to 1.6+/-0.7 mm for 10 CT scans). The PCA modeling error was smaller than the naive approximation by the mean shape (approximation error 2.66+/-0.59 mm). Future work will investigate the use of the PCA model to improve the accuracy of EBRT techniques that are highly susceptible to anatomical variations, such as proton therapy.
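
    The PCA modeling step described above can be sketched as follows, assuming landmark correspondences (for example from CPD) are already available; array shapes and the number of retained modes are illustrative.

```python
# Minimal sketch: patient-specific PCA model of daily landmark positions and
# reconstruction of a new shape with the first k modes.
import numpy as np

def fit_displacement_pca(shapes, k=4):
    # shapes: array (n_days, n_landmarks, 3) of corresponding landmark positions.
    n_days = shapes.shape[0]
    flat = shapes.reshape(n_days, -1)            # one row per treatment day
    mean_shape = flat.mean(axis=0)
    centered = flat - mean_shape
    # SVD of the centered data gives the principal modes of deformation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return mean_shape, vt[:k], explained[:k]

def project_to_model(shape, mean_shape, modes):
    # Approximate a new shape using only the retained modes.
    coeffs = modes @ (shape.ravel() - mean_shape)
    approx = mean_shape + modes.T @ coeffs
    return approx.reshape(-1, 3), coeffs
```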

  15. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for two-dimensional images with discrepancy structures was proposed in this paper. An atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. Registration performance was measured by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical abstract: schematic diagram of the atlas-based multimodal registration method.

  16. 17 CFR 200.30-1 - Delegation of authority to Director of Division of Corporation Finance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SECURITIES AND EXCHANGE COMMISSION ORGANIZATION; CONDUCT AND ETHICS; AND INFORMATION AND REQUESTS... right to have such denial reviewed by the Commission. (4) To accelerate the use or publication of any...), of an objection to the use of an automatic shelf registration as defined in Rule 405 (§ 230.405 of...

  17. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  18. 76 FR 8710 - Codex Alimentarius Commission: Meeting of the Codex Committee on Contaminants in Food

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-15

    ... registration is encouraged as it will expedite entry into the building and its parking area. You should also... require parking, please include the vehicle make and tag number when you register. Attendees that are not... provides automatic and customized access to selected food safety news and information. This service is...

  19. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

    Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving the user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics from one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of fast segmentation and minimal, reusable user input makes this a powerful technique for the segmentation of medical images.
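
    The reduction of the labeling problem to a minimum s-t cut can be illustrated on a toy one-dimensional "image" with networkx; the unary costs below stand in for the brush-stroke statistics and are not the paper's CRF energy.

```python
# Toy illustration of the s-t graph-cut formulation: terminal edges carry data
# costs, neighbor edges carry a smoothness penalty, and the minimum cut yields
# a globally optimal binary labeling.
import networkx as nx
import numpy as np

def graph_cut_segment(intensities, fg_mean, bg_mean, smoothness=2.0):
    n = len(intensities)
    g = nx.DiGraph()
    for i, v in enumerate(intensities):
        # Terminal (t-link) capacities: squared deviation from the statistics.
        g.add_edge("s", i, capacity=float((v - bg_mean) ** 2))  # paid if pixel i is labeled background
        g.add_edge(i, "t", capacity=float((v - fg_mean) ** 2))  # paid if pixel i is labeled foreground
        # Pairwise (n-link) capacities encourage smooth labels between neighbors.
        if i + 1 < n:
            g.add_edge(i, i + 1, capacity=smoothness)
            g.add_edge(i + 1, i, capacity=smoothness)
    _, (source_side, _) = nx.minimum_cut(g, "s", "t")
    # Pixels remaining on the source side are labeled foreground (1).
    return np.array([1 if i in source_side else 0 for i in range(n)])

# Example: a bright object on a dark background.
labels = graph_cut_segment([10, 12, 11, 90, 95, 92, 13], fg_mean=90, bg_mean=10)
```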

  20. Non-imaged based method for matching brains in a common anatomical space for cellular imagery.

    PubMed

    Midroit, Maëllie; Thevenet, Marc; Fournel, Arnaud; Sacquet, Joelle; Bensafi, Moustafa; Breton, Marine; Chalençon, Laura; Cavelius, Matthias; Didier, Anne; Mandairon, Nathalie

    2018-04-22

    Cellular imagery using histology sections is one of the most common techniques used in Neuroscience. However, this indispensable technique has severe limitations due to the need to delineate regions of interest on each brain, which is time-consuming and variable across experimenters. We developed algorithms based on a vector-field elastic registration allowing fast, automatic realignment of experimental brain sections and associated labeling to a brain atlas with high accuracy and in a streamlined way. Thereby, brain areas of interest can be finely identified without outlining them, and different experimental groups can be easily analyzed using conventional tools. This method directly readjusts labeling in the brain atlas without any intermediate manipulation of images. We mapped the expression of cFos in the mouse brain (C57Bl/6J) after olfactory stimulation or a non-stimulated control condition and found an increased density of cFos-positive cells in the primary olfactory cortex, but not in non-olfactory areas, of the odor-stimulated animals compared to the controls. Existing matching methods are based on image registration, which often requires expensive equipment (two-photon tomography mapping or imaging with iDISCO) or is less accurate since it relies on the mutual information contained in the images. Our new method is non-image-based and relies only on the positions of detected labeling and the external contours of sections. We thus provide a new method that permits automated matching of histology sections of experimental brains with a brain reference atlas. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of a segmentation computed using both images is taken as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique compared to those of several existing methods.

  2. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images with an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
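
    A minimal numpy sketch of the warping function described above (a distance-weighted exponential sum of landmark displacement vectors with landmark-specific weighting factors) might look as follows; the kernel width and the weight normalization are assumptions, not the paper's exact formulation.

```python
# Sketch: displacement at a point is a normalized, distance-weighted sum of
# landmark displacement vectors; the per-landmark factors are what the
# evolution strategy would optimize.
import numpy as np

def warp_points(points, src_landmarks, dst_landmarks, weights, sigma=10.0):
    # points: (n, 3); src/dst_landmarks: (m, 3); weights: (m,) landmark-specific factors.
    displacements = dst_landmarks - src_landmarks                               # (m, 3)
    d = np.linalg.norm(points[:, None, :] - src_landmarks[None, :, :], axis=2)  # (n, m)
    w = weights[None, :] * np.exp(-(d ** 2) / (2.0 * sigma ** 2))               # (n, m)
    w /= w.sum(axis=1, keepdims=True) + 1e-12                                   # normalize contributions
    return points + w @ displacements
```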

  3. Automatic segmentation of the facial nerve and chorda tympani using image registration and statistical priors

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit M.

    2008-03-01

    In cochlear implant surgery, an electrode array is permanently implanted in the cochlea to stimulate the auditory nerve and allow deaf people to hear. A minimally invasive surgical technique, percutaneous cochlear access, has recently been proposed, in which a single hole is drilled from the skull surface to the cochlea. For the method to be feasible, a safe and effective drilling trajectory must be determined using a pre-operative CT. Segmentation of the structures of the ear would improve trajectory planning safety and efficiency and enable the possibility of automated planning. Two important structures of the ear, the facial nerve and chorda tympani, present difficulties for intensity-based segmentation due to their small diameter (as small as 1.0 and 0.4 mm, respectively) and adjacent structures of similar intensity that vary between patients in CT imagery. A multipart, model-based segmentation algorithm is presented in this paper that accomplishes automatic segmentation of the facial nerve and chorda tympani. Segmentation results are presented for 14 test ears and are compared to manually segmented surfaces. The results show that the mean error in structure wall localization is 0.2 and 0.3 mm for the facial nerve and chorda, respectively, demonstrating that the proposed method is robust and accurate.

  4. Performing label-fusion-based segmentation using multiple automatically generated templates.

    PubMed

    Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P

    2013-10-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited by atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time-consuming to construct, by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
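
    The voxel-by-voxel label-voting step can be sketched as a plain majority vote over the candidate labelmaps; the array shapes are illustrative assumptions and this is not the MAGeT Brain code.

```python
# Sketch of voxel-wise majority-vote label fusion over propagated labelmaps.
import numpy as np

def majority_vote(labelmaps):
    # labelmaps: array (n_templates, X, Y, Z) of integer labels in target space.
    labelmaps = np.asarray(labelmaps)
    n_labels = labelmaps.max() + 1
    # Count votes per label at every voxel, then take the most frequent label.
    votes = np.stack([(labelmaps == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)
```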

  5. Discriminative confidence estimation for probabilistic multi-atlas label fusion.

    PubMed

    Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard

    2017-12-01

    Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, on probabilistic statistical frameworks, or on a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and the segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
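
    A hedged sketch of confidence-weighted fusion follows: each atlas votes with a per-voxel confidence map, here assumed to be already estimated (for example by the supervised model described above); this is an illustration of the general idea, not the paper's estimator.

```python
# Sketch: accumulate confidence mass per label at every voxel and pick the
# label with the largest mass.
import numpy as np

def weighted_label_fusion(labelmaps, confidences):
    # labelmaps:   (n_atlases, X, Y, Z) integer labels propagated to the target.
    # confidences: (n_atlases, X, Y, Z) non-negative per-voxel atlas confidences.
    labelmaps = np.asarray(labelmaps)
    confidences = np.asarray(confidences)
    n_labels = labelmaps.max() + 1
    scores = np.stack([np.where(labelmaps == lab, confidences, 0.0).sum(axis=0)
                       for lab in range(n_labels)])
    return scores.argmax(axis=0)
```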

  6. Three-Dimensional Assessment of Temporomandibular Joint Using MRI-CBCT Image Registration

    PubMed Central

    Lagravere, Manuel; Boulanger, Pierre; Jaremko, Jacob L.; Major, Paul W.

    2017-01-01

    Purpose: To introduce a new approach to reconstruct a 3D model of the TMJ using magnetic resonance imaging (MRI) and cone-beam computed tomography (CBCT) registered images, and to evaluate the intra-examiner reproducibility of reconstructing the 3D models of the TMJ. Methods: MRI and CBCT images of five patients (10 TMJs) were obtained. Multiple MRI and CBCT images were registered using a mutual-information-based algorithm. The articular disc, condylar head and glenoid fossa were segmented on two different occasions, at least one week apart, by one investigator, and 3D models were reconstructed. Differences between the segmentations at the two occasions were automatically measured using the surface contours (average perpendicular distance) and the volume overlap (Dice similarity index) of the 3D models. Descriptive analysis of the changes at the two occasions, including means and standard deviations (SD), was reported to describe the intra-examiner reproducibility. Results: The automatic segmentation of the condyle revealed a maximum distance change of 1.9±0.93 mm, a similarity index of 98% and a root mean squared distance of 0.1±0.08 mm, and the glenoid fossa revealed a maximum distance change of 2±0.52 mm, a similarity index of 96% and a root mean squared distance of 0.2±0.04 mm. The manual segmentation of the articular disc revealed a maximum distance change of 3.6±0.32 mm, a similarity index of 80% and a root mean squared distance of 0.3±0.1 mm. Conclusion: The MRI-CBCT registration provides a reliable tool to reconstruct 3D models of the TMJ's soft and hard tissues, allows quantification of the articular disc morphology and position changes with associated differences of the condylar head and glenoid fossa, and facilitates measuring tissue changes over time. PMID:28095486
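
    The volume-overlap measure used above, the Dice similarity index, is straightforward to compute for two binary segmentations of the same structure; a minimal sketch, with array inputs assumed:

```python
# Dice similarity index between two binary segmentations.
import numpy as np

def dice_similarity(seg_a, seg_b):
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom
```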

  7. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    NASA Astrophysics Data System (ADS)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by using the elliptical Hough transform on multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied to the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  8. Placental fetal stem segmentation in a sequence of histology images

    NASA Astrophysics Data System (ADS)

    Athavale, Prashant; Vese, Luminita A.

    2012-02-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. Several problems stand in the way of this goal: the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies in manual tracing, and the very complicated texture patterns of various tissue types without clear characteristics. In this paper we propose a novel algorithm that achieves automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges, in which the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation. The segmentation thus obtained can then be used as an initial guess for the segmentation of the rest of the images in the sequence. This constitutes an important step in the extraction and understanding of the fetal stem vasculature.
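
    For the active-contours-without-edges component, scikit-image's classical Chan-Vese implementation can serve as a stand-in for the 'dynamic' variant developed in the paper; the registered grayscale patch and initial mask are assumed inputs, and the parameter values are illustrative.

```python
# Hedged sketch: classical Chan-Vese segmentation as a stand-in for the
# paper's dynamic active-contours-without-edges model.
import numpy as np
from skimage.segmentation import chan_vese

def segment_fetal_stem(patch, init_mask=None):
    patch = patch.astype(float)
    # Use a signed initial level set derived from the (possibly inaccurate)
    # manual tracing when available, otherwise the default checkerboard.
    init = 'checkerboard' if init_mask is None else init_mask.astype(float) - 0.5
    # lambda1/lambda2 weight the inside/outside fitting terms; the paper's
    # dynamic version recomputes such coefficients iteratively.
    return chan_vese(patch, mu=0.25, lambda1=1.0, lambda2=1.0,
                     init_level_set=init)
```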

  9. Grayscale inhomogeneity correction method for multiple mosaicked electron microscope images

    NASA Astrophysics Data System (ADS)

    Zhou, Fangxu; Chen, Xi; Sun, Rong; Han, Hua

    2018-04-01

    Electron microscope image stitching is highly desired for acquiring microscopic-resolution images of large target scenes in neuroscience. However, a mosaic of multiple electron microscope images may exhibit severe grayscale inhomogeneity due to instability of the electron microscope system and registration errors, which degrades the visual quality of the mosaicked EM images and complicates follow-up processing such as automatic object recognition. Consequently, a grayscale correction method for multiple mosaicked electron microscope images is indispensable in these areas. Unlike most previous grayscale correction methods, this paper designs a grayscale correction process for multiple EM images that tackles the difficulty of correcting many images jointly and achieves grayscale consistency in the overlap regions. We adjust the overall grayscale of the mosaicked images using the location and grayscale information of manually selected seed images, and then fuse the local overlap regions between adjacent images using Poisson image editing. Experimental results demonstrate the effectiveness of the proposed method.
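
    The overlap-fusion step based on Poisson image editing can be sketched with OpenCV's seamlessClone; the tile layout, mask convention and use of NORMAL_CLONE are assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the authors' code): Poisson blending of the overlap
# region between two adjacent mosaic tiles. Grayscale EM tiles are converted
# to 3-channel 8-bit images because seamlessClone expects that format.
import cv2
import numpy as np

def fuse_overlap(dst_tile, src_tile, overlap_mask):
    # dst_tile, src_tile: uint8 grayscale tiles already placed in mosaic coordinates.
    # overlap_mask: uint8 mask, 255 inside the overlap region to be blended.
    src_bgr = cv2.cvtColor(src_tile, cv2.COLOR_GRAY2BGR)
    dst_bgr = cv2.cvtColor(dst_tile, cv2.COLOR_GRAY2BGR)
    ys, xs = np.nonzero(overlap_mask)
    center = (int(xs.mean()), int(ys.mean()))   # center of the blended region
    blended = cv2.seamlessClone(src_bgr, dst_bgr, overlap_mask, center,
                                cv2.NORMAL_CLONE)
    return cv2.cvtColor(blended, cv2.COLOR_BGR2GRAY)
```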

  10. Computerized multiple image analysis on mammograms: performance improvement of nipple identification for registration of multiple views using texture convergence analyses

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana

    2004-05-01

    Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be detected accurately. This will be an important step towards automatic multiple image analysis for CAD techniques.

  11. A Statistically Representative Atlas for Mapping Neuronal Circuits in the Drosophila Adult Brain.

    PubMed

    Arganda-Carreras, Ignacio; Manoliu, Tudor; Mazuras, Nicolas; Schulze, Florian; Iglesias, Juan E; Bühler, Katja; Jenett, Arnim; Rouyer, François; Andrey, Philippe

    2018-01-01

    Imaging the expression patterns of reporter constructs is a powerful tool to dissect the neuronal circuits of perception and behavior in the adult brain of Drosophila, one of the major models for studying brain functions. To date, several Drosophila brain templates and digital atlases have been built to automatically analyze and compare collections of expression pattern images. However, there has been no systematic comparison of performance between alternative atlasing strategies and registration algorithms. Here, we objectively evaluated the performance of different strategies for building adult Drosophila brain templates and atlases. In addition, we used state-of-the-art registration algorithms to generate a new group-wise inter-sex atlas. Our results highlight the benefit of statistical atlases over individual ones and show that the newly proposed inter-sex atlas outperformed existing solutions for automated registration and annotation of expression patterns. Over 3,000 images from the Janelia Farm FlyLight collection were registered using the proposed strategy. These registered expression patterns can be searched and compared with a new version of the BrainBaseWeb system and BrainGazer software. We illustrate the validity of our methodology and brain atlas with registration-based predictions of expression patterns in a subset of clock neurons. The described registration framework should benefit brain studies in Drosophila and other insect species.

  12. Improving left ventricular segmentation in four-dimensional flow MRI using intramodality image registration for cardiac blood flow analysis.

    PubMed

    Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino

    2018-01-01

    Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    PubMed Central

    Deeley, M A; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Yei, F; Koyama, T; Ding, G X; Dawant, B M

    2011-01-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8–0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4–0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (−4.3, +5.4) mm for the automatic system to (−3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms. PMID:21725140

  14. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    NASA Astrophysics Data System (ADS)

    Deeley, M. A.; Chen, A.; Datteri, R.; Noble, J. H.; Cmelak, A. J.; Donnelly, E. F.; Malcolm, A. W.; Moretti, L.; Jaboin, J.; Niermann, K.; Yang, Eddy S.; Yu, David S.; Yei, F.; Koyama, T.; Ding, G. X.; Dawant, B. M.

    2011-07-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.

  15. 3D-2D registration in endovascular image-guided surgery: evaluation of state-of-the-art methods on cerebral angiograms.

    PubMed

    Mitrović, Uroš; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2018-02-01

    Image guidance for minimally invasive surgery is based on spatial co-registration and fusion of 3D pre-interventional images and treatment plans with the 2D live intra-interventional images. The spatial co-registration, or 3D-2D registration, is the key enabling technology; however, the performance of state-of-the-art automated methods is rather unclear as they have not been assessed under the same test conditions. Herein we perform a quantitative and comparative evaluation of ten state-of-the-art methods for 3D-2D registration on a public dataset of clinical angiograms. The image database consisted of 3D and 2D angiograms of 25 patients undergoing treatment for cerebral aneurysms or arteriovenous malformations. On each of the datasets, highly accurate "gold-standard" registrations of the 3D and 2D images were established based on patient-attached fiducial markers. The database was used to rigorously evaluate ten state-of-the-art 3D-2D registration methods, namely two intensity-based, two gradient-based, three feature-based and three hybrid methods, for registration of the 3D pre-interventional image to either monoplane or biplane 2D images. Intensity-based methods were the most accurate in all tests (0.3 mm). One of the hybrid methods was the most robust, with 98.75% successful registrations (SR) and a capture range of 18 mm for registration of 3D to biplane 2D angiograms. In general, registration accuracy was similar whether the 3D image was registered onto monoplane or biplane 2D images; however, the SR was substantially lower in the case of 3D to monoplane 2D registration. Two feature-based and two hybrid methods had clinically feasible execution times on the order of a second. The performance of the methods falls below expectations in terms of robustness for registration of 3D to monoplane 2D images, while translation into clinical image guidance systems seems readily feasible for methods that register the 3D pre-interventional image onto biplanar intra-interventional 2D images.

  16. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from the multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite covers a distance of about 14 km on the ground and the earth rotates through an angle of 30". Yaw steering is performed to compensate for the earth rotation effects, thus ensuring a first-level registration between the bands. However, this does not achieve perfect co-registration because of attitude fluctuations, satellite movement, terrain topography, PSM steering and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of the Resourcesat-2 Liss-4 MX image at Level 1A. The algorithm uses the principles of the photogrammetric collinearity equations. The model employs an orbit trajectory and attitude fitting with polynomials, followed by direct geo-referencing with a global DEM, with which every pixel in the middle band is mapped to a particular position on the surface of the earth for the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. Accurate calculation of the exterior orientation parameters with GCPs is not required. Instead, the relative line-of-sight vector of each detector in the different bands in relation to the payload is addressed. With this method a band-to-band registration accuracy of better than 0.3 pixels could be achieved even in high-relief areas.
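
    For reference, the standard photogrammetric collinearity equations underlying such a viewing-geometry model relate an image point (x, y) to its ground coordinates (X, Y, Z) through the perspective centre (X_S, Y_S, Z_S), the focal length f, the principal point (x_0, y_0) and the elements r_ij of the attitude rotation matrix (textbook form, not the paper's notation):

```latex
x - x_0 \;=\; -f\,
  \frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
       {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
\qquad
y - y_0 \;=\; -f\,
  \frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
       {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
```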

  17. Registration of Laser Scanning Point Clouds: A Review.

    PubMed

    Cheng, Liang; Chen, Song; Liu, Xiaoqiang; Xu, Hao; Wu, Yang; Li, Manchun; Chen, Yanming

    2018-05-21

    The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine registration strategy is commonly used for LiDAR point clouds registration. The coarse registration method is first used to achieve a good initial position, based on which registration is then refined utilizing the fine registration method. According to the coarse-to-fine framework, this paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly-used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles.
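
    The coarse-to-fine strategy can be illustrated with Open3D: a coarse point-to-point ICP with a generous correspondence threshold supplies the initial alignment, which a tighter point-to-plane ICP then refines. The voxel size and thresholds are placeholders, and this is only one of many possible coarse/fine pairings discussed in the review.

```python
# Hedged sketch of a coarse-to-fine point cloud registration with Open3D.
import numpy as np
import open3d as o3d

def coarse_to_fine_registration(source, target, voxel=0.5):
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    # Normals are needed on the target for point-to-plane refinement.
    tgt.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=4 * voxel, max_nn=30))

    # Coarse step: generous correspondence distance, point-to-point ICP.
    coarse = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=10 * voxel, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Fine step: tight correspondence distance, point-to-plane ICP.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=1.5 * voxel, init=coarse.transformation,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```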

  18. Registration of Laser Scanning Point Clouds: A Review

    PubMed Central

    Cheng, Liang; Chen, Song; Xu, Hao; Wu, Yang; Li, Manchun

    2018-01-01

    The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine registration strategy is commonly used for LiDAR point clouds registration. The coarse registration method is first used to achieve a good initial position, based on which registration is then refined utilizing the fine registration method. According to the coarse-to-fine framework, this paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly-used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles. PMID:29883397

  19. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from the complete point cloud and high-resolution images requires the matching of images and point clouds, the acquisition of corresponding feature points, data registration, etc. However, establishing the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of a large number of images, and the matching of large images with their corresponding point clouds, are the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the synergy of a mobile phone app. First, we develop an Android app, take pictures, and record related classification information. Second, all the images are automatically grouped using the recorded information. Third, a matching algorithm is used to match the global and local images. Based on the one-to-one correspondence between the global image and the point cloud reflectance intensity image, the automatic matching of each image with its corresponding LiDAR point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established from corresponding feature points. In this way we can establish a data structure linking the global image, the local images within the global image, and the point cloud corresponding to each local image, and support visual management and querying of the images.

  20. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be; Department of Radiotherapy, Ghent University, Ghent; Wouters, Johan

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.

  1. Detection, modeling and matching of pleural thickenings from CT data towards an early diagnosis of malignant pleural mesothelioma

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Kraus, Thomas

    2014-03-01

    Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While early diagnosis plays a key role in early treatment, and therefore helps to reduce morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems for automatic assessment of pleural mesothelioma have been reported worldwide. In this paper, an image analysis pipeline that automatically detects pleural thickenings and measures their volume is described. We first automatically delineate the pleural contour in the CT images. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step based on a probabilistic Hounsfield unit model of pleural plaques then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on the volumetry of the 3D model, created by a mesh construction algorithm followed by a Laplace-Beltrami eigenfunction expansion surface smoothing technique. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data is carried out based on semi-automatic lung registration to assess their growth rate. With these methods, a new computer-assisted diagnosis system is presented in order to ensure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma at an early stage.

  2. Hierarchical and successive approximate registration of the non-rigid medical image based on thin-plate splines

    NASA Astrophysics Data System (ADS)

    Hu, Jinyan; Li, Li; Yang, Yunfeng

    2017-06-01

    A hierarchical, successive approximation registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. There are two major novelties in the proposed method. First, hierarchical registration based on the wavelet transform is used; the approximation image of the wavelet transform is selected as the registration object. Second, a successive approximation scheme is used to accomplish the non-rigid registration: local regions of the image pair are registered coarsely using thin-plate splines, and the current coarse registration result then serves as the object to be registered in the following iteration. Experiments show that the proposed method is effective for registering non-rigid medical images.
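
    The thin-plate-spline building block can be sketched with SciPy's RBFInterpolator, which supports a 'thin_plate_spline' kernel; the landmark arrays below are illustrative and this is not the paper's implementation.

```python
# Minimal sketch of a thin-plate-spline mapping fitted from corresponding
# control points and applied to query coordinates.
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp_coordinates(src_landmarks, dst_landmarks, query_points):
    # src_landmarks, dst_landmarks: (m, 2) corresponding 2D control points.
    # query_points: (n, 2) coordinates to be mapped by the fitted spline.
    tps = RBFInterpolator(src_landmarks, dst_landmarks, kernel='thin_plate_spline')
    return tps(query_points)

# Example: warp the four corners of a 64x64 region given five control points.
src = np.array([[0, 0], [63, 0], [0, 63], [63, 63], [32, 32]], dtype=float)
dst = src + np.array([[2, 1], [-1, 2], [1, -2], [-2, -1], [0, 3]], dtype=float)
corners = np.array([[0, 0], [63, 0], [0, 63], [63, 63]], dtype=float)
warped = tps_warp_coordinates(src, dst, corners)
```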

  3. FIJI Macro 3D ART VeSElecT: 3D Automated Reconstruction Tool for Vesicle Structures of Electron Tomograms

    PubMed Central

    Kaltdorf, Kristin Verena; Schulze, Katja; Helmprobst, Frederik; Kollmannsberger, Philip; Stigloher, Christian

    2017-01-01

    Automatic image reconstruction is critical to cope with the steadily increasing data from advanced microscopy. We describe here the Fiji macro 3D ART VeSElecT, which we developed to study synaptic vesicles in electron tomograms. We apply this tool to quantify vesicle properties (i) in embryonic Danio rerio 4 and 8 days post fertilization (dpf) and (ii) to compare Caenorhabditis elegans N2 neuromuscular junctions (NMJ) of the wild-type and the septin mutant (unc-59(e261)). We demonstrate development-specific and mutant-specific changes in synaptic vesicle pools in both models. We confirm the functionality of our macro by applying 3D ART VeSElecT to zebrafish NMJs, showing smaller vesicles in 8 dpf embryos than in 4 dpf embryos, which was validated by manual reconstruction of the vesicle pool. Furthermore, we analyze the impact of the C. elegans septin mutant unc-59(e261) on vesicle pool formation and vesicle size. Automated vesicle registration and characterization was implemented in Fiji as two macros (registration and measurement). This flexible arrangement allows in particular reducing false positives by an optional manual revision step. Preprocessing and contrast enhancement work on image stacks of 1 nm/pixel in the x and y directions. Semi-automated cell selection was integrated. 3D ART VeSElecT removes interfering components, detects vesicles by 3D segmentation and calculates vesicle volume and diameter (spherical approximation, inner/outer diameter). Results are collected in color using the RoiManager plugin, including the possibility of manually removing non-matching confounder vesicles. Detailed evaluation considered performance (detected vesicles) and specificity (true vesicles) as well as precision and recall. We furthermore show a gain in segmentation and morphological filtering compared to learning-based methods and a large time gain compared to manual segmentation. 3D ART VeSElecT shows small error rates and can be up to 68 times faster than manual annotation. Both automatic and semi-automatic modes are explained, including a tutorial. PMID:28056033

  4. Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-03-01

    We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from the different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated vessel quality on a ranking scale of 1 to 6. Six and 10 cCTA cases were used as the training and test sets in this preliminary study. For the 10 test cases, the agreement between the automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings was 79.7%, and the agreements between AI-BQ and the other two readers were 74.8% and 83.7%, respectively. The results demonstrate that the performance of our automated method was comparable to that of experienced readers for identification of the best-quality coronary segments.

  5. Robust Nonrigid Multimodal Image Registration using Local Frequency Maps*

    PubMed Central

    Jian, Bing; Vemuri, Baba C.; Marroquin, José L.

    2008-01-01

    Automatic multi-modal image registration is central to numerous tasks in medical imaging today and has a vast range of applications, e.g., image guidance, atlas construction, etc. In this paper, we present a novel multi-modal 3D non-rigid registration algorithm wherein the 3D images to be registered are represented by their corresponding local frequency maps, efficiently computed using the Riesz transform as opposed to the popularly used Gabor filters. The non-rigid registration between these local frequency maps is formulated in a statistically robust framework involving the minimization of the integral squared error, a.k.a. L2E (L2 error). This error is expressed as the squared difference between the true density of the residual (which is the squared difference between the non-rigidly transformed reference and the target local frequency representations) and a Gaussian or mixture-of-Gaussians density approximation of the same. The non-rigid transformation is expressed in a B-spline basis to achieve the desired smoothness in the transformation as well as computational efficiency. The key contributions of this work are (i) the use of the Riesz transform to achieve better efficiency in computing the local frequency representation in comparison to Gabor filter-based approaches, (ii) a new mathematical model for local-frequency-based non-rigid registration, and (iii) analytic computation of the gradient of the robust non-rigid registration cost function to achieve efficient and accurate registration. The proposed non-rigid L2E-based registration is a significant extension of research reported in the literature to date. We present experimental results for registering several real data sets with synthetic and real non-rigid misalignments. PMID:17354721

  6. Multi-atlas-based CT synthesis from conventional MRI with patch-based refinement for MRI-based radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L.

    2017-02-01

    Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.

  7. Joint modeling and registration of cell populations in cohorts of high-dimensional flow cytometric data.

    PubMed

    Pyne, Saumyadipta; Lee, Sharon X; Wang, Kui; Irish, Jonathan; Tamayo, Pablo; Nazaire, Marc-Danie; Duong, Tarn; Ng, Shu-Kay; Hafler, David; Levy, Ronald; Nolan, Garry P; Mesirov, Jill; McLachlan, Geoffrey J

    2014-01-01

    In biomedical applications, an experimenter encounters different potential sources of variation in data such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples, it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used for registering populations across samples and for classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in the R package EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.

  8. Multi-atlas-based CT synthesis from conventional MRI with patch-based refinement for MRI-based radiotherapy planning.

    PubMed

    Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L

    2017-02-01

    Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.

  9. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source in airborne LIDAR systems. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are sampled discretely, truly corresponding laser points rarely exist across different strips, so the traditional corresponding-point methodology does not apply directly to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate the tie point coordinate from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  10. Fast cine-magnetic resonance imaging point tracking for prostate cancer radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Dowling, J.; Dang, K.; Fox, Chris D.; Chandra, S.; Gill, Suki; Kron, T.; Pham, D.; Foroudi, F.

    2014-03-01

    The analysis of intra-fraction organ motion is important for improving the precision of radiation therapy treatment delivery. One method to quantify this motion is for one or more observers to manually identify anatomic points of interest (POIs) on each slice of a cine-MRI sequence. However, this is labour intensive, and inter- and intra-observer variation can introduce uncertainty. In this paper, a fast method for non-rigid-registration-based point tracking in cine-MRI sagittal and coronal series is described, which identifies POIs in 0.98 seconds per sagittal slice and 1.35 seconds per coronal slice. The manual and automatic points were highly correlated (r > 0.99, p < 0.001) for all organs, and the differences were generally less than 1 mm. For prostate planning, peristalsis and rectal gas can result in unpredictable out-of-plane motion, suggesting the results may require manual verification.

  11. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  12. Validity of registration of ICD codes and prescriptions in a research database in Swedish primary care: a cross-sectional study in Skaraborg primary care database.

    PubMed

    Hjerpe, Per; Merlo, Juan; Ohlsson, Henrik; Bengtsson Boström, Kristina; Lindblad, Ulf

    2010-04-23

    In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered less often (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3). Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians, where the greatest amount of variation was found.
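    For reference, the median odds ratio (MOR) used here is conventionally computed from the between-cluster variance on the log-odds scale as MOR = exp(sqrt(2*sigma^2) * Phi^-1(0.75)). The snippet below illustrates the calculation; the variance values are back-calculated for illustration only and are not figures reported by the study.

```python
from math import exp, sqrt
from statistics import NormalDist

def median_odds_ratio(cluster_variance):
    """MOR = exp( sqrt(2 * variance) * Phi^{-1}(0.75) ) on the log-odds scale."""
    return exp(sqrt(2.0 * cluster_variance) * NormalDist().inv_cdf(0.75))

# hypothetical variance components, chosen only to illustrate the formula
print(median_odds_ratio(2.26))   # approx. 4.2, comparable to the reported physician-level MOR
print(median_odds_ratio(0.76))   # approx. 2.3, comparable to the reported HCC-level MOR
```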

  13. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera.

    PubMed

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-04-14

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
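    The per-pose rigid estimation between the two point clouds can be sketched with a standard SVD-based (Kabsch) least-squares alignment of corresponding points, as below. The statistical pooling over multiple poses described in the paper is not reproduced here; the toy example only verifies that a known rotation and translation are recovered.

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (both N x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# toy check: recover a known rotation and translation
rng = np.random.default_rng(0)
P = rng.random((30, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_from_correspondences(P, Q)
print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))
```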

  14. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    PubMed Central

    Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio

    2016-01-01

    The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera. PMID:27089344

  15. Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy.

    PubMed

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui; Zhou, Zhengyang; Yu, David S; Beitler, Jonathan J; Curran, Walter J; Liu, Tian

    2014-12-01

    To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage of volume differences between the automated segmentations and those of the physicians' manual contours were 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy. Copyright © 2014 Elsevier Inc. All rights reserved.
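    A simplified sketch of the voxel-wise kernel SVM step, assuming a reduced feature set (intensity and gradient magnitude only) and a propagated atlas mask as training labels; the deformable atlas registration and the paper's full texture feature set are omitted, and the toy data are synthetic.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def voxel_features(image):
    """Per-voxel feature vectors: intensity and gradient magnitude (simplified set)."""
    grad = ndimage.gaussian_gradient_magnitude(image, sigma=1.0)
    return np.stack([image.ravel(), grad.ravel()], axis=1)

def train_parotid_svm(pre_rt_image, propagated_mask):
    """Train a kernel SVM from the subject-specific atlas pair (image + warped mask)."""
    X = voxel_features(pre_rt_image)
    y = propagated_mask.ravel().astype(int)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y)
    return clf

def segment(clf, post_rt_image):
    labels = clf.predict(voxel_features(post_rt_image))
    return labels.reshape(post_rt_image.shape)

# toy usage with a small synthetic 2D "slice"
rng = np.random.default_rng(0)
img = rng.normal(0, 1, (32, 32)); img[10:20, 10:20] += 3.0   # bright "gland"
mask = np.zeros_like(img, dtype=bool); mask[10:20, 10:20] = True
model = train_parotid_svm(img, mask)
print(segment(model, img).sum())
```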

  16. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
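    A minimal PyTorch sketch of the idea: a small CNN takes a two-channel patch (fixed image and registered moving image) and regresses the local registration error norm. The architecture, patch size, and training data below are made up for illustration and are not the network described in the paper.

```python
import torch
import torch.nn as nn

class RegErrorNet(nn.Module):
    """Tiny CNN regressing the registration error norm from a 2-channel patch."""
    def __init__(self, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 1),   # predicted error norm in pixels
        )

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

# training pairs would be (original, randomly deformed) patches with the known
# deformation norm at the patch centre as the regression target
model = RegErrorNet()
patches = torch.randn(8, 2, 32, 32)   # batch of fixed/moving patch pairs
targets = torch.rand(8) * 8.0         # hypothetical error norms (0-8 px)
loss = nn.MSELoss()(model(patches), targets)
loss.backward()
print(float(loss))
```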

  17. Change detection of medical images using dictionary learning techniques and PCA

    NASA Astrophysics Data System (ADS)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-03-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of L2 norm over L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes of MR images and compare our results with those provided in recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms the previous methods. It detects clinical changes while ignoring the changes due to patient's position and other acquisition artifacts.
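    The block-matching core of such an approach can be sketched as follows: flattened baseline blocks are projected onto a PCA basis to form a compact local dictionary, and a follow-up block is matched under either the L1 or L2 distance. Block size, component count, and data are illustrative only; this is not the EigenBlockCD implementation.

```python
import numpy as np

def pca_reduce(blocks, n_components=8):
    """Project flattened image blocks onto their top principal components."""
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]
    return centered @ basis.T, mean, basis

def best_match(query, dictionary, norm="l2"):
    """Index of the dictionary entry closest to the query under L1 or L2 distance."""
    diff = dictionary - query
    d = np.abs(diff).sum(axis=1) if norm == "l1" else (diff ** 2).sum(axis=1)
    return int(np.argmin(d))

# toy dictionary of flattened 9x9 blocks from a baseline scan
rng = np.random.default_rng(0)
baseline_blocks = rng.random((50, 81))
coords, mean, basis = pca_reduce(baseline_blocks)
followup_block = baseline_blocks[17] + rng.normal(0, 0.01, 81)  # nearly unchanged block
query = (followup_block - mean) @ basis.T
print(best_match(query, coords, "l2"), best_match(query, coords, "l1"))
```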

  18. Schaltenbrand-Wahren-Talairach-Tournoux brain atlas registration

    NASA Astrophysics Data System (ADS)

    Nowinski, Wieslaw L.; Fang, Anthony; Nguyen, Bonnie T.

    1995-04-01

    The CIeMed electronic brain atlas system contains electronic versions of multiple paper brain atlases with 3D extensions; some other 3D brain atlases are under development. Its primary goal is to provide automatic labeling and quantification of brains. The atlas data are digitized, enhanced, color coded, labeled, and organized into volumes. The atlas system provides several tools for registration, 3D display and real-time manipulation, object extraction/editing, quantification, image processing and analysis, reformatting, anatomical index operations, and file handling. The two main stereotactic atlases provided by the system are electronic and enhanced versions of Atlas of Stereotaxy of the Human Brain by Schaltenbrand and Wahren and Co-Planar Stereotactic Atlas of the Human Brain by Talairach and Tournoux. Each of these atlases has its own strengths, and their combination has several advantages. First, complementary information is merged and provided to the user. Second, the user can register data with a single atlas only, as the Schaltenbrand-Wahren-Talairach-Tournoux registration is data-independent. And last but not least, a direct registration of the Schaltenbrand-Wahren microseries with MRI data may not be feasible, since cerebral deep structures are usually not clearly discernible on MRI images. This paper addresses registration of the Schaltenbrand-Wahren and Talairach-Tournoux brain atlases. A modified proportional grid system transformation is introduced and suitable sets of landmarks identifiable in both atlases are defined. The accuracy of registration is discussed. Continuous navigation in the multi-atlas/patient data space is presented.

  19. Estimation of slipping organ motion by registration with direction-dependent regularization.

    PubMed

    Schmidt-Richberg, Alexander; Werner, René; Handels, Heinz; Ehrhardt, Jan

    2012-01-01

    Accurate estimation of respiratory motion is essential for many applications in medical 4D imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done by non-linear registration of image scans at different states of the breathing cycle but without further modeling of specific physiological motion properties. In this context, the accurate computation of respiration-driven lung motion is especially challenging because this organ is sliding along the surrounding tissue during the breathing cycle, leading to discontinuities in the motion field. Without considering this property in the registration model, common intensity-based algorithms cause incorrect estimation along the object boundaries. In this paper, we present a model for incorporating slipping motion in image registration. Extending the common diffusion registration by distinguishing between normal- and tangential-directed motion, we are able to estimate slipping motion at the organ boundaries while preventing gaps and ensuring smooth motion fields inside and outside. We further present an algorithm for a fully automatic detection of discontinuities in the motion field, which does not rely on a prior segmentation of the organ. We evaluate the approach for the estimation of lung motion based on 23 inspiration/expiration pairs of thoracic CT images. The results show a visually more plausible motion estimation. Moreover, the target registration error is quantified using manually defined landmarks and a significant improvement over the standard diffusion regularization is shown. Copyright © 2011 Elsevier B.V. All rights reserved.
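    The key ingredient of the direction-dependent regularization is the decomposition of the displacement at the organ boundary into normal and tangential components, which are then smoothed differently. The sketch below shows only that decomposition for a single boundary point with a known unit normal; the smoothing itself and the automatic discontinuity detection are omitted.

```python
import numpy as np

def split_normal_tangential(displacement, normal):
    """Decompose a displacement vector into components normal and tangential
    to the organ boundary (normal must be a unit vector)."""
    normal = np.asarray(normal, dtype=float)
    u = np.asarray(displacement, dtype=float)
    u_normal = np.dot(u, normal) * normal
    u_tangential = u - u_normal
    return u_normal, u_tangential

# at a lung/chest-wall boundary point: motion mostly along the surface (sliding)
u = np.array([0.5, 0.2, 8.0])   # mm, mostly cranio-caudal sliding
n = np.array([1.0, 0.0, 0.0])   # outward boundary normal
u_n, u_t = split_normal_tangential(u, n)
print(u_n, u_t)
# in direction-dependent regularization, the normal component is smoothed across
# the boundary (preventing gaps and overlaps) while the tangential component is
# smoothed only within each side, allowing discontinuous sliding motion
```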

  20. Automated analysis of whole skeletal muscle for muscular atrophy detection of ALS in whole-body CT images: preliminary study

    NASA Astrophysics Data System (ADS)

    Kamiya, Naoki; Ieda, Kosuke; Zhou, Xiangrong; Yamada, Megumi; Kato, Hiroki; Muramatsu, Chisako; Hara, Takeshi; Miyoshi, Toshiharu; Inuzuka, Takashi; Matsuo, Masayuki; Fujita, Hiroshi

    2017-03-01

    Amyotrophic lateral sclerosis (ALS) causes functional disorders such as difficulty in breathing and swallowing through the atrophy of voluntary muscles. ALS in its early stages is difficult to diagnose because of the difficulty in differentiating it from other muscular diseases. In addition, image inspection methods for aggressive diagnosis of ALS have not yet been established. The purpose of this study is to develop an automatic analysis system of the whole skeletal muscle to support the early differential diagnosis of ALS using whole-body CT images. In this study, regions of muscular atrophy, including those in ALS patients, are automatically identified after recognizing and segmenting the whole skeletal muscle in the preliminary steps. First, the skeleton is identified from its gray-value information. Second, the initial area of the body cavity is recognized by deformation of the thoracic cavity based on the anatomically segmented skeleton. Third, the abdominal cavity boundary is recognized using ABM, and the body cavity is then precisely recognized by a non-rigid registration method based on reference points on the abdominal cavity boundary. Fourth, the whole skeletal muscle is recognized by excluding the skeleton, the body cavity, and the subcutaneous fat. Finally, areas of muscular atrophy, including those in ALS patients, are automatically identified by comparing muscle mass. The experiments were carried out on ten cases with abnormalities of the skeletal muscle. Global recognition and segmentation of the whole skeletal muscle were realized well in eight cases. Moreover, areas of muscular atrophy in ALS patients were identified well in the lower limbs. As a result, this study establishes the basic technology for detecting muscular atrophy, including that caused by ALS. In the future, it will be necessary to consider methods to differentiate other kinds of muscular atrophy, to examine a larger number of cases across disease stages and types, and to pursue clinical application of this detection method for early ALS detection.

  1. Improving anatomical mapping of complexly deformed anatomy for external beam radiotherapy and brachytherapy dose accumulation in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vásquez Osorio, Eliana M., E-mail: e.vasquezosorio@erasmusmc.nl; Kolkman-Deurloo, Inger-Karine K.; Schuring-Pereira, Monica

    Purpose: In the treatment of cervical cancer, large anatomical deformations, caused by, e.g., tumor shrinkage, bladder and rectum filling changes, organ sliding, and the presence of the brachytherapy (BT) applicator, prohibit the accumulation of external beam radiotherapy (EBRT) and BT dose distributions. This work proposes a structure-wise registration with vector field integration (SW+VF) to map the largely deformed anatomies between EBRT and BT, paving the way for 3D dose accumulation between EBRT and BT. Methods: T2w-MRIs acquired before EBRT and as a part of the MRI-guided BT procedure for 12 cervical cancer patients, along with the manual delineations of the bladder, cervix-uterus, and rectum-sigmoid, were used for this study. A rigid transformation was used to align the bony anatomy in the MRIs. The proposed SW+VF method starts by automatically segmenting features in the area surrounding the delineated organs. Then, each organ and feature pair is registered independently using a feature-based nonrigid registration algorithm developed in-house. Additionally, a background transformation is calculated to account for areas far from all organs and features. In order to obtain one transformation that can be used for dose accumulation, the organ-based, feature-based, and the background transformations are combined into one vector field using a weighted sum, where the contribution of each transformation can be directly controlled by its extent of influence (scope size). The optimal scope sizes for organ-based and feature-based transformations were found by an exhaustive analysis. The anatomical correctness of the mapping was independently validated by measuring the residual distances after transformation for delineated structures inside the cervix-uterus (inner anatomical correctness), and for anatomical landmarks outside the organs in the surrounding region (outer anatomical correctness). The results of the proposed method were compared with the results of the rigid transformation and nonrigid registration of all structures together (AST). Results: The rigid transformation achieved a good global alignment (mean outer anatomical correctness of 4.3 mm) but failed to align the deformed organs (mean inner anatomical correctness of 22.4 mm). Conversely, the AST registration produced a reasonable alignment for the organs (6.3 mm) but not for the surrounding region (16.9 mm). SW+VF registration achieved the best results for both regions (3.5 and 3.4 mm for the inner and outer anatomical correctness, respectively). All differences were significant (p < 0.02, Wilcoxon rank sum test). Additionally, optimization of the scope sizes determined that the method was robust for a large range of scope size values. Conclusions: The novel SW+VF method improved the mapping of large and complex deformations observed between EBRT and BT for cervical cancer patients. Future studies that quantify the mapping error in terms of dose errors are required to test the clinical applicability of dose accumulation by the SW+VF method.
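    The combination of structure-wise transformations into a single vector field can be sketched as a distance-weighted sum, with the scope size acting as the decay length of each structure's influence. The sketch below is a 2D simplification with Gaussian distance weighting; it is illustrative only and not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def combine_fields(fields, masks, scope_sizes, background_field):
    """Weighted combination of structure-wise displacement fields.

    fields           : list of (H, W, 2) displacement fields, one per structure
    masks            : list of (H, W) boolean masks of the structures
    scope_sizes      : list of decay lengths (in voxels) controlling each field's reach
    background_field : (H, W, 2) field used far from all structures
    """
    # weight = 1 inside each structure, decaying with distance outside it
    weights = [np.exp(-(ndimage.distance_transform_edt(~m) / s) ** 2)
               for m, s in zip(masks, scope_sizes)]
    w_bg = np.clip(1.0 - np.sum(weights, axis=0), 0.0, 1.0)
    num = w_bg[..., None] * background_field
    den = w_bg.copy()
    for wgt, f in zip(weights, fields):
        num += wgt[..., None] * f
        den += wgt
    return num / den[..., None]
```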

  2. Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz

    2014-03-01

    The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.

  3. Automatic tracking of arbitrarily shaped implanted markers in kilovoltage projection images: A feasibility study

    PubMed Central

    Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie; Zhang, Pengpeng; Pham, Hai; Xiong, Jianping; Yorke, Ellen D.; Goodman, Karyn A.; Rimner, Andreas; Mostafavi, Hassan; Mageras, Gig S.

    2014-01-01

    Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined shape of markers when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used a normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed using a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment that can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in anthropomorphic phantom were carried out, one using a gold cylindrical marker representing regular shape, another using a Visicoil marker representing irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed to known marker positions in phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. Positional discrepancy between automatic and manual matching of less than 2 mm was assumed as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped markers during 11 volumetric modulated arc treatments. Purpose-built software developed at our institution was used to create marker templates and track the markers embedded in kV images. Results: Phantom studies showed mean ± standard deviation measurement uncertainty of automatic registration to be 0.14 ± 0.07 mm and 0.17 ± 0.08 mm for Visicoil and gold cylindrical markers, respectively. The mean success rate of automatic tracking with CBCT projections (11 frames per second, fps) of pancreas, gastroesophageal junction, and lung cancer patients was 100%, 99.1% (range 98%–100%), and 100%, respectively. With intrafraction images (approx. 0.2 fps) of lung cancer patients, the success rate was 98.2% (range 97%–100%), and 94.3% (range 93%–97%) using templates from 1.25 mm and 2.5 mm slice spacing CT scans, respectively. Correction of intermarker relative position was found to improve the success rate in two out of eight patients analyzed. Conclusions: The proposed method can track arbitrary marker shapes in kV images using templates generated from a breath-hold CT acquired at simulation. The studies indicate its feasibility for tracking tumor motion during rotational treatment. Investigation of the causes of misregistration suggests that its rate of incidence can be reduced with higher frequency of image acquisition, templates made from smaller CT slice spacing, and correction of changes in intermarker relative positions when they occur. PMID:24989384
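    The template-matching core (normalized cross-correlation optimized with a simplex search) can be sketched as below for a 2D sub-pixel shift; the Sobel enhancement, template generation from CT, and 3D offset correction described in the paper are omitted, and the marker and image are synthetic.

```python
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_marker(image, template, x0=(0.0, 0.0)):
    """Find the sub-pixel (row, col) shift maximizing NCC via simplex minimization."""
    def cost(shift):
        shifted = ndimage.shift(image, shift, order=1, mode="nearest")
        window = shifted[:template.shape[0], :template.shape[1]]
        return -ncc(window, template)
    simplex = np.array([x0, (x0[0] + 2.0, x0[1]), (x0[0], x0[1] + 2.0)])
    res = optimize.minimize(cost, x0, method="Nelder-Mead",
                            options={"initial_simplex": simplex})
    return res.x, -res.fun

# toy example: a bright "marker" template sought in a noisy projection image
rng = np.random.default_rng(0)
template = np.zeros((15, 15)); template[5:10, 6:9] = 1.0
image = rng.normal(0, 0.05, (15, 15))
image[3:8, 4:7] += 1.0   # marker sits 2 px up/left of its template position
shift, score = track_marker(image, template)
print(np.round(shift, 1), round(score, 3))   # recovered shift should be near (2, 2)
```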

  4. Automatic tracking of arbitrarily shaped implanted markers in kilovoltage projection images: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie

    Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined shape of markers when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used a normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed using a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment that can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in anthropomorphic phantom were carried out, one using a gold cylindrical marker representing regular shape, another using a Visicoil marker representing irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed to known marker positions in phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. Positional discrepancy between automatic and manual matching of less than 2 mm was assumed as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped markers during 11 volumetric modulated arc treatments. Purpose-built software developed at our institution was used to create marker templates and track the markers embedded in kV images. Results: Phantom studies showed mean ± standard deviation measurement uncertainty of automatic registration to be 0.14 ± 0.07 mm and 0.17 ± 0.08 mm for Visicoil and gold cylindrical markers, respectively. The mean success rate of automatic tracking with CBCT projections (11 frames per second, fps) of pancreas, gastroesophageal junction, and lung cancer patients was 100%, 99.1% (range 98%–100%), and 100%, respectively. With intrafraction images (approx. 0.2 fps) of lung cancer patients, the success rate was 98.2% (range 97%–100%), and 94.3% (range 93%–97%) using templates from 1.25 mm and 2.5 mm slice spacing CT scans, respectively. Correction of intermarker relative position was found to improve the success rate in two out of eight patients analyzed. Conclusions: The proposed method can track arbitrary marker shapes in kV images using templates generated from a breath-hold CT acquired at simulation. The studies indicate its feasibility for tracking tumor motion during rotational treatment. Investigation of the causes of misregistration suggests that its rate of incidence can be reduced with higher frequency of image acquisition, templates made from smaller CT slice spacing, and correction of changes in intermarker relative positions when they occur.

  5. [Preliminary application of an improved Demons deformable registration algorithm in tumor radiotherapy].

    PubMed

    Zhou, Lu; Zhen, Xin; Lu, Wenting; Dou, Jianhong; Zhou, Linghong

    2012-01-01

    To validate the efficiency of an improved Demons deformable registration algorithm and evaluate its application to registration of the treatment image and the planning image in image-guided radiotherapy (IGRT). Based on Brox's gradient constancy assumption and Malis's efficient second-order minimization algorithm, a grey-value gradient similarity term was added to the original energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function, allowing automatic determination of the iteration number. The proposed algorithm was validated using mathematically deformed images, physically deformed phantom images, and clinical tumor images. Compared with the original additive Demons algorithm, the improved Demons algorithm achieved higher precision and a faster convergence speed. Because scanning conditions may differ between fractions, the density ranges of the treatment image and the planning image may also differ; the improved Demons algorithm can nevertheless provide fast and accurate registration for radiotherapy.
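    For orientation, the classic additive (Thirion) demons update that the improved algorithm extends can be sketched as follows; the grey-value gradient similarity term and the L-BFGS optimization described above are not included, and sign conventions vary between implementations.

```python
import numpy as np
from scipy import ndimage

def demons_step(fixed, moving_warped, field, sigma_fluid=1.0, eps=1e-9):
    """One classic additive demons update of a 2D displacement field.

    u += (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2), then Gaussian smoothing.
    """
    diff = moving_warped - fixed
    gy, gx = np.gradient(fixed)                 # gradients along rows and columns
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    field[..., 0] += diff * gy / denom
    field[..., 1] += diff * gx / denom
    for c in range(2):
        field[..., c] = ndimage.gaussian_filter(field[..., c], sigma_fluid)
    return field

# minimal usage inside an iterative loop (re-warping of the moving image omitted)
f = np.zeros((32, 32)); f[12:20, 12:20] = 1.0
m = np.zeros((32, 32)); m[10:18, 10:18] = 1.0
u = np.zeros((32, 32, 2))
for _ in range(20):
    u = demons_step(f, m, u)   # a full method would re-warp m by u at each pass
print(np.abs(u).max())
```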

  6. Registration of T2-weighted and diffusion-weighted MR images of the prostate: comparison between manual and landmark-based methods

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin

    2012-02-01

    Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was done based on six prostate landmarks in both T2w and DW images identified by a radiologist. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results by using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.

  7. SU-G-JeP2-08: Image-Guided Radiation Therapy Using Synthetic CTs in Brain Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, R.G.; Glide-Hurst, C.; Henry Ford Health System, Detroit, MI

    Purpose: Synthetic CTs (synCTs) are essential for MR-only treatment planning. However, the performance of synCT for IGRT must be carefully assessed. This work evaluated the accuracy of synCT and synCT-generated DRRs and determined their performance for IGRT in brain cancer radiation therapy. Methods: MR-SIM and CT-SIM images were acquired of a novel anthropomorphic phantom and a cohort of 12 patients. SynCTs were generated by combining an ultra-short echo time (UTE) sequence with other MRI datasets using voxel-based weighted summation. For the phantom, DRRs from synCT and CT were compared via bounding box and landmark analysis. Planar (MV/KV) and volumetric (CBCT) IGRT performance was evaluated across several platforms. In patients, retrospective analysis was conducted to register CBCTs (n=34) to synCTs and CTs using automated rigid registration in the treatment planning system using whole brain and local registration techniques. A semi-automatic registration program was developed and validated to rigidly register planar MV/KV images (n=37) to synCT and CT DRRs. Registration reproducibility was assessed and margin differences were characterized using the van Herk formalism. Results: Bounding box and landmark analysis of phantom synCT DRRs were within 1mm of CT DRRs. Absolute 2D/2D registration shift differences ranged from 0.0–0.7mm for phantom DRRs on all treatment platforms and 0.0–0.4mm for volumetric registrations. For patient planar registrations, mean shift differences were 0.4±0.5mm (range: −0.6–1.6mm), 0.0±0.5mm (range: −0.9–1.2mm), and 0.1±0.3mm (range: −0.7–0.6mm) for the superior-inferior (S-I), left-right (L-R), and anterior-posterior (A-P) axes, respectively. Mean shift differences in volumetric registrations were 0.6±0.4mm (range: −0.2–1.6mm), 0.2±0.4mm (range: −0.3–1.2mm), and 0.2±0.3mm (range: −0.2–1.2mm) for the S-I, L-R, and A-P axes, respectively. CT-SIM and synCT derived margins were within 0.3mm. Conclusion: DRRs generated via synCT agreed well with CT-SIM. Planar and volumetric registrations to synCT-derived targets were comparable to CT. This validation is the next step toward clinical implementation of MR-only planning for the brain. The submitting institution has research agreements with Philips Healthcare. Research sponsored by a Henry Ford Health System Internal Mentored Grant.

  8. Computed tomography lung iodine contrast mapping by image registration and subtraction

    NASA Astrophysics Data System (ADS)

    Goatman, Keith; Plakas, Costas; Schuijf, Joanne; Beveridge, Erin; Prokop, Mathias

    2014-03-01

    Pulmonary embolism (PE) is a relatively common and potentially life threatening disease, affecting around 600,000 people annually in the United States alone. Prompt treatment using anticoagulants is effective and saves lives, but unnecessary treatment risks life threatening haemorrhage. The specificity of any diagnostic test for PE is therefore as important as its sensitivity. Computed tomography (CT) angiography is routinely used to diagnose PE. However, there are concerns it may over-report the condition. Additional information about the severity of an occlusion can be obtained from an iodine contrast map that represents tissue perfusion. Such maps tend to be derived from dual-energy CT acquisitions. However, they may also be calculated by subtracting pre- and post-contrast CT scans. Indeed, there are technical advantages to such a subtraction approach, including better contrast-to-noise ratio for the same radiation dose, and bone suppression. However, subtraction relies on accurate image registration. This paper presents a framework for the automatic alignment of pre- and post-contrast lung volumes prior to subtraction. The registration accuracy is evaluated for seven subjects for whom pre- and post-contrast helical CT scans were acquired using a Toshiba Aquilion ONE scanner. One hundred corresponding points were annotated on the pre- and post-contrast scans, distributed throughout the lung volume. Surface-to-surface error distances were also calculated from lung segmentations. Prior to registration the mean Euclidean landmark alignment error was 2.57mm (range 1.43-4.34 mm), and following registration the mean error was 0.54mm (range 0.44-0.64 mm). The mean surface error distance was 1.89mm before registration and 0.47mm after registration. There was a commensurate reduction in visual artefacts following registration. In conclusion, a framework for pre- and post-contrast lung registration has been developed that is sufficiently accurate for lung subtraction iodine mapping.

  9. A Remote Registration Based on MIDAS

    NASA Astrophysics Data System (ADS)

    JIN, Xin

    2017-04-01

    Software registration is often needed to protect the interests of software developers. This article describes a remote software registration technique. The registration method works as follows: the registration information is stored in a database table; when the program starts, it checks the table for registration information, and if the software is registered the program runs normally. Otherwise, the user must enter a serial number, which is registered over the network with the remote server. If registration succeeds, the registration information is recorded in the database table. This remote registration method can protect the rights of software developers.
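    The described start-up flow can be sketched in a few lines; the database layout, serial-number check, and remote validation below are invented placeholders (the article gives no code), with the remote server call mocked by a simple callback.

```python
import sqlite3

def is_registered(db_path="app.db"):
    """Return True if a registration record exists in the local database table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS registration (serial TEXT, registered_at TEXT)")
    row = con.execute("SELECT serial FROM registration LIMIT 1").fetchone()
    con.close()
    return row is not None

def register(serial, validate_remotely, db_path="app.db"):
    """Validate the serial number on the remote server and persist it locally."""
    if not validate_remotely(serial):          # network call to the vendor's server
        return False
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS registration (serial TEXT, registered_at TEXT)")
    con.execute("INSERT INTO registration VALUES (?, datetime('now'))", (serial,))
    con.commit(); con.close()
    return True

if __name__ == "__main__":
    if not is_registered():
        serial = input("Enter serial number: ")
        # stand-in for the remote validation service described in the article
        ok = register(serial, validate_remotely=lambda s: len(s) == 16)
        print("registered" if ok else "invalid serial number")
```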

  10. The Insight ToolKit image registration framework

    PubMed Central

    Avants, Brian B.; Tustison, Nicholas J.; Stauffer, Michael; Song, Gang; Wu, Baohua; Gee, James C.

    2014-01-01

    Publicly available scientific resources help establish evaluation standards, provide a platform for teaching, and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. ITK4 supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains step sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary, allowing the researcher to more easily focus on the design and comparison of registration strategies. In total, the ITK4 contribution is intended as a structure to support reproducible research practices, to provide a more extensive foundation against which to evaluate new work in image registration, and to give application-level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling. PMID:24817849
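    The ITK v4 framework itself is C++, but its registration concepts, including the automatic parameter scaling mentioned above, are exposed through the SimpleITK Python wrapping. The fragment below is a hedged sketch of a typical rigid, mutual-information registration in that wrapping; treat the parameter values as illustrative defaults rather than recommendations.

```python
import SimpleITK as sitk

def register_rigid(fixed, moving):
    """Mutual-information rigid registration with automatic parameter scaling."""
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    # analogous to the automatic scaling across rotations vs. translations
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(fixed, moving)

# usage (file names are placeholders)
# fixed  = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
# moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)
# transform = register_rigid(fixed, moving)
# resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```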

  11. A Statistically Representative Atlas for Mapping Neuronal Circuits in the Drosophila Adult Brain

    PubMed Central

    Arganda-Carreras, Ignacio; Manoliu, Tudor; Mazuras, Nicolas; Schulze, Florian; Iglesias, Juan E.; Bühler, Katja; Jenett, Arnim; Rouyer, François; Andrey, Philippe

    2018-01-01

    Imaging the expression patterns of reporter constructs is a powerful tool to dissect the neuronal circuits of perception and behavior in the adult brain of Drosophila, one of the major models for studying brain functions. To date, several Drosophila brain templates and digital atlases have been built to automatically analyze and compare collections of expression pattern images. However, there has been no systematic comparison of performance between alternative atlasing strategies and registration algorithms. Here, we objectively evaluated the performance of different strategies for building adult Drosophila brain templates and atlases. In addition, we used state-of-the-art registration algorithms to generate a new group-wise inter-sex atlas. Our results highlight the benefit of statistical atlases over individual ones and show that the newly proposed inter-sex atlas outperformed existing solutions for automated registration and annotation of expression patterns. Over 3,000 images from the Janelia Farm FlyLight collection were registered using the proposed strategy. These registered expression patterns can be searched and compared with a new version of the BrainBaseWeb system and BrainGazer software. We illustrate the validity of our methodology and brain atlas with registration-based predictions of expression patterns in a subset of clock neurons. The described registration framework should benefit brain studies in Drosophila and other insect species. PMID:29628885

  12. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
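    The coarse initialization via a 3D Hotelling transform amounts to aligning the centroids and principal axes of the two point sets before the fine feature-based optimization. The sketch below shows that idea in its simplest form and ignores the axis ordering and sign ambiguities that a practical implementation must handle.

```python
import numpy as np

def hotelling_initialize(source, target):
    """Coarse rigid initialization: align centroids and principal axes (PCA).

    source, target : (N, 3) and (M, 3) point clouds
    Returns (R, t) such that source @ R.T + t is roughly aligned with target.
    Axis sign/ordering ambiguities are ignored in this simplified sketch.
    """
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    _, _, Vs = np.linalg.svd(source - cs, full_matrices=False)   # rows: source axes
    _, _, Vt = np.linalg.svd(target - ct, full_matrices=False)   # rows: target axes
    R = Vt.T @ Vs                     # rotate source axes onto target axes
    if np.linalg.det(R) < 0:          # keep a proper rotation (no reflection)
        Vs[-1] *= -1
        R = Vt.T @ Vs
    t = ct - R @ cs
    return R, t
```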

  13. Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    PubMed Central

    Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang

    2015-01-01

    Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired from birth to 1-yr-old. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, for guiding the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with potentially a large age gap, the corresponding images patches between each new image and its respective training images with similar age are identified. Finally, the registration between the two new images can be assisted by the learned growth trajectories from one time point to another time point that have been established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method is used to align 24 infant subjects at five different time points (2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old). Compared to the state-of-the-art methods, the proposed method demonstrated superior registration performance. Conclusions: The proposed method addresses the difficulties in the infant brain registration and produces better results compared to existing state-of-the-art registration methods. PMID:26133617

  14. Superiority of automatic remote monitoring compared with in-person evaluation for scheduled ICD follow-up in the TRUST trial - testing execution of the recommendations

    PubMed Central

    Varma, Niraj; Michalski, Justin; Stambler, Bruce; Pavri, Behzad B.

    2014-01-01

    Aims To test recommended implantable cardioverter defibrillator (ICD) follow-up methods by ‘in-person evaluations’ (IPE) vs. ‘remote Home Monitoring’ (HM). Methods and results ICD patients were randomized 2:1 to automatic HM or to Conventional monitoring, with follow-up checks scheduled at 3, 6, 9, 12, and 15 months post-implant. Conventional patients were evaluated with IPE only. Home Monitoring patients were assessed remotely only for 1 year between 3 and 15 month evaluations. Adherence to follow-up was measured. HM and Conventional patients were similar (age 63 years, 72% male, left ventricular ejection fraction 29%, primary prevention 73%, DDD 57%). Conventional management suffered greater patient attrition during the trial (20.1 vs. 14.2% in HM, P = 0.007). Three month follow-up occurred in 84% in both groups. There was 100% adherence (5 of 5 checks) in 47.3% Conventional vs. 59.7% HM (P < 0.001). Between 3 and 15 months, HM exhibited superior (2.2×) adherence to scheduled follow-up [incidence of failed follow up was 146 of 2421 (6.0%) in HM vs. 145 of 1098 (13.2%) in Conventional, P < 0.001] and punctuality. In HM (daily transmission success rate median 91%), transmission loss caused only 22 of 2275 (0.97%) failed HM evaluations between 3 and 15 months; others resulted from clinic oversight. Overall IPE failure rate in Conventional [193 of 1841 (10.5%) exceeded that in HM [97 of 1484 (6.5%), P < 0.001] by 62%, i.e. HM patients remained more loyal to IPE when this was mandated. Conclusion Automatic remote monitoring better preserves patient retention and adherence to scheduled follow-up compared with IPE. Clinical trial registration NCT00336284. PMID:24595864

  15. Hemodynamic consequences of LPA stenosis in single ventricle stage 2 LPN circulation with automatic registration

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele E.; Kung, Ethan O.; Dorfman, Adam L.; Hsia, Tain-Yen; Baretta, Alessia; Arbia, Gregory; Marsden, Alison L.

    2013-11-01

    Congenital heart diseases such as hypoplastic left heart syndrome annually affect about 3% of births in the US alone. Surgical palliation of single ventricle patients is performed in stages. Following the stage 2 surgical procedure, or as a result of other preexisting conditions, a stenosis of the left pulmonary artery (LPA) is often observed, raising the clinical question of whether or not it should be treated. The severity of stenoses is commonly assessed through geometric inspection or catheter in-vivo pressure measurements, with limited quantitative information about patient-specific physiology. The present study uses a multiscale CFD approach to provide an assessment of the severity of LPA stenoses. A lumped parameter 0D model is used to simulate stage 2 circulation, and parameters are automatically identified accounting for uncertainty in the clinical data available for a cohort of patients. The importance of the latter parameters, whether alone or in groups, is also ranked using forward uncertainty propagation methods. Various stenosis levels are applied to the three-dimensional SVC-PA junction model using a dual mesh-morphing approach. Traditional assessment methodologies are compared with our findings and critically discussed.

  16. Real-time CT-video registration for continuous endoscopic guidance

    NASA Astrophysics Data System (ADS)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The methods proposed previously either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per second with minimal user intervention.
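
    The warm-started, frame-by-frame optimization described above can be sketched in a few lines. The sketch below is an assumption-laden toy: the "rendering" is just a translated synthetic image, normalized cross-correlation stands in for the paper's similarity measure, and Nelder-Mead replaces its gradient-based optimizer; only the idea of initializing each frame's registration from the previous optimum is illustrated.

        import numpy as np
        from scipy.ndimage import shift
        from scipy.optimize import minimize

        # Toy stand-in: the "CT rendering" is a fixed synthetic image translated by
        # the in-plane part of the pose; a real system would ray-cast an endoluminal
        # view of the airway tree at the given 6-DOF pose.
        rng = np.random.default_rng(0)
        reference = rng.random((64, 64))

        def render_view(pose):
            tx, ty = pose
            return shift(reference, (ty, tx), order=1, mode="nearest")

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float((a * b).mean())

        def register_frame(frame, init_pose):
            cost = lambda p: -ncc(render_view(p), frame)
            return minimize(cost, init_pose, method="Nelder-Mead").x

        def track(frames, init_pose):
            pose = np.asarray(init_pose, float)
            for frame in frames:              # warm-start from the previous optimum
                pose = register_frame(frame, pose)
                yield pose

        # Simulated video: the "camera" drifts a fraction of a pixel per frame.
        frames = [shift(reference, (0.2 * k, 0.1 * k), order=1, mode="nearest")
                  for k in range(5)]
        for k, pose in enumerate(track(frames, [0.0, 0.0])):
            print(k, np.round(pose, 2))       # pose ~ (0.1*k, 0.2*k)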

  17. Toward the development of intrafraction tumor deformation tracking using a dynamic multi-leaf collimator

    PubMed Central

    Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien; Booth, Jeremy T.; Keall, Paul J.

    2014-01-01

    Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real-time tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. An in-house deformable image registration software was developed to automatically trigger the registration once a new target image was acquired and to send the computed deformation to the DMLC tracking software. Because the registration speed was not fast enough to run the experiment in a real-time manner, the phantom deformation proceeded to the next position only after registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the ideal aperture. The individual contributions of the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. A clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with deformation tracking enabled was simulated and demonstrated. Results: The first experimental investigation of adapting to tumor deformation has been performed using simple deformable phantoms. For the single tumor deformation, the Au+Ao was reduced by over 56% when deformation was larger than 2 mm. Overall, the total improvement was 82%. For the tumor system deformation, the Au+Ao reductions were all above 75% and the total Au+Ao improvement was 86%. Similar coverage improvement was also found when simulating deformation tracking during IMRT delivery. The deformable image registration algorithm was identified as the dominant contributor to the tracking error, rather than the finite leaf width. The discrepancy between the warped beam shape and the ideal beam shape due to the deformable registration was observed to be partially compensated during leaf fitting due to the finite leaf width. The clinical proof-of-principle experiment demonstrated the feasibility of intrafraction deformable tracking for clinical scenarios. Conclusions: For the first time, we developed and demonstrated an experimental system that is capable of adapting the MLC aperture to account for tumor deformation. This work provides a potentially widely available management method to effectively account for intrafractional tumor deformation. This proof-of-principle study is the first experimental step toward the development of an image-guided radiotherapy system to treat deforming tumors in real time. PMID:24877798
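
    The geometric target coverage metric described above (Au+Ao, the sum of the areas incorrectly outside and inside the ideal aperture) can be computed directly from binary aperture masks. The sketch below is a minimal illustration under the assumption that both apertures are available as boolean pixel masks with a known pixel area; it is not the authors' implementation.

        import numpy as np

        def coverage_error(delivered_mask, ideal_mask, pixel_area_mm2=1.0):
            """Au + Ao: area of the ideal aperture not covered by the delivered
            aperture, plus area of the delivered aperture outside the ideal one,
            from two boolean beam-aperture masks."""
            a_u = np.logical_and(ideal_mask, ~delivered_mask).sum()
            a_o = np.logical_and(delivered_mask, ~ideal_mask).sum()
            return (a_u + a_o) * pixel_area_mm2

        # Illustrative example: a delivered aperture shifted 2 px from the ideal one.
        ideal = np.zeros((100, 100), dtype=bool)
        ideal[30:70, 30:70] = True
        delivered = np.roll(ideal, 2, axis=1)
        print(coverage_error(delivered, ideal))   # -> 160.0 (mm^2 at 1 mm^2/pixel)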

  18. Toward the development of intrafraction tumor deformation tracking using a dynamic multi-leaf collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien

    Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real-time tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. An in-house deformable image registration software was developed to automatically trigger the registration once a new target image was acquired and to send the computed deformation to the DMLC tracking software. Because the registration speed was not fast enough to run the experiment in a real-time manner, the phantom deformation proceeded to the next position only after registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the ideal aperture. The individual contributions of the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. A clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with deformation tracking enabled was simulated and demonstrated. Results: The first experimental investigation of adapting to tumor deformation has been performed using simple deformable phantoms. For the single tumor deformation, the Au+Ao was reduced by over 56% when deformation was larger than 2 mm. Overall, the total improvement was 82%. For the tumor system deformation, the Au+Ao reductions were all above 75% and the total Au+Ao improvement was 86%. Similar coverage improvement was also found when simulating deformation tracking during IMRT delivery. The deformable image registration algorithm was identified as the dominant contributor to the tracking error, rather than the finite leaf width. The discrepancy between the warped beam shape and the ideal beam shape due to the deformable registration was observed to be partially compensated during leaf fitting due to the finite leaf width. The clinical proof-of-principle experiment demonstrated the feasibility of intrafraction deformable tracking for clinical scenarios. Conclusions: For the first time, we developed and demonstrated an experimental system that is capable of adapting the MLC aperture to account for tumor deformation. This work provides a potentially widely available management method to effectively account for intrafractional tumor deformation. This proof-of-principle study is the first experimental step toward the development of an image-guided radiotherapy system to treat deforming tumors in real time.

  19. Learning-based deformable image registration for infant MR images in the first year of life.

    PubMed

    Hu, Shunbo; Wei, Lifang; Gao, Yaozong; Guo, Yanrong; Wu, Guorong; Shen, Dinggang

    2017-01-01

    Many brain development studies have been devoted to investigating dynamic structural and functional changes in the first year of life. To quantitatively measure brain development in such a dynamic period, accurate image registration for different infant subjects with a possibly large age gap is in high demand. Although many state-of-the-art image registration methods have been proposed for young and elderly brain images, very few registration methods work for infant brain images acquired in the first year of life, because of (a) large anatomical changes due to fast brain development and (b) dynamic appearance changes due to white-matter myelination. To address these two difficulties, we propose a learning-based registration method to not only align the anatomical structures but also alleviate the appearance differences between two arbitrary infant MR images (with a large age gap) by leveraging a regression forest to predict both the initial displacement vector and the appearance changes. Specifically, in the training stage, two regression models are trained separately, with (a) one model learning the relationship between local image appearance (of one development phase) and its displacement toward the template (of another development phase) and (b) the other model learning the local appearance changes between the two brain development phases. Then, in the testing stage, to register a new infant image to the template, we first predict both its voxel-wise displacement and appearance changes using the two learned regression models. Since these initializations alleviate the significant appearance and shape differences between the new infant image and the template, a conventional registration method can then be used to refine the remaining alignment. We apply the proposed registration method to align 24 infant subjects at five different time points (i.e., 2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old), and achieve more accurate and robust registration results compared to state-of-the-art registration methods. The proposed learning-based registration method addresses the challenging task of registering infant brain images and achieves higher registration accuracy than other counterpart registration methods. © 2016 American Association of Physicists in Medicine.
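
    A minimal sketch of the regression-forest initialization described above is given below: a random forest maps local patch appearance to an initial displacement vector toward the template. The patch features, displacement targets, and sizes are synthetic placeholders, and the appearance-prediction model and the subsequent conventional refinement registration are omitted.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Toy training data: each row is a flattened 5x5x5 intensity patch from one
        # development phase; the targets are known displacement vectors (mm) of the
        # patch centre toward the template of another phase (e.g. obtained from
        # existing longitudinal registrations).
        patches = rng.random((500, 5 * 5 * 5))
        displacements = rng.normal(0.0, 2.0, (500, 3))

        disp_model = RandomForestRegressor(n_estimators=100, random_state=0)
        disp_model.fit(patches, displacements)    # multi-output regression (dx, dy, dz)

        # At test time, predict an initial displacement for each patch of a new image;
        # a conventional registration would then only refine the residual deformation.
        new_patches = rng.random((10, 5 * 5 * 5))
        init_disp = disp_model.predict(new_patches)
        print(init_disp.shape)                    # (10, 3)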

  20. Coronary artery analysis: Computer-assisted selection of best-quality segments in multiple-phase coronary CT angiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-

    Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees. An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment quality. The performance of our automated method was evaluated by comparing the automatically identified best-quality (AI-BQ) segments to those selected by the observers. Results: For the 20 test cases, 254 groups of corresponding vessel segments were identified after multiple-phase registration and recursive matching. The AI-BQ segments agreed with the radiologist’s top 2 ranked segments in 78.3% of the 254 groups (Cohen’s kappa 0.60), and with the 4 nonradiologist observers in 76.8%, 84.3%, 83.9%, and 85.8% of the 254 groups. In addition, 89.4% of the AI-BQ segments agreed with at least two observers’ top 2 rankings, and 96.5% agreed with at least one observer’s top 2 rankings. In comparison, agreement between the four observers’ top ranked segment and the radiologist’s top 2 ranked segments was 79.9%, 80.7%, 82.3%, and 76.8%, respectively, with kappa values ranging from 0.56 to 0.68. Conclusions: The performance of our automated method for selecting the best-quality coronary segments from a multiple-phase cCTA acquisition was comparable to the selection made by human observers. This study demonstrates the potential usefulness of the automated method in clinical practice, enabling interpreting physicians to fully utilize the best available information in cCTA for diagnosis of coronary disease without manually searching through the multiple phases, and minimizing the variability in image phase selection for evaluation of coronary artery segments across human readers with varying expertise.
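
    The weighted voting ensemble described above can be sketched as follows: within a group of corresponding segments, each quality indicator votes for the segment it ranks highest, the votes are combined with per-indicator weights, and the segment with the highest weighted vote wins. The feature values and weights below are made-up placeholders, not the trained ones.

        import numpy as np

        def best_quality_segment(quality_features, weights):
            """quality_features: (n_segments, n_indicators) array in which larger is
            better for every indicator; weights: per-indicator voting weights."""
            votes = np.zeros(quality_features.shape[0])
            for j, w in enumerate(weights):
                votes[np.argmax(quality_features[:, j])] += w   # indicator j's vote
            return int(np.argmax(votes)), votes

        # Illustrative group: 6 phases of one corresponding segment, 4 indicators.
        feats = np.array([[0.2, 0.5, 0.4, 0.3],
                          [0.9, 0.4, 0.8, 0.7],
                          [0.3, 0.9, 0.2, 0.6],
                          [0.1, 0.2, 0.3, 0.2],
                          [0.8, 0.3, 0.9, 0.8],
                          [0.4, 0.1, 0.5, 0.4]])
        best, votes = best_quality_segment(feats, weights=[1.0, 0.5, 1.0, 1.5])
        print(best, votes)   # phase 4 wins the weighted vote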

  1. A prospective comparison between auto-registration and manual registration of real-time ultrasound with MR images for percutaneous ablation or biopsy of hepatic lesions.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Song, Kyoung Doo; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-06-01

    To compare the accuracy and required time for image fusion of real-time ultrasound (US) with pre-procedural magnetic resonance (MR) images between positioning auto-registration and manual registration for percutaneous radiofrequency ablation or biopsy of hepatic lesions. This prospective study was approved by the institutional review board, and all patients gave written informed consent. Twenty-two patients (male/female, n = 18/n = 4; age, 61.0 ± 7.7 years) who were referred for planning US to assess the feasibility of radiofrequency ablation (n = 21) or biopsy (n = 1) for focal hepatic lesions were included. One experienced radiologist performed the two types of image fusion methods in each patient. The performance of auto-registration and manual registration was evaluated. The accuracy of the two methods, based on measuring registration error, and the time required for image fusion for both methods were recorded using in-house software and respectively compared using the Wilcoxon signed rank test. Image fusion was successful in all patients. The registration error was not significantly different between the two methods (auto-registration: median, 3.75 mm; range, 1.0-15.8 mm vs. manual registration: median, 2.95 mm; range, 1.2-12.5 mm, p = 0.242). The time required for image fusion was significantly shorter with auto-registration than with manual registration (median, 28.5 s; range, 18-47 s, vs. median, 36.5 s; range, 14-105 s, p = 0.026). Positioning auto-registration showed promising results compared with manual registration, with similar accuracy and even shorter registration time.
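
    The paired comparison reported above (per-patient registration error and registration time under the two methods, compared with the Wilcoxon signed rank test) can be reproduced as sketched below. The numbers are made-up paired measurements for illustration, not the study's data.

        import numpy as np
        from scipy.stats import wilcoxon

        # Made-up paired per-patient registration times (s); not the study's data.
        auto_time = np.array([28, 25, 31, 22, 30, 27, 35, 29, 26, 33])
        manual_time = np.array([36, 40, 33, 45, 38, 30, 52, 41, 37, 48])

        stat, p = wilcoxon(auto_time, manual_time)   # paired, non-parametric
        print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")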

  2. Accuracy Considerations in Image-guided Cardiac Interventions: Experience and Lessons Learned

    PubMed Central

    Linte, Cristian A.; Lang, Pencilla; Rettmann, Maryam E.; Cho, Daniel S.; Holmes, David R.; Robb, Richard A.; Peters, Terry M.

    2014-01-01

    Motivation Medical imaging and its application in interventional guidance has revolutionized the development of minimally invasive surgical procedures, leading to reduced patient trauma, fewer risks, and shorter recovery times. However, a frequently posed question with regard to an image guidance system is “how accurate is it?” On one hand, the accuracy challenge can be posed in terms of the tolerable clinical error associated with the procedure; on the other hand, accuracy is bound by the limitations of the system’s components, including modeling, patient registration, and surgical instrument tracking, all of which ultimately impact the overall targeting capabilities of the system. Methods While these processes are not unique to any interventional specialty, this paper discusses them in the context of two different cardiac image-guidance platforms: a model-enhanced ultrasound platform for intracardiac interventions and a prototype system for advanced visualization in image-guided cardiac ablation therapy. Results Pre-operative modeling techniques involving manual, semi-automatic and registration-based segmentation are discussed. The performance and limitations of clinically feasible approaches for patient registration, evaluated both in the laboratory and in the operating room, are presented. Our experience with two different magnetic tracking systems for instrument and ultrasound transducer localization is reported. Ultimately, the overall accuracy of the systems is discussed based on both in vitro and preliminary in vivo experience. Conclusion While clinical accuracy is specific to a particular patient and procedure and vastly dependent on the surgeon’s experience, the system’s engineering limitations are critical to determine whether the clinical requirements can be met. PMID:21671097

  3. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    NASA Astrophysics Data System (ADS)

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets that feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface point matching. From the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR, and can accurately register HDR CT images as well as deform and add interfractional HDR doses.
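
    The final deformation in the scheme above is the Demons field combined with the initial B-spline-approximated field. One natural way to combine two dense displacement fields is composition, total(x) = init(x) + demons(x + init(x)); the 2D numpy sketch below illustrates that composition under the assumption that both fields are given in pixels on the same grid. The actual SPEED pipeline is 3D and may combine the fields differently.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def compose_dvf(init_dvf, demons_dvf):
            """Compose two dense 2D displacement fields of shape (2, H, W), in pixels:
            total(x) = init(x) + demons(x + init(x)). The Demons field is sampled at
            the positions already displaced by the initial field."""
            h, w = init_dvf.shape[1:]
            yy, xx = np.mgrid[0:h, 0:w].astype(float)
            warped = np.stack([yy + init_dvf[0], xx + init_dvf[1]])
            total = np.empty_like(init_dvf)
            for c in range(2):   # interpolate each displacement component
                total[c] = init_dvf[c] + map_coordinates(
                    demons_dvf[c], warped, order=1, mode="nearest")
            return total

        # Illustrative check: a 3-pixel shift followed by a 1-pixel shift composes to 4.
        init = np.zeros((2, 32, 32)); init[1] = 3.0      # 3 px along x
        demons = np.zeros((2, 32, 32)); demons[1] = 1.0  # 1 px along x
        print(compose_dvf(init, demons)[1].mean())        # -> 4.0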

  4. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images.

    PubMed

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K; Yashar, Catheryn M; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-07

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets that feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based 'thin-plate-spline robust point matching' algorithm is then employed for AR cavity surface point matching. From the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR, and can accurately register HDR CT images as well as deform and add interfractional HDR doses.

  5. Deformable registration of CT and cone-beam CT with local intensity matching.

    PubMed

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-07

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (the plane of source-detector rotation) are complete, off-mid-planes suffer varying degrees of information deficiency, and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice location, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice, in conjunction with intensity-based deformable registration. The correction and registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced an overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
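
    The slice-by-slice intensity correction step described above can be sketched with a standard histogram-matching routine, as below. This uses skimage's global per-slice histogram matching as a stand-in for the paper's local intensity matching, assumes the volumes are already coarsely aligned, and shows only the correction half of the alternating correction-registration loop.

        import numpy as np
        from skimage.exposure import match_histograms

        def correct_cbct_intensities(cbct, ct):
            """Match each axial CBCT slice's intensity histogram to the corresponding
            CT slice; both volumes are assumed coarsely aligned, shape (Z, Y, X)."""
            corrected = np.empty(cbct.shape, dtype=float)
            for z in range(cbct.shape[0]):
                corrected[z] = match_histograms(cbct[z].astype(float),
                                                ct[z].astype(float))
            return corrected

        # Toy volumes: the "CBCT" is the "CT" with a slice-dependent offset and scaling.
        rng = np.random.default_rng(0)
        ct = rng.normal(0.0, 50.0, (20, 64, 64))
        cbct = 0.7 * ct + np.linspace(-80.0, 80.0, 20)[:, None, None]
        residual = np.abs(correct_cbct_intensities(cbct, ct) - ct).mean()
        print(f"mean residual after per-slice matching: {residual:.3f}")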

  6. Deformable registration of CT and cone-beam CT with local intensity matching

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-01

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (the plane of source-detector rotation) are complete, off-mid-planes suffer varying degrees of information deficiency, and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice location, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice, in conjunction with intensity-based deformable registration. The correction and registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced an overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.

  7. Automatic classification of fluorescence and optical diffusion spectroscopy data in neuro-oncology

    NASA Astrophysics Data System (ADS)

    Savelieva, T. A.; Loshchenov, V. B.; Goryajnov, S. A.; Potapov, A. A.

    2018-04-01

    The complexity of spectroscopic analysis of biological tissue, due to the overlap of the absorption spectra of biological molecules, the multiple scattering effect, and the in vivo measurement geometry, motivates this work. In neurooncology, the problem of tumor boundary delineation is especially acute and requires the development of new methods of intraoperative diagnosis. Optical spectroscopy allows various diagnostically significant parameters to be detected non-invasively. 5-ALA-induced protoporphyrin IX is frequently used as a fluorescent tumor marker in neurooncology. At the same time, the concentration and oxygenation level of haemoglobin and the significant changes of light scattering in tumor tissues have high diagnostic value. This paper presents an original method for the simultaneous registration of backward diffuse reflectance and fluorescence spectra, which allows all the parameters listed above to be determined simultaneously. Clinical studies involving 47 patients with Grade II-IV intracranial glial tumors were carried out at the N.N. Burdenko National Medical Research Center of Neurosurgery. To register the spectral dependences, the LESA-01-BIOSPEC spectroscopic system was used with a specially developed w-shaped diagnostic fiber-optic probe. An original algorithm for combined spectroscopic signal processing was developed. We have created software and hardware which, compared with the methods currently used in neurosurgical practice, increased the sensitivity of intraoperative demarcation of intracranial tumors from 78% to 96% and the specificity from 60% to 82%. Analysis of different automatic classification techniques shows that, in our case, the most appropriate is the k-Nearest Neighbors algorithm with a cubic metric.
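
    A minimal sketch of the classification stage mentioned above, a k-Nearest Neighbors classifier with a cubic (Minkowski, p = 3) distance, is shown below. The per-measurement features and tissue labels are synthetic placeholders for the spectroscopy-derived parameters.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)

        # Synthetic placeholders for per-site features derived from the combined
        # spectra (e.g. PpIX fluorescence index, haemoglobin concentration,
        # oxygenation, scattering); labels: 0 = normal tissue, 1 = tumor.
        X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),
                       rng.normal(1.5, 1.0, (100, 4))])
        y = np.array([0] * 100 + [1] * 100)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        knn = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=3)  # cubic metric
        knn.fit(X_tr, y_tr)
        print(f"held-out accuracy: {knn.score(X_te, y_te):.2f}")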

  8. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    PubMed

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood-illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods; subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages; a coarse stage using cross-correlation followed by fine registration using two approaches; parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translations measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. the best 75% of images) for image registration gives improved resolution, at the expense of a poorer signal-to-noise ratio. The sharpness map of the registered and de-rotated images shows increased sharpness over most of the field of view. Adaptive optics assisted images of the cone photoreceptors can be better pre-processed using a wavelet approach. These images can be assessed for image quality using a 'Designer metric'. Two-stage image registration including correcting for rotation significantly improves the final image contrast and sharpness. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
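
    A minimal sketch of the fine-registration idea above (sub-pixel translation from parabolic interpolation on the cross-correlation peak) is shown below; the smooth test image and the 0.3-pixel shift are illustrative, and the maximum-likelihood alternative is not shown.

        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from scipy.signal import fftconvolve

        def parabolic_offset(ym1, y0, yp1):
            """Sub-pixel offset of the parabola fitted through three samples
            centred on the integer correlation peak."""
            denom = ym1 - 2.0 * y0 + yp1
            return 0.0 if denom == 0 else 0.5 * (ym1 - yp1) / denom

        def subpixel_shift(a, b):
            """Estimate the (row, col) translation of image b relative to image a:
            coarse integer cross-correlation peak, refined by parabolic
            interpolation along each axis."""
            a = a - a.mean(); b = b - b.mean()
            cc = fftconvolve(b, a[::-1, ::-1], mode="same")   # cross-correlation
            r, c = np.unravel_index(np.argmax(cc), cc.shape)
            dr = parabolic_offset(cc[r - 1, c], cc[r, c], cc[r + 1, c])
            dc = parabolic_offset(cc[r, c - 1], cc[r, c], cc[r, c + 1])
            centre = np.array(cc.shape) // 2
            return r + dr - centre[0], c + dc - centre[1]

        # Illustrative check: a smooth test image shifted by 0.3 pixels along rows.
        yy, xx = np.mgrid[0:64, 0:64]
        img = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / 50.0)
        print(subpixel_shift(img, nd_shift(img, (0.3, 0.0), order=3)))  # ~ (0.3, 0.0)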

  9. TU-AB-BRA-12: Impact of Image Registration Algorithms On the Prediction of Pathological Response with Radiomic Textures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Coroller, T; Niu, N

    2015-06-15

    Purpose: Tumor regions-of-interest (ROI) can be propagated from the pre- onto the post-treatment PET/CT images using image registration of their CT counterparts, providing an automatic way to compute texture features on longitudinal scans. This exploratory study assessed the impact of image registration algorithms on textures used to predict pathological response. Methods: Forty-six esophageal cancer patients (1 tumor/patient) underwent PET/CT scans before and after chemoradiotherapy. Patients were classified into responders and non-responders after the surgery. Physician-defined tumor ROIs on pre-treatment PET were propagated onto the post-treatment PET using rigid and ten deformable registration algorithms. One co-occurrence, two run-length and size zone matrix textures were computed within all ROIs. The relative difference of each texture at the different treatment time-points was used to predict the pathologic responders. Their predictive value was assessed using the area under the receiver-operating-characteristic curve (AUC). Propagated ROIs and texture quantifications resulting from different algorithms were compared using overlap volume (OV) and coefficient of variation (CoV), respectively. Results: Tumor volumes were better captured by ROIs propagated by deformable rather than rigid registration. The OV between rigidly and deformably propagated ROIs was 69%. The deformably propagated ROIs were found to be similar (OV∼80%) except for fast-demons (OV∼60%). Rigidly propagated ROIs with run-length matrix textures failed to significantly differentiate between responders and non-responders (AUC=0.65, p=0.07), while the differentiation was significant with other textures (AUC=0.69–0.72, p<0.03). Among the deformable algorithms, fast-demons was the least predictive (AUC=0.68–0.71, p<0.04). ROIs propagated by all other deformable algorithms with any texture significantly predicted pathologic responders (AUC=0.71–0.78, p<0.01) despite substantial variation in texture quantification (CoV>70%). Conclusion: Propagated ROIs using deformable registration for all textures can lead to accurate prediction of pathologic response, potentially expediting the temporal texture analysis process. However, rigid and fast-demons deformable algorithms are not recommended due to their inferior performance compared to other algorithms. The project was supported in part by a Kaye Scholar Award.

  10. How Would Children Register Their Own Births? Insights from a Survey of Students Regarding Birth Registration Knowledge and Policy Suggestions in Kenya

    PubMed Central

    Pelowski, Matthew; Wamai, Richard G.; Wangombe, Joseph; Nyakundi, Hellen; Oduwo, Geofrey O.; Ngugi, Benjamin K.; Ogembo, Javier G.

    2016-01-01

    Birth registration and obtaining physical birth certificates pose major challenges in developing countries, with impact on child and community health, education, planning, and all levels of development. However, despite initiatives, universal registration remains elusive, leading to calls for new approaches to understanding the decisions of parents. In this paper, we report the results of a survey of students in grades six to eight (age ~12–16) in an under-registered area of Kenya regarding their own understanding of registration issues and their suggestions for improvement. These students were selected because they themselves were also nearing the age for high school enrollment/entrance examinations, which specifically requires possession of a birth certificate. This assessment was also a companion to our previous representative survey of adults in the same Kenyan region, allowing for parent-child comparison. Results supported previous research, showing that only 43% had birth certificates. At the same time, despite these low totals, students were themselves quite aware of registration factors and purposes. The students also proved to be quite prescient sources for understanding their households’ motivations, with many of their suggestions—for a focus on communicating pragmatic benefits, or for automatic measures shifting responsibility away from parents—mirroring our own previous suggestions, and showing a level of pragmatism not witnessed when surveying their parents. This paper therefore adds evidence to the discussion of registration policy planning. More generally, it also builds on an important trend of treating children as stakeholders and important sources of information, and raises an intriguing new avenue for future research. PMID:26939000

  11. Automatic sorting of toxicological information into the IUCLID (International Uniform Chemical Information Database) endpoint-categories making use of the semantic search engine Go3R.

    PubMed

    Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert

    2014-06-01

    The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. The design of a microscopic system for typical fluorescent in-situ hybridization applications

    NASA Astrophysics Data System (ADS)

    Yi, Dingrong; Xie, Shaochuan

    2013-12-01

    Fluorescence in situ hybridization (FISH) is a modern molecular biology technique used for the detection of genetic abnormalities in terms of the number and structure of chromosomes and genes. The FISH technique is typically employed for prenatal diagnosis of congenital dementia in the Obstetrics and Gynecology department. It is also routinely used to identify breast cancer patients who qualify for HER2-targeted therapy, which is known to be highly effective in this group. During the microscopic observation phase, the technician counts the green probe dots and red probe dots contained in a single nucleus and calculates their ratio; this procedure typically needs to be done for hundreds of nuclei. Successful implementation of FISH tests critically depends on a suitable fluorescence microscope, which is primarily imported from overseas because the complexity of such a system exceeds the maturity of the domestic optoelectronic industry. This paper first reviews the typical requirements of a fluorescence microscope suitable for FISH applications. The focus of the paper is the system design and computational methods of an automatic fluorescence microscope with high-magnification APO objectives, a fast-spinning automatic filter wheel, an automatic shutter, a cooled CCD camera used as the photo-detector, and a software platform for image acquisition, registration, pseudo-color generation, multi-channel fusion and multi-focus fusion. Preliminary results from FISH experiments indicate that this system satisfies routine FISH microscopic observation tasks.

  13. Evaluation of oesophageal transit velocity using the improved Demons technique.

    PubMed

    De Souza, Michele N; Xavier, Fernando E B; Secaf, Marie; Troncon, Luiz E A; de Oliveira, Ricardo B; Moraes, Eder R

    2016-01-01

    This paper presents a novel method to compute oesophageal transit velocity in a direct and automated manner through the registration of scintigraphy images. A total of 36 images from nine healthy volunteers were processed. Four dynamic image series per volunteer were acquired after a minimum 8 h fast. Each acquisition was made following the ingestion of 5 ml saline labelled with about 26 MBq (700 µCi) technetium-99m phytate in a single swallow. Between the acquisitions, another two swallows of 5 ml saline were performed to clear the oesophagus. The composite acquired files were made of 240 frames of anterior and posterior views, each frame being the accumulated count over 250 ms. At the end of the acquisitions, the images were corrected for radioactive decay, the geometric mean was computed between the anterior and posterior views, and the registration of a set of subsequent images was performed. Utilizing the improved Demons technique, we obtained from the deformation field the regional resultant velocity, which is directly related to the oesophageal transit velocity. The mean regional resultant velocity decreases progressively from the proximal to the distal oesophageal portions and, at the proximal portion, is virtually identical to the typical velocity of the primary peristaltic pump. Comparison between this parameter and 'time-activity' curves reveals consistency in the velocities obtained using both methods for the proximal portion. Application of the improved Demons technique, as an easy and automated method to evaluate the velocity of oesophageal bolus transit, is feasible and seems to yield consistent data, particularly for the proximal oesophagus.

  14. Intensity-Based Registration for Lung Motion Estimation

    NASA Astrophysics Data System (ADS)

    Cao, Kunlin; Ding, Kai; Amelon, Ryan E.; Du, Kaifang; Reinhardt, Joseph M.; Raghavan, Madhavan L.; Christensen, Gary E.

    Image registration plays an important role within pulmonary image analysis. The task of registration is to find the spatial mapping that brings two images into alignment. Registration algorithms designed for matching 4D lung scans, or two 3D scans acquired at different inflation levels, can capture the temporal changes in position and shape of the region of interest. Accurate registration is critical to post-analysis of lung mechanics and motion estimation. In this chapter, we discuss lung-specific adaptations of intensity-based registration methods for 3D/4D lung images and review approaches for assessing registration accuracy. Then we introduce methods for estimating tissue motion and studying lung mechanics. Finally, we discuss methods for assessing and quantifying specific volume change, specific ventilation, strain/stretch information and lobar sliding.
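
    One of the post-registration quantities mentioned above, regional volume change, is commonly derived from the Jacobian determinant of the registration displacement field. The sketch below illustrates that calculation on a synthetic uniform expansion; the displacement field and voxel grid are illustrative, not taken from any particular lung registration.

        import numpy as np

        def jacobian_determinant(disp):
            """Voxel-wise Jacobian determinant of a 3D displacement field disp with
            shape (3, Z, Y, X), in voxel units. det(J) > 1 indicates local expansion,
            det(J) < 1 local contraction."""
            grads = [np.gradient(disp[i]) for i in range(3)]     # d u_i / d x_j
            jac = np.empty(disp.shape[1:] + (3, 3))
            for i in range(3):
                for j in range(3):
                    jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
            return np.linalg.det(jac)

        # Synthetic 5% uniform expansion: u(x) = 0.05 * x, so det(J) = 1.05**3.
        zz, yy, xx = np.mgrid[0:20, 0:20, 0:20].astype(float)
        disp = 0.05 * np.stack([zz, yy, xx])
        print(jacobian_determinant(disp).mean())   # ~ 1.158 (= 1.05**3)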

  15. Nonlocal Means Denoising of Self-Gated and k-Space Sorted 4-Dimensional Magnetic Resonance Imaging Using Block-Matching and 3-Dimensional Filtering: Implications for Pancreatic Tumor Registration and Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jun; McKenzie, Elizabeth; Fan, Zhaoyang

    Purpose: To denoise self-gated k-space sorted 4-dimensional magnetic resonance imaging (SG-KS-4D-MRI) by applying a nonlocal means denoising filter, block-matching and 3-dimensional filtering (BM3D), to test its impact on the accuracy of 4D image deformable registration and automated tumor segmentation for pancreatic cancer patients. Methods and Materials: Nine patients with pancreatic cancer and abdominal SG-KS-4D-MRI were included in the study. Block-matching and 3D filtering was adapted to search in the axial slices/frames adjacent to the reference image patch in the spatial and temporal domains. The patches with high similarity to the reference patch were used to collectively denoise the 4D-MRI image. The pancreas tumor was manually contoured on the first end-of-exhalation phase for both the raw and the denoised 4D-MRI. B-spline deformable registration was applied to the subsequent phases for contour propagation. The consistency of tumor volume defined by the standard deviation of gross tumor volumes from 10 breathing phases (σ-GTV), tumor motion trajectories in 3 cardinal motion planes, 4D-MRI imaging noise, and image contrast-to-noise ratio were compared between the raw and denoised groups. Results: Block-matching and 3D filtering visually and quantitatively reduced image noise by 52% and improved image contrast-to-noise ratio by 56%, without compromising soft tissue edge definitions. Automatic tumor segmentation is statistically more consistent on the denoised 4D-MRI (σ-GTV = 0.6 cm³) than on the raw 4D-MRI (σ-GTV = 0.8 cm³). Tumor end-of-exhalation location is also more reproducible on the denoised 4D-MRI than on the raw 4D-MRI in all 3 cardinal motion planes. Conclusions: Block-matching and 3D filtering can significantly reduce random image noise while maintaining structural features in the SG-KS-4D-MRI datasets. In this study of pancreatic tumor segmentation, automatic segmentation of GTV in the registered image sets is shown to be more consistent on the denoised 4D-MRI than on the raw 4D-MRI.
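
    The contrast-to-noise ratio comparison mentioned above can be computed from ROI statistics as sketched below. One common CNR definition is assumed (the abstract does not spell out its exact formula), and a Gaussian blur stands in for BM3D on a synthetic image.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def cnr(image, tumor_mask, background_mask):
            """|mean(tumor) - mean(background)| / std(background); one common CNR
            definition, possibly different from the study's exact variant."""
            t, b = image[tumor_mask], image[background_mask]
            return abs(t.mean() - b.mean()) / b.std()

        # Synthetic example: smoothing (a crude stand-in for BM3D) raises the CNR.
        rng = np.random.default_rng(0)
        img = rng.normal(100.0, 20.0, (64, 64))
        tumor = np.zeros((64, 64), dtype=bool)
        tumor[20:30, 20:30] = True
        img[tumor] += 40.0
        print(f"raw CNR: {cnr(img, tumor, ~tumor):.2f}")
        print(f"denoised CNR: {cnr(gaussian_filter(img, 2.0), tumor, ~tumor):.2f}")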

  16. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.

    PubMed

    Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang

    2018-06-01

    Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
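
    The Dice similarity coefficient reported above is the standard overlap measure between a predicted and a reference segmentation mask; a minimal sketch on two synthetic elliptical masks follows (the masks are illustrative, not prostate data).

        import numpy as np

        def dice_coefficient(pred_mask, gt_mask):
            """Dice similarity coefficient between two boolean masks:
            2 * |A intersect B| / (|A| + |B|)."""
            pred_mask = pred_mask.astype(bool)
            gt_mask = gt_mask.astype(bool)
            intersection = np.logical_and(pred_mask, gt_mask).sum()
            denom = pred_mask.sum() + gt_mask.sum()
            return 2.0 * intersection / denom if denom else 1.0

        # Illustrative example: two overlapping elliptical masks.
        yy, xx = np.mgrid[0:128, 0:128]
        gt = ((yy - 64) / 40.0) ** 2 + ((xx - 64) / 30.0) ** 2 <= 1.0
        pred = ((yy - 66) / 40.0) ** 2 + ((xx - 60) / 30.0) ** 2 <= 1.0
        print(f"Dice: {dice_coefficient(pred, gt):.3f}")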

  17. 3D ultrasound volume stitching using phase symmetry and harris corner detection for orthopaedic applications

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Hacihaliloglu, Ilker; Abugharbieh, Rafeef

    2010-03-01

    Stitching of volumes obtained from three-dimensional (3D) ultrasound (US) scanners improves visualization of anatomy in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central slices of these volumes. This is done by first removing artifacts in the US slices using intensity-invariant local phase image processing and then applying the Harris corner detection algorithm. Fast sub-volume registration on a small neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D US scans of phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been compared to volumetric registration, as well as feature-based registration using 3D-SIFT. Quantitative results show an average post-registration error of 0.33 mm, which is comparable to volumetric registration accuracy (0.31 mm) and much better than 3D-SIFT based registration, which failed to register the volumes. The proposed method was also much faster than volumetric registration (~4.5 seconds versus 83 seconds).

  18. Multi-template analysis of human perirhinal cortex in brain MRI: Explicitly accounting for anatomical variability

    PubMed Central

    Xie, Long; Pluta, John B.; Das, Sandhitsu R.; Wisse, Laura E.M.; Wang, Hongzhi; Mancuso, Lauren; Kliot, Dasha; Avants, Brian B.; Ding, Song-Lin; Manjón, José V.; Wolk, David A.; Yushkevich, Paul A.

    2016-01-01

    Rationale The human perirhinal cortex (PRC) plays critical roles in episodic and semantic memory and visual perception. The PRC consists of Brodmann areas 35 and 36 (BA35, BA36). In Alzheimer's disease (AD), BA35 is the first cortical site affected by neurofibrillary tangle pathology, which is closely linked to neural injury in AD. Large anatomical variability, manifested in the form of different cortical folding and branching patterns, makes it difficult to segment the PRC in MRI scans. Pathology studies have found that in ~97% of specimens, the PRC falls into one of three discrete anatomical variants. However, current methods for PRC segmentation and morphometry in MRI are based on single-template approaches, which may not be able to accurately model these discrete variants. Methods A multi-template analysis pipeline that explicitly accounts for anatomical variability is used to automatically label the PRC and measure its thickness in T2-weighted MRI scans. The pipeline uses multi-atlas segmentation to automatically label medial temporal lobe cortices including the entorhinal cortex, PRC and the parahippocampal cortex. Pairwise registration between label maps and clustering based on residual dissimilarity after registration are used to construct separate templates for the anatomical variants of the PRC. An optimal path of deformations linking these templates is used to establish correspondences between all the subjects. Experimental evaluation focuses on the ability of single-template and multi-template analyses to detect differences in the thickness of medial temporal lobe cortices between patients with amnestic mild cognitive impairment (aMCI, n=41) and age-matched controls (n=44). Results The proposed technique is able to generate templates that recover the three dominant discrete variants of the PRC and establish more meaningful correspondences between subjects than a single-template approach. The largest reduction in thickness associated with aMCI, in absolute terms, was found in left BA35 using both regional and summary thickness measures. Further, statistical maps of regional thickness difference between aMCI and controls revealed different patterns for the three anatomical variants. PMID:27702610

  19. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inoue, Minoru; Yoshimura, Michio, E-mail: myossy@kuhp.kyoto-u.ac.jp; Sato, Sayaka

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, the manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.
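
    A point-based rigid-body registration of fiducial (clip) positions can be sketched as below: the least-squares rotation and translation are obtained from the SVD (Kabsch) solution, and the residual gives the fiducial registration error. The clip coordinates, rotation, and noise level are synthetic placeholders; this is a generic sketch rather than the specific PRBR implementation evaluated in the study.

        import numpy as np

        def rigid_register(source, target):
            """Least-squares rigid-body (rotation + translation) alignment of
            corresponding 3D point sets via the SVD (Kabsch) solution."""
            sc, tc = source.mean(axis=0), target.mean(axis=0)
            u, _, vt = np.linalg.svd((source - sc).T @ (target - tc))
            d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
            rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            trans = tc - rot @ sc
            return rot, trans

        def fre(source, target, rot, trans):
            """Root-mean-square fiducial registration error after alignment."""
            residual = target - (source @ rot.T + trans)
            return np.sqrt((residual ** 2).sum(axis=1).mean())

        # Synthetic clip positions (mm): the daily positions are a rotated and
        # translated copy of the planning positions plus 1 mm localization noise.
        rng = np.random.default_rng(0)
        planning = rng.uniform(-30.0, 30.0, (5, 3))
        angle = np.deg2rad(3.0)
        rot_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                             [np.sin(angle),  np.cos(angle), 0.0],
                             [0.0, 0.0, 1.0]])
        daily = planning @ rot_true.T + np.array([2.0, -1.0, 0.5]) \
                + rng.normal(0.0, 1.0, (5, 3))
        rot, trans = rigid_register(planning, daily)
        print(f"FRE = {fre(planning, daily, rot, trans):.2f} mm")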

  20. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and template matching and final registration involving C-arm calibration were 36%, 73%, and 93%, respectively, while registration accuracy of 0.59 mm was the best after final registration. By compensating in-plane translation errors by initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially if C-arm calibration was performed simultaneously with the 3D–2D image registration. Conclusions: Because the tested methods perform simultaneous C-arm calibration and 3D–2D registration based solely on anatomical information, they have a high potential for automation and thus for an immediate integration into current interventional workflow. One of the authors’ main contributions is also comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.
