Sample records for software-based image fusion

  1. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    The purpose of video image fusion is to combine, by technical means, the video streams obtained by different image sensors so that they complement one another, yielding video that is rich in information and well suited to the human visual system. Infrared cameras penetrate harsh conditions such as smoke, fog and low light, but they capture image detail poorly and do not match the human visual system. Visible-light imaging alone can produce detailed, high-resolution images suited to the visual system, but it is easily degraded by the external environment. The algorithms involved in fusing infrared and visible video are complex and computationally demanding: they occupy considerable memory and place high demands on clock rate. They have mostly been implemented in software (e.g., C++ or C), with few implementations on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in software (MATLAB), and the gray-level weighted-average method is implemented on the hardware platform to perform the information fusion. The resulting fused image effectively improves information acquisition and increases the amount of information in the image.
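
    As a rough illustration of the gray-level weighted-average rule described above, the following minimal numpy sketch fuses two pre-registered frames; the function name, array names and fixed weight are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    def weighted_average_fusion(ir, vis, w_ir=0.5):
        """Pixel-wise gray-level weighted average of two registered frames.

        ir, vis : 2-D uint8 arrays on the same grid (registration parameters
                  would be estimated offline, e.g. in MATLAB as in the paper).
        w_ir    : weight of the infrared image; (1 - w_ir) goes to the
                  visible image. 0.5 is an arbitrary illustrative default.
        """
        if ir.shape != vis.shape:
            raise ValueError("images must be registered to the same grid")
        fused = w_ir * ir.astype(np.float32) + (1.0 - w_ir) * vis.astype(np.float32)
        return np.clip(fused, 0, 255).astype(np.uint8)

    # Stand-in frames; in practice these come from the IR and visible sensors.
    ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    vis = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    fused = weighted_average_fusion(ir, vis, w_ir=0.6)
    ```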

  2. Segment fusion of ToF-SIMS images.

    PubMed

    Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A

    2016-06-08

    The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the mechanism of uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
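
    One common reading of Munechika-style pixel fusion is a ratio pansharpening: the low-resolution band is upsampled and scaled by the ratio of a high-resolution band to its simulated low-resolution version. The sketch below follows that reading under stated assumptions; it is not the authors' in-house code, and all names are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def munechika_ratio_fusion(high_res, low_res, eps=1e-6):
        """Ratio-based pixel fusion (one common reading of the Munechika method).

        high_res : 2-D float array, high-spatial-resolution band
                   (e.g. a total-ion ToF-SIMS image).
        low_res  : 2-D float array, low-resolution chemical map to sharpen.
        """
        factor = (high_res.shape[0] / low_res.shape[0],
                  high_res.shape[1] / low_res.shape[1])
        up = zoom(low_res, factor, order=1)              # bilinear upsampling
        # Blur the high-res band to approximate the low-res point spread.
        degraded = gaussian_filter(high_res, sigma=factor[0])
        return up * high_res / (degraded + eps)
    ```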

  3. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where they exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, of great benefit to virtual face modeling.

  4. Accuracy and feasibility of three different methods for software-based image fusion in whole-body PET and CT.

    PubMed

    Putzer, Daniel; Henninger, Benjamin; Kovacs, Peter; Uprimny, Christian; Kendler, Dorota; Jaschke, Werner; Bale, Reto J

    2016-06-01

    Even though PET/CT provides valuable diagnostic information in a great number of clinical indications, the availability of hybrid PET/CT scanners is mainly limited to large clinical centers. Software-based image fusion can facilitate combined reading of CT and PET data sets when hardware image fusion is not available. To analyze the relevance of retrospective image fusion of separately acquired PET and CT data sets, we studied the accuracy, practicability and reproducibility of three different image registration techniques. We evaluated whole-body 18F-FDG-PET and CT data sets of 71 oncologic patients. Images were fused retrospectively using the Stealth Station System, Treon (Medtronic Inc., Louisville, CO, USA) equipped with Cranial4 software. External markers fixed to a vacuum mattress were used as the reference for exact repositioning. Registration was repeated using internal anatomic landmarks and Automerge software, and accuracy was assessed for all three methods by measuring the distance between the liver representations in CT and PET with reference to a common coordinate system. On the first measurement of image fusions with external markers, 53 were successful, 16 feasible and 2 not successful. Using anatomic landmarks, 42 were successful, 26 feasible and 3 not successful. Using Automerge software, only 13 were successful. The mean distance between center points in PET and CT was 7.69±4.96 mm on the first and 7.65±4.2 mm on the second measurement. Results with external markers correlate very well, and inaccuracies are significantly lower (P<0.001) than with anatomical landmarks (10.38±6.13 mm and 10.83±6.23 mm). Analysis also revealed significantly faster alignment using external markers (P<0.001). External fiducials combined with immobilization devices and breathing protocols allow highly accurate image fusion cost-effectively and in significantly less time, making this an attractive alternative for PET/CT interpretation when a hybrid scanner is not available.
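
    Marker-based registration of this kind reduces to a least-squares rigid fit between corresponding fiducial coordinates. A minimal sketch using the standard SVD (Kabsch/Horn) solution follows; function names and the example coordinates are illustrative, not taken from the study:

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (rotation R, translation t) mapping
        fiducial coordinates src -> dst (both (N, 3) arrays), via the
        SVD/Kabsch solution. Returns R, t and the residual per marker."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        # Correct an improper rotation (reflection) if one appears.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        residual = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        return R, t, residual

    # Illustrative marker coordinates (mm) digitised in PET and CT space.
    pet_markers = np.random.rand(6, 3) * 300
    ct_markers = pet_markers + np.array([5.0, -3.0, 2.0])  # pure shift
    R, t, res = rigid_fit(pet_markers, ct_markers)
    print("mean fiducial registration error: %.2f mm" % res.mean())
    ```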

  5. Diagnostic Value of Software-Based Image Fusion of Computed Tomography and F18-FDG PET Scans in Patients with Malignant Lymphoma

    PubMed Central

    Henninger, B.; Putzer, D.; Kendler, D.; Uprimny, C.; Virgolini, I.; Gunsilius, E.; Bale, R.

    2012-01-01

    Aim. The purpose of this study was to evaluate the accuracy of 2-deoxy-2-[fluorine-18]fluoro-D-glucose (FDG) positron emission tomography (PET), computed tomography (CT), and software-based image fusion of both modalities in the imaging of non-Hodgkin's lymphoma (NHL) and Hodgkin's disease (HD). Methods. 77 patients with NHL (n = 58) or HD (n = 19) underwent a FDG PET scan, a contrast-enhanced CT, and a subsequent digital image fusion during initial staging or follow-up. 109 examinations of each modality were evaluated and compared to each other. Conventional staging procedures, other imaging techniques, laboratory screening, and follow-up data constituted the reference standard for comparison with image fusion. Sensitivity and specificity were calculated for CT and PET separately. Results. Sensitivity and specificity for detecting malignant lymphoma were 90% and 76% for CT and 94% and 91% for PET, respectively. A lymph node region-based analysis (comprising 14 defined anatomical regions) revealed a sensitivity of 81% and a specificity of 97% for CT and 96% and 99% for FDG PET, respectively. Only three of 109 image fusion findings needed further evaluation (false positive). Conclusion. Digital fusion of PET and CT improves the accuracy of staging, restaging, and therapy monitoring in patients with malignant lymphoma and may reduce the need for invasive diagnostic procedures. PMID:22654631

  6. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as the gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. The VHDL language and a synchronous design method are used to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of the heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
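
    Before committing such fusion rules to RTL, a software reference model is commonly used to check the expected output bit patterns. The sketch below models the three rules the paper compares (weighted averaging via an integer shift, maximum and minimum selection); it is an assumed reference model in Python, not the paper's VHDL:

    ```python
    import numpy as np

    def fuse(vis, ir, mode="weighted", w=0.5):
        """Integer reference for three fusion rules: gray-scale weighted
        averaging, maximum selection and minimum selection.
        vis, ir : uint8 frames on a common grid."""
        a, b = vis.astype(np.uint16), ir.astype(np.uint16)
        if mode == "max":
            out = np.maximum(a, b)
        elif mode == "min":
            out = np.minimum(a, b)
        else:
            # Hardware-friendly weighted average: 8-bit fixed-point weight,
            # so the divide becomes a right shift by 8.
            wa = int(w * 256)
            out = (wa * a + (256 - wa) * b) >> 8
        return out.astype(np.uint8)
    ```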

  7. An object-oriented framework for medical image registration, fusion, and visualization.

    PubMed

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second is for 2D registration and fusion, and the last is for visualization of single images as well as registered volume images.

  8. Designing Image Operators for MRI-PET Image Fusion of the Brain

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images that combine, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, MRI) and functional information (Positron Emission Tomography, PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach to image fusion, taking advantage mainly of the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
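
    The hue/luminosity split exploited here can be sketched as follows. HSV is used as a close stand-in for the HSL space the paper discusses, and the hue mapping is an assumption for illustration only:

    ```python
    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def hue_luminosity_fusion(mri, pet):
        """Encode PET activity as hue/saturation and MRI anatomy as value.
        mri, pet : registered 2-D arrays scaled to [0, 1]."""
        hsv = np.zeros(mri.shape + (3,))
        hsv[..., 0] = (1.0 - pet) * 0.66   # hue: blue (cold) -> red (hot)
        hsv[..., 1] = pet                  # saturation grows with uptake
        hsv[..., 2] = mri                  # anatomical detail drives brightness
        return hsv_to_rgb(hsv)
    ```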

  9. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit and an image fusion output unit. Registration of the dual-channel images is realized by combining hardware and software methods in the system. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image whose color is transferred to the fusion result. A color lookup table based on statistical properties of the images is proposed to solve the computational complexity problem in color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and needs to be performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. Experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight targets effectively with clear background details. Human observers using this system are able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
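
    The statistics such a lookup table can precompute once per scene are sketched below as per-channel mean/std matching (Reinhard-style color transfer; the full method works in the lαβ space, plain RGB is used here for brevity, and the function name is illustrative):

    ```python
    import numpy as np

    def color_transfer(fused_rgb, reference_rgb):
        """Shift/scale each channel of the false-color fused image so its
        mean and standard deviation match a natural daytime reference.
        These per-channel shift/scale pairs are what a fixed LUT can store
        once per scene, avoiding per-frame statistics."""
        out = np.empty_like(fused_rgb, dtype=np.float32)
        for c in range(3):
            src = fused_rgb[..., c].astype(np.float32)
            ref = reference_rgb[..., c]
            out[..., c] = (src - src.mean()) / (src.std() + 1e-6) * ref.std() + ref.mean()
        return np.clip(out, 0, 255).astype(np.uint8)
    ```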

  10. Towards a Unified Approach to Information Integration - A review paper on data/information fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Posse, Christian; Lei, Xingye C.

    2005-10-14

    Information or data fusion of data from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, politics, and intelligence to military applications. Data fusion involves the integration of spectral, imaging, text, and many other kinds of sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical and biological sensors, acoustic sensors, optical warning systems and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian methods to evidence-based expert systems. Implementations of data integration range from pure software designs to mixtures of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.

  11. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    PubMed

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images, which usually have high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion against an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE.
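
    As a classical baseline for this kind of fusion, a Brovey-style pansharpening of low-resolution EDX maps with a high-resolution backscattered-electron (BSE) band might look like the sketch below; the paper's tailored multimodal method is not reproduced here, and the names are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def brovey_sharpen(bse, edx_maps, eps=1e-6):
        """Brovey-style pansharpening: each upsampled EDX elemental map is
        scaled by the ratio of the high-resolution BSE image to the summed
        intensity of the upsampled maps.

        bse      : (H, W) float array, high-resolution BSE image.
        edx_maps : (k, h, w) float array of k low-resolution elemental maps.
        """
        k, h, w = edx_maps.shape
        up = np.stack([zoom(m, (bse.shape[0] / h, bse.shape[1] / w), order=1)
                       for m in edx_maps])
        intensity = up.sum(axis=0) + eps
        return up * (bse / intensity)[None, ...]
    ```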

  12. Novel Three-Dimensional Image Fusion Software to Facilitate Guidance of Complex Cardiac Catheterization: 3D image fusion for interventions in CHD.

    PubMed

    Goreczny, Sebastian; Dryzek, Pawel; Morgan, Gareth J; Lukaszewski, Maciej; Moll, Jadwiga A; Moszura, Tomasz

    2017-08-01

    We report initial experience with novel three-dimensional (3D) image fusion software for guidance of transcatheter interventions in congenital heart disease. Developments in fusion imaging have facilitated the integration of 3D roadmaps from computed tomography or magnetic resonance imaging datasets. The latest software allows live fusion of two-dimensional (2D) fluoroscopy with pre-registered 3D roadmaps. We reviewed all cardiac catheterizations guided with this software (Philips VesselNavigator). Pre-catheterization imaging and catheterization data were collected, focusing on fusion of the 3D roadmap, intervention guidance, and contrast and radiation exposure. From 09/2015 until 06/2016, VesselNavigator was applied in 34 patients for guidance (n = 28) or planning (n = 6) of cardiac catheterization. In all 28 guidance patients, successful 2D-3D registration was performed. Bony structures combined with the cardiovascular silhouette were used for fusion in 26 patients (93%), calcifications in 9 (32%), previously implanted devices in 8 (29%) and low-volume contrast injection in 7 patients (25%). Accurate initial 3D roadmap alignment was achieved in 25 patients (89%). Six patients (22%) required realignment during the procedure owing to distortion of the anatomy after introduction of stiff equipment. Overall, VesselNavigator was applied successfully in 27 patients (96%) without any complications related to 3D image overlay. VesselNavigator was useful in guidance of nearly all cardiac catheterizations. The combination of anatomical markers and low-volume contrast injections allowed reliable 2D-3D registration in the vast majority of patients.

  13. Surgical planning and manual image fusion based on 3D model facilitate laparoscopic partial nephrectomy for intrarenal tumors.

    PubMed

    Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao

    2014-12-01

    Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and the imaging guidance of manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed-tomography-based reconstruction of the 3D models of the renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed the 3D model images onto the 2D laparoscopic images, and the fusion was verified by intraoperative ultrasound. Image-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks, and the surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80%) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). Surgical planning and two-step manual image fusion based on a 3D model of the renal tumor facilitated visible-image-guided tumor resection with negative margins in LPN for intrarenal tumors. It is promising and moves us one step closer to imaging-guided surgery.

  14. FZUImageReg: A toolbox for medical image registration and dose fusion in cervical cancer radiotherapy

    PubMed Central

    Bai, Penggang; Du, Min; Ni, Xiaolei; Ke, Dongzhong; Tong, Tong

    2017-01-01

    The combination of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) is a standard form of treatment for patients with locally advanced uterine cervical cancer. Personalized radiotherapy in cervical cancer requires efficient and accurate dose planning and assessment across these types of treatment. To achieve radiation dose assessment, accurate mapping of the dose distribution from HDR-BT onto EBRT is extremely important. However, few systems can achieve robust dose fusion and determine the accumulated dose distribution during the entire course of treatment. We have therefore developed a toolbox (FZUImageReg), a user-friendly dose fusion system based on hybrid image registration for radiation dose assessment in cervical cancer radiotherapy. The main part of the software consists of a collection of medical image registration algorithms and a modular design with a user-friendly interface, which allows users to quickly configure, test, monitor, and compare different registration methods for a specific application. Owing to the large deformation involved, the direct application of conventional state-of-the-art image registration methods is not sufficient for the accurate alignment of EBRT and HDR-BT images. To solve this problem, a multi-phase non-rigid registration method is proposed: local landmark-based free-form deformation handles the locally large deformation between EBRT and HDR-BT images, followed by intensity-based free-form deformation. With the resulting transformation, the software also provides a dose mapping function according to the deformation field, and the total dose distribution during the entire course of treatment can then be presented. Experimental results clearly show that the proposed system achieves accurate registration between EBRT and HDR-BT images and provides radiation dose warping and fusion results for dose assessment in cervical cancer radiotherapy with high accuracy and efficiency. PMID:28388623
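
    The intensity-based free-form-deformation phase of such a pipeline can be sketched with SimpleITK's B-spline registration. The mesh size, metric and optimizer settings below are illustrative, and the landmark-driven first phase of the paper's pipeline is omitted:

    ```python
    import SimpleITK as sitk

    def ffd_register(fixed, moving, mesh_size=(8, 8, 8)):
        """Intensity-based free-form deformation (B-spline) registration
        with mutual information; fixed and moving are sitk.Image volumes."""
        tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                 numberOfIterations=100)
        reg.SetInitialTransform(tx, inPlace=True)
        reg.SetInterpolator(sitk.sitkLinear)
        out_tx = reg.Execute(fixed, moving)
        # The same deformation field would then warp the HDR-BT dose grid
        # onto the EBRT frame for dose accumulation.
        return sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0)
    ```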

  15. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.

  16. Processing LiDAR Data to Predict Natural Hazards

    NASA Technical Reports Server (NTRS)

    Fairweather, Ian; Crabtree, Robert; Hager, Stacey

    2008-01-01

    ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.

  17. Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132.

    PubMed

    Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L

    2017-07-01

    Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data and even anatomical atlases to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during the treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and enable adaptive radiotherapy using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy to provide up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input of another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and the result of a specific registration. Unfortunately, there is no standard mathematical formalism to perform this for real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of the software systems performance is also complicated by the lack of documentation available from commercial systems leading to use of these systems in undesirable 'black-box' fashion. In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.
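
    One concrete QA check in the spirit of the report is the target registration error (TRE) over anatomical landmark pairs. The sketch below is generic; the 2.0 mm tolerance is an illustrative threshold, not a value quoted from TG-132:

    ```python
    import numpy as np

    def target_registration_error(fixed_pts, moving_pts, transform):
        """TRE over landmark pairs. `transform` maps a point from the
        moving image into the fixed frame; points are (N, 3) arrays in mm."""
        mapped = np.array([transform(p) for p in moving_pts])
        tre = np.linalg.norm(mapped - fixed_pts, axis=1)
        return tre.mean(), tre.max()

    # Stand-in transform and landmarks for demonstration only.
    identity = lambda p: np.asarray(p)
    pts = np.random.rand(5, 3) * 100
    mean_tre, max_tre = target_registration_error(pts, pts, identity)
    assert mean_tre <= 2.0, "registration exceeds illustrative QA tolerance"
    ```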

  18. Our solution for fusion of simultaneously acquired whole-body scintigrams and optical images, as a useful tool in clinical practice in patients with differentiated thyroid carcinoma after radioiodine therapy.

    PubMed

    Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija

    2017-01-01

    After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions where iodine-avid tissue is located; sometimes it is practically impossible to perform precise topographic localization of such regions. To address this problem, we developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusing the images obtained from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS mode, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a C905 web camera (Logitech, USA) with Carl Zeiss optics, native resolution 1600×1200 pixels, 34° field of view and 30 g weight, with the autofocus option turned "off" and auto white balance turned "on". The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglass adapter. Our own Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). Vision Acquisition Software enables communication and control between the laptop computer and the web camera; the Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts the web-camera image acquisition before image acquisition starts on the GC and stops it when the GC completes its acquisition. The web camera runs in continuous acquisition mode with a frame rate f that depends on the speed v of the patient bed (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed, and movement of the bed is checked using cross-correlation of two successive images. After each image is captured, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image; stacking of narrow central ROIs introduces negligible distortion. The first step in fusing the scintigram and the optical image was determination of the spatial transformation between them. We performed an experiment with two markers (point sources of 99mTc pertechnetate, 1 MBq) visible in both images (WBS and optical) to find the coordinate transformation between the images; the distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, average age 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq.
    Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of the radioiodine therapy. Based on our first results during clinical testing, we conclude that the system can improve the diagnostic ability of whole-body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
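
    Two of the steps described, the cross-correlation check of bed movement and the stacking of central strips into a whole-body optical image, can be sketched as follows; the Δp = 15 px default comes from the abstract, the rest (1-D profile reduction, function names) is illustrative:

    ```python
    import numpy as np

    def vertical_shift(prev_frame, frame):
        """Estimate the inter-frame bed displacement by cross-correlating
        the row-intensity profiles of two successive web-camera frames
        (a 1-D reduction of the image cross-correlation check)."""
        p = prev_frame.mean(axis=1) - prev_frame.mean()
        q = frame.mean(axis=1) - frame.mean()
        corr = np.correlate(q, p, mode="full")
        return corr.argmax() - (len(p) - 1)   # shift in pixels

    def stack_strips(frames, delta_p=15):
        """Rebuild the whole-body optical image by stacking the central
        strip (height delta_p, 15 px for the default 1 cm bed step) of
        every frame captured while the bed moves."""
        h = frames[0].shape[0]
        top = (h - delta_p) // 2
        return np.vstack([f[top:top + delta_p] for f in frames])
    ```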

  19. Enabling image fusion for a CT guided needle placement robot

    NASA Astrophysics Data System (ADS)

    Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.

    2017-03-01

    Purpose: This study presents the development and integration of hardware and software that enables ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having the real-time US image registered to a previously acquired intraoperative CT image provides more anatomic information during needle insertion, in order to target hard-to-see lesions or avoid critical structures invisible to CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot in order to hold and track an abdominal US transducer. This 4-degrees-of-freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released by the press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm was calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the previously acquired CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 mm +/- 4.14 mm and 1.74 mm +/- 1.60 mm, respectively. The accuracy of the US image to arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.

  20. Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography

    PubMed Central

    Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael

    2012-01-01

    We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when the physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: software-based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT, among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108

  1. Radiotherapy treatment planning: benefits of CT-MR image registration and fusion in tumor volume delineation.

    PubMed

    Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija

    2013-08-01

    The development of imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) has had a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and also minimizes geographical misses. Mutual information is obtained by registration and fusion of images, achieved manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and compare delineation obtained on CT alone versus CT-MRI image fusion. The image fusion software (XIO CMS 4.50.0) was used for delineation in 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated on CT alone and on CT+MRI images consecutively, and image fusion was obtained. Image fusion showed that a CTV delineated on a CT image study set alone is mainly inadequate for treatment planning compared with a CTV delineated on a fused CT-MRI image study set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV, and thus local disease control.

  2. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca

    2012-10-15

    Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed, along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  3. Prostate seed implant quality assessment using MR and CT image fusion.

    PubMed

    Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D

    1999-01-01

    After a seed implant of the prostate, computerized tomography (CT) is ideal for determining the seed distribution, but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well, but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" the MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three orthonormal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of the bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).

  4. Real-time fusion of coronary CT angiography with x-ray fluoroscopy during chronic total occlusion PCI.

    PubMed

    Ghoshhajra, Brian B; Takx, Richard A P; Stone, Luke L; Girard, Erin E; Brilakis, Emmanouil S; Lombardi, William L; Yeh, Robert W; Jaffer, Farouc A

    2017-06-01

    The purpose of this study was to demonstrate the feasibility of real-time fusion of coronary computed tomography angiography (CTA) centreline and arterial wall calcification with x-ray fluoroscopy during chronic total occlusion (CTO) percutaneous coronary intervention (PCI). Patients undergoing CTO PCI were prospectively enrolled. Pre-procedural CT scans were integrated with conventional coronary fluoroscopy using prototype software. We enrolled 24 patients who underwent CTO PCI using the prototype CT fusion software, and 24 consecutive CTO PCI patients without CT guidance served as a control group. Mean age was 66 ± 11 years, and 43/48 patients were men. Real-time CTA fusion during CTO PCI provided additional information regarding coronary arterial calcification and tortuosity that generated new insights into antegrade wiring, antegrade dissection/reentry, and retrograde wiring during CTO PCI. Overall CTO success rates and procedural outcomes remained similar between the two groups, despite a trend toward higher complexity in the fusion CTA group. This study demonstrates that real-time automated co-registration of coronary CTA centreline and calcification onto live fluoroscopic images is feasible and provides new insights into CTO PCI, and in particular, antegrade dissection reentry-based CTO PCI. • Real-time semi-automated fusion of CTA/fluoroscopy is feasible during CTO PCI. • CTA fusion data can be toggled on/off as desired during CTO PCI • Real-time CT calcium and centreline overlay could benefit antegrade dissection/reentry-based CTO PCI.

  5. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
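
    The conventional wavelet baseline against which contourlet fusion is typically compared averages the approximation bands and keeps the larger-magnitude detail coefficients. A sketch with PyWavelets follows; the fusion rules are a common convention assumed here, not taken from the paper:

    ```python
    import numpy as np
    import pywt

    def wavelet_fusion(a, b, wavelet="db4", levels=3):
        """Fuse two registered 2-D images: average the approximation
        coefficients, select the max-magnitude detail coefficients."""
        ca = pywt.wavedec2(a, wavelet, level=levels)
        cb = pywt.wavedec2(b, wavelet, level=levels)
        fused = [(ca[0] + cb[0]) / 2.0]          # approximation: average
        for da, db in zip(ca[1:], cb[1:]):       # details: max-abs selection
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in zip(da, db)))
        return pywt.waverec2(fused, wavelet)
    ```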

  6. A Comparison of Accuracy of Image- versus Hardware-based Tracking Technologies in 3D Fusion in Aortic Endografting.

    PubMed

    Rolls, A E; Maurel, B; Davis, M; Constantinou, J; Hamilton, G; Mastracci, T M

    2016-09-01

    Fusion of three-dimensional (3D) computed tomography and intraoperative two-dimensional imaging in endovascular surgery relies on manual rigid co-registration of bony landmarks and tracking of hardware to provide a 3D overlay (hardware-based tracking, HWT). An alternative technique (image-based tracking, IMT) uses image recognition to register and place the fusion mask. We present preliminary experience with an agnostic fusion technology that uses IMT, with the aim of comparing the accuracy of overlay for this technology with HWT. Data were collected prospectively for 12 patients. All devices were deployed using both IMT and HWT fusion assistance concurrently. Postoperative analysis of both systems was performed by three blinded expert observers, from selected time-points during the procedures, using the displacement of fusion rings, the overlay of vascular markings and the true ostia of the renal arteries. The mean overlay error and the deviation from the mean error were derived using image analysis software, and the mean overlay error was compared between IMT and HWT. The validity of the point-picking technique was assessed. IMT was successful in all of the first 12 cases, whereas technical learning-curve challenges thwarted HWT in four cases. When independent operators assessed the degree of accuracy of the overlay, the median error for IMT was 3.9 mm (IQR 2.89-6.24, max 9.5) versus 8.64 mm (IQR 6.1-16.8, max 24.5) for HWT (p = .001). Variance per observer was 0.69 mm² and the 95% limit of agreement ±1.63 mm. In this preliminary study, the magnitude of displacement from the "true anatomy" during image overlay was smaller for IMT than for HWT. This confirms that ongoing manual re-registration, as recommended by the manufacturer, should be performed for HWT systems to maintain accuracy. The error in position of the fusion markers for IMT was consistent and may thus be considered predictable. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  7. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy.

    PubMed

    Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-01

    To evaluate the accuracy of volume measurement using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms consisting of water, contrast agent, and agarose were manufactured, and their volumes were measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and the fusion images themselves. Volume measurement using 3D US showed a 2.8 ± 1.5% error, versus 4.4 ± 3.0% for CT and 3.1 ± 2.0% for MR. These results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images, and MR-3D US image fusion was also performed using human bladder images. 3D US could be used for volume measurement of human bladders and prostates, and CT-3D US image fusion could be used for monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  8. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty of traditional 3D video capture in camera focal length and in the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction typical of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance component processing method based on the YCbCr color space. The DSP/BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of chrominance components and keeps the picture color saturation above 95% of the original. An optimized enhancement algorithm that reduces the amount of data fused during video processing shortens the fusion time and improves the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience to audiences wearing red-blue glasses.
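
    The channel-splitting step is straightforward to model in software. The sketch below assumes two registered uint8 RGB frames and mirrors only the R/G-B recombination, not the DM642 brightness enhancement or YCbCr conversion:

    ```python
    import numpy as np

    def red_blue_anaglyph(left_rgb, right_rgb):
        """Form the anaglyph frame: R channel from the left camera,
        G and B channels from the right camera."""
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]   # red channel from the left view
        return out
    ```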

  9. [Efficacy of fusion image for the preoperative assessment of anatomical variation of the anterior choroidal artery].

    PubMed

    Aoki, Yasuko; Endo, Hidenori; Niizuma, Kuniyasu; Inoue, Takashi; Shimizu, Hiroaki; Tominaga, Teiji

    2013-12-01

    We report two cases of internal carotid artery (ICA) aneurysms in which fusion imaging effectively revealed anatomical variations of the anterior choroidal artery (AchoA). Fusion images were obtained using fusion application software (Integrated Registration, Advantage Workstation VS4, GE Healthcare). An artery passing through the choroidal fissure was diagnosed as the AchoA. Case 1 had an aneurysm of the left ICA. Left internal carotid angiography (ICAG) showed that an artery arising from the aneurysmal neck supplied the medial occipital lobe. The fusion image showed that this artery had a branch passing through the choroidal fissure, which was diagnosed as a hyperplastic AchoA. Case 2 had an aneurysm at the supraclinoid segment of the right ICA. Neither the AchoA nor the posterior communicating artery (PcomA) was detected by right ICAG. A fusion image obtained from 3D vertebral angiography (VAG) and MRI showed that the right AchoA arose from the right PcomA. A fusion image obtained from the right ICAG and the left VAG suggested that the aneurysm was located on the ICA where the PcomA had regressed. Fusion imaging is an effective tool for assessing anatomical variations of the AchoA; the present method is simple and quick, and can be used in a real-time clinical setting.

  10. What automated age estimation of hand and wrist MRI data tells us about skeletal maturation in male adolescents.

    PubMed

    Urschler, Martin; Grassegger, Sabine; Štern, Darko

    2015-01-01

    Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives to established methods involving ionising radiation, and automatic, software-based methods additionally promise improved estimation objectivity. The aim was to investigate how informative automatically selected image features are regarding their ability to discriminate age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images were used to evaluate age estimation performance; the method consists of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age, and fusion of the individual bone age predictions into a final age estimate. Quantitatively, the software-based method achieves an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) from chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. The feasibility of automatic age estimation based on MRI data is shown, and the selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
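
    How the final fusion step weights the per-bone predictions is not specified in the abstract, so the sketch below uses a (possibly weighted) mean as a placeholder; the bone count and values are hypothetical:

    ```python
    import numpy as np

    def fuse_bone_ages(predictions, weights=None):
        """Fuse per-bone age estimates into one skeletal age estimate.
        A weighted mean stands in for the paper's unspecified fusion rule."""
        predictions = np.asarray(predictions, dtype=float)
        return np.average(predictions, weights=weights)

    # Example: 13 hypothetical per-bone estimates (years).
    ages = 16.0 + np.random.randn(13) * 0.7
    print("fused age estimate: %.2f years" % fuse_bone_ages(ages))
    ```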

  11. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.

  12. Hardware Timestamping for an Image Acquisition System Based on FlexRIO and IEEE 1588 v2 Standard

    NASA Astrophysics Data System (ADS)

    Esquembri, S.; Sanz, D.; Barrera, E.; Ruiz, M.; Bustos, A.; Vega, J.; Castro, R.

    2016-02-01

    Current fusion devices usually implement distributed acquisition systems for the multiple diagnostics of their experiments. However, each diagnostic comprises hundreds or even thousands of signals, including images from the vessel interior. These signals and images must be correctly timestamped, because all the information will be analyzed to identify plasma behavior using temporal correlations. For acquisition devices without synchronization mechanisms, the timestamp is given by another device with timing capabilities when signaled by the first device; each data item must later be related to its timestamp, usually via software. This critical step is unfeasible for software applications when sampling rates are high. To solve this problem, this paper presents the implementation of an image acquisition system with a real-time hardware timestamping mechanism, synchronized with a master clock using the IEEE 1588 v2 Precision Time Protocol (PTP). The synchronization, image acquisition and processing, and timestamping mechanisms are implemented using a Field Programmable Gate Array (FPGA) and a PTP-v2-synchronized timing card. The system has been validated using a camera simulator streaming videos from fusion databases. The developed architecture is fully compatible with ITER Fast Controllers and has been integrated with EPICS to control and monitor the whole system.

  13. Fusion of magnetic resonance angiography and magnetic resonance imaging for surgical planning for meningioma--technical note.

    PubMed

    Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi; Beppu, Takaaki; Inoue, Takashi; Takahashi, Tsutomu; Matsuda, Koichi; Takahashi, Yujiro; Fujiwara, Shunrou; Ogawa, Akira

    2008-09-01

    A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma.
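
    The MIP rendering used here reduces a 3D dataset to a 2D image by keeping the maximum voxel along each projection ray. A minimal numpy sketch on a synthetic volume (the axes and sizes are illustrative):

        import numpy as np

        volume = np.random.rand(64, 256, 256)  # (slices, rows, cols), synthetic data

        mip_axial = volume.max(axis=0)     # project through the slice direction
        mip_coronal = volume.max(axis=1)
        mip_sagittal = volume.max(axis=2)

        print(mip_axial.shape, mip_coronal.shape, mip_sagittal.shape)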

  14. [Three-dimensional data fusion method for tooth crown and root based on curvature continuity algorithm].

    PubMed

    Zhao, Y J; Liu, Y; Sun, Y C; Wang, Y

    2017-08-18

    To explore a three-dimensional (3D) data fusion and integration method for optically scanned tooth crowns and cone beam CT (CBCT)-reconstructed tooth roots that yields a natural transition in the 3D profile. One case of mild dental crowding with full dentition was chosen from the orthodontics clinic. The CBCT data were acquired to reconstruct the dental model with tooth roots in Mimics 17.0 medical imaging software, and an optical impression was taken to obtain a dentition model with a high-precision physiological contour of the crowns using a Smart Optics dental scanner. The two models were registered in 3D based on the common shape of the crowns in Geomagic Studio 2012 reverse engineering software. The model coordinate system was established by defining the occlusal plane. The crown-gingiva boundary was extracted from the optical scanning model manually, and the crown-root boundary was then generated by offsetting and projecting the crown-gingiva boundary onto the root model. After trimming the crown and root models, the 3D fusion model with a physiologically contoured crown and natural root was finally formed by a curvature-continuity filling algorithm. In the study, 10 patients with mild dental crowding from the oral clinic were followed up with this method to obtain 3D crown and root fusion models, and 10 highly qualified doctors were invited to evaluate these fusion models subjectively. This study, based on a commercial software platform, preliminarily realized 3D data fusion of optically scanned tooth crowns and CBCT tooth roots with a curvature-continuous shape transition. The 10 patients' 3D crown and root fusion models were constructed successfully by this method, and the average score of the doctors' subjective evaluation for these 10 models was 8.6 points (on a 0-10 scale), which means that all the fusion models could basically meet the needs of clinical work; it also shows that the method is feasible and efficient for orthodontic research and practice. The proposed method for 3D crown and root data fusion can produce an integrated tooth or dental model closer to the natural shape. CBCT model calibration may further improve the precision of the fusion model. The suitability of this method for severe dental crowding and micromaxillary deformity needs further research.

  15. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. The development of parallel processing capabilities is finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.
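
    A standard correlation-based building block for this kind of registration work is phase correlation, which recovers the translation between two frames from the normalized cross-power spectrum and parallelizes naturally because it is FFT-bound. The sketch below shows the generic technique, not the authors' algorithm:

        import numpy as np

        def phase_correlation_shift(a: np.ndarray, b: np.ndarray):
            """Estimate the integer (row, col) displacement of b relative to a."""
            F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
            F /= np.abs(F) + 1e-12                    # keep phase only
            corr = np.fft.ifft2(F).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Peaks past the midpoint correspond to negative shifts.
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

        rng = np.random.default_rng(0)
        ref = rng.random((128, 128))
        moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
        print(phase_correlation_shift(ref, moved))    # expect (5, -3)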

  16. [Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].

    PubMed

    Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T

    2003-10-01

    Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance of, and time needed for, fusing MRI and SPECT images using dedicated semiautomated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D T1-weighted MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation, using an entropy-minimizing algorithm, by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where the automated fusion was insufficient, the time needed for manual realignment. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool thus delivered an optimal fit in 20% of cases; in 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
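
    Entropy-minimizing registration of the kind used by such tools scores a candidate alignment by the entropy of the joint intensity histogram, which drops as the images come into register. A minimal sketch of that cost function follows; it is a generic formulation, not the workstation's implementation:

        import numpy as np

        def joint_entropy(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
            """Shannon entropy (bits) of the joint intensity histogram."""
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(1)
        a = rng.random((128, 128))
        aligned = a + 0.05 * rng.random((128, 128))   # nearly registered image
        shuffled = rng.permutation(aligned.ravel()).reshape(aligned.shape)
        print(joint_entropy(a, aligned) < joint_entropy(a, shuffled))  # True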

  17. Hardware implementation of hierarchical volume subdivision-based elastic registration.

    PubMed

    Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj

    2006-01-01

    Real-time, elastic, and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g., free-form deformation (FFD)-based techniques, and are more amenable to achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved a 100-fold speedup for elastic registration against an optimized software implementation on a 3.2 GHz Pentium Xeon workstation. Due to the inherently parallel nature of hierarchical volume subdivision-based image registration techniques, further speedup can be achieved by using several computing modules in parallel.

  18. Data processing for soft X-ray diagnostics based on GEM detector measurements for fusion plasma imaging

    NASA Astrophysics Data System (ADS)

    Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.

    2015-12-01

    A measurement system based on a Gas Electron Multiplier (GEM) detector has been developed for X-ray diagnostics of magnetic confinement fusion plasmas. The Triple GEM (T-GEM) detector is presented as a soft X-ray (SXR) energy- and position-sensitive detector. The paper focuses on the measurement task and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful to physicists; it thus covers the software layer of the project, between the electronic hardware and the physics applications. The design is original and was developed by the authors. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are described. Several modes of data acquisition, determined by hardware and software processing, are introduced. Typical measurement issues are discussed with a view to enhancing data quality. The primary version, based on a 1-D GEM detector, was applied to the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures, initially for investigation purposes. Two detector structures, with single-pixel sensors and multi-pixel (directional) sensors, are considered for two-dimensional X-ray imaging. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference source and tokamak plasma are demonstrated.
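
    The fundamental data products described above are histograms of event energy and event position. A numpy sketch with synthetic event records; the 256-strip geometry and the 5.9 keV line (an Fe-55-like reference) are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(2)
        n_events = 10_000
        position = rng.integers(0, 256, n_events)               # hypothetical strip index
        energy = rng.normal(loc=5.9, scale=0.4, size=n_events)  # keV, synthetic line

        energy_hist, energy_edges = np.histogram(energy, bins=128, range=(0.0, 10.0))
        position_hist = np.bincount(position, minlength=256)

        print(energy_edges[energy_hist.argmax()])   # peak channel near 5.9 keV
        print(position_hist.sum() == n_events)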

  19. Useful diagnostic biometabolic data obtained by PET/CT and MR fusion imaging using open source software.

    PubMed

    Antonica, Filippo; Asabella, Artor Niccoli; Ferrari, Cristina; Rubini, Domenico; Notaristefano, Antonio; Nicoletti, Adriano; Altini, Corinna; Merenda, Nunzio; Mossa, Emilio; Guarini, Attilio; Rubini, Giuseppe

    2014-01-01

    In the last decade numerous attempts have been made to co-register and integrate different imaging data. As with PET/CT, the integration of PET with MR has attracted great interest, and PET/MR scanners have recently been tested on various regional and systemic pathologies. Unfortunately, PET/MR scanners are expensive, and diagnostic protocols are still under investigation. Nuclear medicine imaging highlights functional and biometabolic information but has poor anatomic detail. The aim of this study was to integrate MR and PET data to produce regional or whole-body fused images acquired on different scanners, even on different days. We propose an offline method to fuse PET with MR data using open-source software that is inexpensive, reproducible, and capable of exchanging data over the network. We also evaluated the global quality, alignment quality, and diagnostic confidence of the fused PET-MR images. We selected PET/CT studies performed in our nuclear medicine unit and MR studies provided by patients on DICOM CD media or received over the network. We used the OsiriX 5.7 open-source version. We aligned the CT slices with the first MR slice, pointed and marked them for co-registration using the MR T1 sequence and CT as references, and fused the result with PET to produce a PET-MR image. A total of 100 PET/CT studies were fused with the following MR studies: 20 head, 15 thorax, 24 abdomen, 31 pelvis, and 10 whole body. An interval of no more than 15 days between PET and MR was the inclusion criterion. PET/CT, MR, and fused studies were evaluated by two experienced radiologists and two experienced nuclear medicine physicians. Each filled in a five-point evaluation scoring scheme based on image quality, image artifacts, segmentation errors, fusion misalignment, and diagnostic confidence. Our fusion method showed the best results for the head, thorax, and pelvic districts in terms of global quality, alignment quality, and diagnostic confidence, while for the abdomen and pelvis the alignment quality and global quality were poor due to variations in internal organ filling and the time shift between examinations. PET/CT images with time-of-flight reconstruction and real attenuation correction were combined with anatomically detailed MRI images. We used OsiriX, an open-source image processing application dedicated to DICOM images. No additional costs to buy and upgrade proprietary software are required for combining data, and no very expensive PET/MR scanner, which requires dedicated shielded room space and personnel to be employed or trained, is needed. Our method allows patient PET/MR fused data to be shared among different medical staff using dedicated networks. The proposed method may be applied to every MR sequence (MR-DWI, MR-STIR, magnet-enhanced sequences) to characterize soft tissue alterations and improve disease discrimination. It can be applied not only to PET with MR but virtually to every DICOM study.
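
    Once the PET volume has been resampled onto the MR grid, the fused display is essentially a color-mapped alpha blend of the two slices. A generic matplotlib sketch with synthetic data standing in for registered slices (this is not the OsiriX pipeline):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(3)
        mr = rng.random((256, 256))        # stand-in for a registered MR slice
        pet = np.zeros_like(mr)
        pet[100:140, 120:160] = 1.0        # synthetic focal uptake

        plt.imshow(mr, cmap="gray")
        plt.imshow(np.ma.masked_less(pet, 0.1), cmap="hot", alpha=0.5)
        plt.axis("off")
        plt.savefig("fused_slice.png", dpi=150)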

  20. A prospective comparison between auto-registration and manual registration of real-time ultrasound with MR images for percutaneous ablation or biopsy of hepatic lesions.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Song, Kyoung Doo; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-06-01

    To compare the accuracy and required time for image fusion of real-time ultrasound (US) with pre-procedural magnetic resonance (MR) images between positioning auto-registration and manual registration for percutaneous radiofrequency ablation or biopsy of hepatic lesions. This prospective study was approved by the institutional review board, and all patients gave written informed consent. Twenty-two patients (male/female, n = 18/n = 4; age, 61.0 ± 7.7 years) who were referred for planning US to assess the feasibility of radiofrequency ablation (n = 21) or biopsy (n = 1) for focal hepatic lesions were included. One experienced radiologist performed both image fusion methods in each patient. The accuracy of the two methods, based on measured registration error, and the time required for image fusion were recorded using in-house software and compared using the Wilcoxon signed-rank test. Image fusion was successful in all patients. The registration error was not significantly different between the two methods (auto-registration: median, 3.75 mm; range, 1.0-15.8 mm vs. manual registration: median, 2.95 mm; range, 1.2-12.5 mm, p = 0.242). The time required for image fusion was significantly shorter with auto-registration than with manual registration (median, 28.5 s; range, 18-47 s, vs. median, 36.5 s; range, 14-105 s, p = 0.026). Positioning auto-registration showed promising results compared with manual registration, with similar accuracy and an even shorter registration time.
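
    The paired comparison reported here maps directly onto scipy.stats.wilcoxon. The sketch below uses synthetic paired timing data in roughly the reported ranges, purely as a placeholder:

        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(4)
        auto_times = rng.normal(29, 6, 22).clip(15, 50)    # seconds, hypothetical
        manual_times = auto_times + rng.normal(8, 10, 22)  # hypothetical paired data

        stat, p = wilcoxon(auto_times, manual_times)
        print(f"W = {stat:.1f}, p = {p:.3f}")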

  1. Evaluation of image registration in PET/CT of the liver and recommendations for optimized imaging.

    PubMed

    Vogel, Wouter V; van Dalen, Jorn A; Wiering, Bas; Huisman, Henkjan; Corstens, Frans H M; Ruers, Theo J M; Oyen, Wim J G

    2007-06-01

    Multimodality PET/CT of the liver can be performed with an integrated (hybrid) PET/CT scanner or with software fusion of dedicated PET and CT. Accurate anatomic correlation and good image quality of both modalities are important prerequisites, regardless of the applied method. Registration accuracy is influenced by breathing motion differences between PET and CT, which may also have an impact on (attenuation correction-related) artifacts, especially in the upper abdomen. The impact of these issues was evaluated for both hybrid PET/CT and software fusion, with a focus on imaging of the liver. Thirty patients underwent hybrid PET/CT, 20 with CT during expiration breath-hold (EB) and 10 with CT during free breathing (FB). Ten additional patients underwent software fusion of dedicated PET and dedicated expiration breath-hold CT (SF). Image registration accuracy was evaluated at the location of liver borders on CT and uncorrected PET images and at the location of liver lesions. Attenuation-correction artifacts were evaluated by comparison of liver borders on uncorrected and attenuation-corrected PET images. CT images were evaluated for the presence of breathing artifacts. In EB, 40% of patients had an absolute registration error of the diaphragm in the craniocaudal direction of >1 cm (range, -16 to 44 mm), and 45% of lesions were mispositioned >1 cm. In 50% of cases, attenuation-correction artifacts caused a deformation of the liver dome on PET of >1 cm. Poor compliance with breath-hold instructions caused CT artifacts in 55% of cases. In FB, 30% had registration errors of >1 cm (range, -4 to 16 mm) and PET artifacts were less extensive, but all CT images had breathing artifacts. As SF allows independent alignment of PET and CT, no registration errors or artifacts of >1 cm of the diaphragm occurred. Hybrid PET/CT of the liver may have significant registration errors and artifacts related to breathing motion. The extent of these issues depends on the selected breathing protocol and the speed of the CT scanner. No protocol or scanner can guarantee perfect image fusion. On the basis of these findings, recommendations were formulated with regard to scanner requirements, breathing protocols, and reporting.

  2. A New Approach to Image Fusion Based on Cokriging

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.

    2005-01-01

    We consider the image fusion problem involving remotely sensed data, and introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is first studied using PCA- and wavelet-based fusion. We then propose utilizing a geostatistics-based interpolation method called cokriging as a new approach for image fusion.
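
    Of the baselines mentioned, PCA-weighted fusion is compact enough to sketch: weight each source by the loadings of the first principal component of the joint pixel covariance. A generic two-band sketch, not the ISFS code:

        import numpy as np

        def pca_fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
            """Fuse two co-registered bands with first-principal-component weights."""
            cov = np.cov(np.stack([a.ravel(), b.ravel()]))
            vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
            w = np.abs(vecs[:, -1])
            w /= w.sum()
            return w[0] * a + w[1] * b

        rng = np.random.default_rng(5)
        x, y = rng.random((64, 64)), rng.random((64, 64))
        print(pca_fuse(x, y).shape)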

  3. Nighttime images fusion based on Laplacian pyramid

    NASA Astrophysics Data System (ADS)

    Wu, Cong; Zhan, Jinhao; Jin, Jicheng

    2018-02-01

    This paper describes weighted-average fusion, image pyramid fusion, and wavelet transform fusion, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of the different fusion methods. Experiments showed that the Laplacian pyramid fusion algorithm is well suited to fusing nighttime images: it can reduce halos while preserving image details.
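
    A generic version of Laplacian pyramid exposure fusion can be written compactly: build a pyramid for each exposure, average the coarsest level, take the larger-magnitude detail coefficient everywhere else, then collapse. The sketch below (OpenCV assumed, synthetic exposures) shows the standard technique rather than the paper's exact rules:

        import cv2
        import numpy as np

        def laplacian_pyramid(img, levels=4):
            gp = [img.astype(np.float32)]
            for _ in range(levels):
                gp.append(cv2.pyrDown(gp[-1]))
            lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
                  for i in range(levels)]
            return lp + [gp[-1]]                  # detail levels..., coarse residual

        def fuse(a, b, levels=4):
            la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
            details = [np.where(np.abs(x) >= np.abs(y), x, y)
                       for x, y in zip(la[:-1], lb[:-1])]        # max-abs details
            out = 0.5 * (la[-1] + lb[-1])                        # average base level
            for detail in reversed(details):
                out = cv2.pyrUp(out, dstsize=detail.shape[1::-1]) + detail
            return np.clip(out, 0, 255).astype(np.uint8)

        rng = np.random.default_rng(6)
        dark = (rng.random((256, 256)) * 80).astype(np.float32)          # short exposure
        bright = np.clip(dark * 3.0 + 40.0, 0, 255).astype(np.float32)   # long exposure
        cv2.imwrite("fused.png", fuse(dark, bright))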

  4. AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis

    PubMed Central

    Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H

    2006-01-01

    Background The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions; a software package, StarryNite, processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. Results We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space filling models and tree-based expression patterning, that can be used to extract biological significance from the data. Conclusion By pairing a fast lineaging program written in C with a user interface program written in Java we have produced a powerful software suite for exploring embryonic development. PMID:16740163

  5. AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis.

    PubMed

    Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H

    2006-06-01

    The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions; a software package, StarryNite, processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space filling models and tree-based expression patterning, that can be used to extract biological significance from the data. By pairing a fast lineaging program written in C with a user interface program written in Java we have produced a powerful software suite for exploring embryonic development.

  6. Evaluation of multimodality imaging using image fusion with ultrasound tissue elasticity imaging in an experimental animal model.

    PubMed

    Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A

    2014-01-01

    To evaluate ultrasound tissue elasticity imaging in comparison with multimodality imaging using image fusion with magnetic resonance imaging (MRI), and with conventional grey-scale imaging plus additional elasticity ultrasound, in an experimental small-animal squamous cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey-scale including elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3T MR. For image fusion, the contrast-enhanced MRI DICOM data set was uploaded to the ultrasound device (GE Logiq E9), which has a magnetic field generator, a linear array transducer (6-15 MHz), and a dedicated software package that can track transducers by means of a positioning system. Conventional grey-scale and elasticity imaging were integrated into the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba elasticity score, in which the colors red and green are assigned to areas of soft tissue and blue indicates hard tissue. In all cases, successful image fusion and plane registration with MRI and ultrasound imaging, including grey-scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm3. 4/12 rats were evaluated with score I, 5/12 rats with score II, and 3/12 rats with score III. These findings correlated closely with small necroses present in the tumor on the fused MRI. None of the score II or III lesions was visible on conventional grey-scale imaging. In our small study group, ultrasound tissue elasticity imaging enabled reliable differentiation between different tumor tissue areas when compared against image fusion with MRI. Ultrasound tissue elasticity imaging might therefore be used for fast detection of tumor response in the future, whereas conventional grey-scale imaging alone could not provide this additional information. By using standard contrast-enhanced MRI images for reliable and reproducible slice positioning, the strongly user-dependent limitation of ultrasound tissue elasticity imaging may be overcome, especially for comparisons between baseline and follow-up measurements.

  7. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images, and contain the information of both. The fused images help observers understand multichannel images comprehensively. However, simple fusion may lose target information, because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. Infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fused images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information about the scenes.

  8. The usefulness of (18)F-FDG PET/MRI fusion image in diagnosing pancreatic tumor: comparison with (18)F-FDG PET/CT.

    PubMed

    Nagamachi, Shigeki; Nishii, Ryuichi; Wakamatsu, Hideyuki; Mizutani, Youichi; Kiyohara, Shogo; Fujita, Seigo; Futami, Shigemi; Sakae, Tatefumi; Furukoji, Eiji; Tamura, Shozo; Arita, Hideo; Chijiiwa, Kazuo; Kawai, Keiichi

    2013-07-01

    This study aimed to demonstrate the feasibility of retrospectively fused (18)F-FDG PET and MRI (PET/MRI fusion images) in diagnosing pancreatic tumors, in particular differentiating malignant tumors from benign lesions. In addition, we evaluated additional findings characterizing pancreatic lesions on FDG-PET/MRI fusion images. We retrospectively analyzed 119 patients: 96 cancers and 23 benign lesions. FDG-PET/MRI fusion images (PET/T1WI or PET/T2WI) were generated by dedicated software from 1.5 Tesla (T) MRI images and FDG-PET images. These images were interpreted by two well-trained radiologists without knowledge of clinical information and compared with FDG-PET/CT images. We compared the differential diagnostic capability of PET/CT and FDG-PET/MRI fusion images, and evaluated additional findings such as tumor structure and tumor invasion. FDG-PET/MRI fusion images significantly improved accuracy compared with that of PET/CT (96.6 vs. 86.6 %). As an additional finding, dilatation of the main pancreatic duct was noted in 65.9 % of solid types and in 22.6 % of cystic types on PET/MRI-T2 fusion images. Similarly, encasement of adjacent vessels was noted in 43.1 % of solid types and in 6.5 % of cystic types. Particularly in cystic types, intra-tumor structures such as mural nodules (35.4 %) or intra-cystic septa (74.2 %) were detected additionally. Besides, PET/MRI-T2 fusion images could detect additional benign cystic lesions (9.1 % in solid types and 9.7 % in cystic types) that were not noted by PET/CT. In diagnosing pancreatic lesions, FDG-PET/MRI fusion imaging was useful in differentiating pancreatic cancer from benign lesions. Furthermore, it was helpful in evaluating the relationship between lesions and surrounding tissues, as well as in detecting additional benign cysts.

  9. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878

  10. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.

  11. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed and an efficient salient feature extraction method is presented in this paper; feature extraction is the primary focus of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image that retains the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared with previous fusion methods, and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
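
    The guided filter at the core of such methods is itself only a handful of box filters, in He et al.'s formulation. A compact generic sketch (not the authors' implementation), self-guided here so that edges are preserved while flat regions are smoothed:

        import cv2
        import numpy as np

        def guided_filter(guide, src, radius=8, eps=1e-3):
            """Edge-preserving filtering of src steered by guide (floats in [0, 1])."""
            ksize = (2 * radius + 1, 2 * radius + 1)
            mean = lambda x: cv2.blur(x, ksize)
            mean_I, mean_p = mean(guide), mean(src)
            cov_Ip = mean(guide * src) - mean_I * mean_p
            var_I = mean(guide * guide) - mean_I * mean_I
            a = cov_Ip / (var_I + eps)          # per-pixel linear coefficients
            b = mean_p - a * mean_I
            return mean(a) * guide + mean(b)

        rng = np.random.default_rng(7)
        img = rng.random((128, 128)).astype(np.float32)
        print(guided_filter(img, img).shape)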

  12. Sensor fusion for synthetic vision

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Larimer, J.; Ahumada, A.

    1991-01-01

    Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain data base to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described which facilitates fusion of images differing in resolution within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.

  13. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We applied the conventional and the improved wavelet-transform-based fusion algorithms to fuse two images of the human body and evaluated the fusion results with a quality evaluation method. Experimental results show that this algorithm can effectively retain the detail information of the original images and enhance their edge and texture features, and that the new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
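
    The conventional wavelet fusion baseline that this paper improves on can be sketched with PyWavelets: average the approximation coefficients and keep the larger-magnitude detail coefficients. A minimal generic sketch:

        import numpy as np
        import pywt

        def wavelet_fuse(a: np.ndarray, b: np.ndarray, wavelet="db2", level=3):
            ca = pywt.wavedec2(a, wavelet, level=level)
            cb = pywt.wavedec2(b, wavelet, level=level)
            fused = [0.5 * (ca[0] + cb[0])]              # approximation: average
            for da, db in zip(ca[1:], cb[1:]):           # details: max-abs
                fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                                   for x, y in zip(da, db)))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(8)
        img1, img2 = rng.random((128, 128)), rng.random((128, 128))
        print(wavelet_fuse(img1, img2).shape)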

  14. The CTBTO/WMO Atmospheric Backtracking Response System and the Data Fusion Exercise 2007

    DTIC Science & Technology

    2008-09-01

    sensitivity of the measurement (sample) towards releases at all points on the globe. For a more comprehensive description, see the presentation from last... localization information, including the error ellipse, is comparatively small. The red spots on the right image mark seismic events that occurred on... hours indicated in the calendar of the PTS post-processing software WEB-GRAPE. 2008 Monitoring Research Review: Ground-Based Nuclear

  15. Investigations of image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information about objects in the space monitored by the system.

  16. Objective quality assessment for multiexposure multifocus image fusion.

    PubMed

    Hassen, Rania; Wang, Zhou; Salama, Magdy M A

    2015-09-01

    There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.

  17. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    PubMed Central

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229

  18. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.

    PubMed

    Bai, Xiangzhi

    2015-07-15

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.

  19. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-Modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, pixel values were selected based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with those of three other fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
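
    The Sum-Modified-Laplacian focus measure used here (and in several other entries above) sums absolute second differences over a window; larger values indicate sharper focus. A minimal numpy/SciPy sketch:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sum_modified_laplacian(img: np.ndarray, window: int = 5) -> np.ndarray:
            f = img.astype(np.float64)
            ml = np.zeros_like(f)
            ml[1:-1, :] += np.abs(2 * f[1:-1, :] - f[:-2, :] - f[2:, :])  # vertical
            ml[:, 1:-1] += np.abs(2 * f[:, 1:-1] - f[:, :-2] - f[:, 2:])  # horizontal
            return uniform_filter(ml, size=window) * window * window      # windowed sum

        rng = np.random.default_rng(9)
        sharp = rng.random((64, 64))
        blurred = uniform_filter(sharp, size=7)
        print(sum_modified_laplacian(sharp).mean() > sum_modified_laplacian(blurred).mean())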

  20. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion processing, low-contrast targets may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and gray-level fusion of the source images is performed in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray result with suitable pseudo-colors. Experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  1. A novel framework of tissue membrane systems for image fusion.

    PubMed

    Zhang, Zulin; Yi, Xinzhong; Peng, Hong

    2014-01-01

    This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied in comparison with several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.

  2. Research on HDR image fusion algorithm based on Laplace pyramid weight transform with extreme low-light CMOS

    NASA Astrophysics Data System (ADS)

    Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan

    2015-10-01

    Extreme-low-light CMOS sensors, a new type of solid-state image sensor, have been widely applied in the field of night vision. However, if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the bright and the low-light regions of the scene. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient; the remaining layers, which represent the edge feature information of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results for four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies, using information entropy, average gradient, and standard deviation as three objective evaluation parameters for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while keeping high entropy, and verification of the algorithm's features suggests good prospects for its further application. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.

  3. Stereotactic radiation treatment planning and follow-up studies involving fused multimodality imaging.

    PubMed

    Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P

    2004-11-01

    Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.

  4. Optimisation of coronary vascular territorial 3D echocardiographic strain imaging using computed tomography: a feasibility study using image fusion.

    PubMed

    de Knegt, Martina Chantal; Fuchs, A; Weeke, P; Møgelvang, R; Hassager, C; Kofoed, K F

    2016-12-01

    Current echocardiographic assessments of coronary vascular territories use the 17-segment model and are based on general assumptions of coronary vascular distribution. Fusion of 3D echocardiography (3DE) with multidetector computed tomography (MDCT)-derived coronary anatomy may provide a more accurate assessment of left ventricular (LV) territorial function. We aimed to test the feasibility of MDCT and 3DE fusion and to compare territorial longitudinal strain (LS) using the 17-segment model and an MDCT-guided vascular model. 28 patients underwent 320-slice MDCT and transthoracic 3DE on the same day, followed by invasive coronary angiography. MDCT (Aquilion ONE, ViSION Edition, Toshiba Medical Systems) and 3DE apical full-volume images (Artida, Toshiba Medical Systems) were fused offline using a dedicated workstation (prototype fusion software, Toshiba Medical Systems). 3DE/MDCT image alignment was assessed by 3 readers using a 4-point scale. Territorial LS was assessed using the 17-segment model and the MDCT-guided vascular model in territories supplied by significantly stenotic and non-significantly stenotic vessels. Successful 3DE/MDCT image alignment was obtained in 86 and 93 % of cases for reader one and for readers two and three, respectively. Fair agreement on the quality of automatic image alignment (intra-class correlation = 0.40) and the success of manual image alignment (Fleiss' Kappa = 0.40) among the readers was found. In territories supplied by non-significantly stenotic left circumflex arteries, LS was significantly higher in the MDCT-guided vascular model compared to the 17-segment model: -15.00 ± 7.17 (mean ± standard deviation) versus -11.87 ± 4.09 (p < 0.05). Fusion of MDCT and 3DE is feasible and provides physiologically meaningful displays of myocardial function.

  5. Hybrid Vision-Fusion system for whole-body scintigraphy.

    PubMed

    Barjaktarović, Marko; Janković, Milica M; Jeremić, Marija; Matović, Milovan

    2018-05-01

    Radioiodine therapy in the treatment of differentiated thyroid carcinoma (DTC) is used in clinical practice for the ablation of thyroid residues and/or destruction of tumour tissue. Whole-body scintigraphy for visualization of the spatial 131I distribution performed by a gamma camera (GC) is a standard procedure in DTC patients after application of radioiodine therapy. A common problem is the precise topographic localization of regions where radioiodine is accumulated, even in SPECT imaging. SPECT/CT can provide precise topographic localization of regions where radioiodine is accumulated, but it is often unavailable, especially in developing countries, because of the high price of the equipment. In this paper, we present a Vision-Fusion system as an affordable solution for 1) acquiring an optical whole-body image during routine whole-body scintigraphy and 2) fusing gamma and optical images (also available for the auto-contour mode of the GC). The estimated prediction error for image registration is 1.84 mm. The validity of the fusion was tested by performing simultaneous optical and scintigraphic image acquisition of a bar phantom. The result shows that the fusing process has only a slight influence, below the spatial resolution of the GC (mean value ± standard deviation: 1.24 ± 0.22 mm). The Vision-Fusion system was used in radioiodine post-therapeutic treatment, and 17 patients were followed (11 women and 6 men, with an average age of 48.18 ± 13.27 years). Visual inspection showed no misregistration. Based on our first clinical experience, we noticed that the Vision-Fusion system could be very useful for improving the diagnostic capability of whole-body scintigraphy after radioiodine therapy. Additionally, the proposed Vision-Fusion software can be used as an upgrade for any GC to improve localization of thyroid/tumour tissue. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.

  7. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework utilizing the area-based standard deviation in the dual-tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.

  8. [Development of a Surgical Navigation System with Beam Split and Fusion of the Visible and Near-Infrared Fluorescence].

    PubMed

    Yang, Xiaofeng; Wu, Wei; Wang, Guoan

    2015-04-01

    This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design was based on the principle of near-infrared fluorescence molecular imaging, using in vivo fluorescence excitation technology, multi-channel spectral camera technology and image fusion software technology. A visible and near-infrared ring-LED excitation source, multi-channel band-pass filters, a two-CCD spectral camera and computer systems were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When the near-infrared fluorescent agent was injected, the system could simultaneously display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field. The system can identify lymphatic vessels, lymph nodes and tumor margins that the surgeon cannot see with the naked eye intraoperatively. It can effectively guide the surgeon in removing tumor tissue, significantly improving the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.

  9. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to complete the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
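
    As a minimal illustration of the final weighted-fusion step only (not the JSR-based saliency detection itself), the sketch below combines two registered source images using precomputed saliency maps as pixel-wise weights; all array names are hypothetical.

        import numpy as np

        def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis):
            """Pixel-wise weighted fusion driven by saliency maps.

            ir, vis        : registered source images (float arrays, same shape)
            sal_ir, sal_vis: non-negative saliency maps for each source
            """
            w = sal_ir / (sal_ir + sal_vis + 1e-12)   # normalize weights to [0, 1]
            return w * ir + (1.0 - w) * vis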

  10. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective in classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Comparative Effectiveness of Targeted Prostate Biopsy Using Magnetic Resonance Imaging Ultrasound Fusion Software and Visual Targeting: a Prospective Study.

    PubMed

    Lee, Daniel J; Recabal, Pedro; Sjoberg, Daniel D; Thong, Alan; Lee, Justin K; Eastham, James A; Scardino, Peter T; Vargas, Hebert Alberto; Coleman, Jonathan; Ehdaie, Behfar

    2016-09-01

    We compared the diagnostic outcomes of magnetic resonance-ultrasound fusion and visually targeted biopsy for targeting regions of interest on prostate multiparametric magnetic resonance imaging. Patients presenting for prostate biopsy with regions of interest on multiparametric magnetic resonance imaging underwent magnetic resonance imaging targeted biopsy. For each region of interest 2 visually targeted cores were obtained, followed by 2 cores using a magnetic resonance-ultrasound fusion device. Our primary end point was the difference in the detection of high grade (Gleason 7 or greater) and any grade cancer between visually targeted and magnetic resonance-ultrasound fusion, investigated using McNemar's method. Secondary end points were the difference in detection rate by biopsy location using a logistic regression model and the difference in median cancer length using the Wilcoxon signed rank test. We identified 396 regions of interest in 286 men. The difference in the detection of high grade cancer between magnetic resonance-ultrasound fusion biopsy and visually targeted biopsy was -1.4% (95% CI -6.4 to 3.6, p=0.6) and for any grade cancer the difference was 3.5% (95% CI -1.9 to 8.9, p=0.2). Median cancer length detected by magnetic resonance-ultrasound fusion and visually targeted biopsy was 5.5 vs 5.8 mm, respectively (p=0.8). Magnetic resonance-ultrasound fusion biopsy detected 15% more cancers in the transition zone (p=0.046) and visually targeted biopsy detected 11% more high grade cancer at the prostate base (p=0.005). Only 52% of all high grade cancers were detected by both techniques. We found no evidence of a significant difference in the detection of high grade or any grade cancer between visually targeted and magnetic resonance-ultrasound fusion biopsy. However, the performance of each technique varied in specific biopsy locations and the outcomes of both techniques were complementary. Combining visually targeted biopsy and magnetic resonance-ultrasound fusion biopsy may optimize the detection of prostate cancer. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  12. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into original-size blocks using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

  13. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of the lens's limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion can effectively solve this problem, but block-based multi-focus fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm that takes LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. Besides, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted edge details in the focused regions of the source images.
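
    For reference, the standard SSIM index on which LUE-SSIM builds is computed between two image patches x and y as follows (this is the widely published definition, not a formula taken from this particular paper):

        \mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

    where \mu_x and \mu_y are the patch means, \sigma_x^2 and \sigma_y^2 the variances, \sigma_{xy} the covariance, and C_1, C_2 small constants that stabilize the division.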

  14. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial-quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product - both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661. [3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
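
    A minimal sketch of the IHS component-substitution idea at the heart of such fusion schemes: the MS intensity is replaced by the histogram-matched Pan band and the difference is injected into every band. This is the textbook additive (fast) IHS formulation, not the authors' exact hybrid pipeline; all parameter choices are illustrative.

        import numpy as np

        def ihs_pansharpen(ms, pan):
            """Fast additive IHS pansharpening.

            ms  : (H, W, 3) low-resolution MS image upsampled to the Pan grid, float
            pan : (H, W) high-resolution panchromatic band, float
            """
            intensity = ms.mean(axis=2)
            # match Pan to the intensity component's mean/std before substitution
            pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) \
                    * intensity.std() + intensity.mean()
            delta = pan_m - intensity
            return ms + delta[..., None]   # inject spatial detail into every band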

  15. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusion of the low- and high-frequency components. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with image fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT), as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been verified on a clinical example: images of a woman affected by a recurrent tumor. PMID:25214889

  16. Quality assessment of remote sensing image fusion using feature-based fourth-order correlation coefficient

    NASA Astrophysics Data System (ADS)

    Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing

    2016-04-01

    In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment index called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the quality-with-no-reference (QNR) index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fused images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
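
    For context, the ERGAS index mentioned above has the following standard definition from the pansharpening literature (not reproduced from this paper):

        \mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{k=1}^{N}\frac{\mathrm{RMSE}(B_k)^2}{\mu_k^2}}

    where h/l is the ratio of PAN to MS pixel sizes, N the number of bands, RMSE(B_k) the root-mean-square error of band k against the reference, and \mu_k the mean of band k; lower values indicate better spectral quality.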

  17. Remote sensing fusion based on guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhao, Wenfei; Dai, Qinling; Wang, Leiguang

    2015-12-01

    In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show the excellent performance of the proposed method.
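
    The guided filter underlying such approaches is a closed-form local linear model. The sketch below is the standard box-filter formulation (He et al., 2010), which fusion schemes of this kind typically use to build detail layers or weight maps; window radius and regularization are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(I, p, radius=8, eps=1e-4):
            """Standard guided filter: the output is a locally linear transform
            of the guidance image I that approximates the input p."""
            win = 2 * radius + 1
            mean_I = uniform_filter(I, win)
            mean_p = uniform_filter(p, win)
            corr_Ip = uniform_filter(I * p, win)
            corr_II = uniform_filter(I * I, win)

            var_I = corr_II - mean_I * mean_I
            cov_Ip = corr_Ip - mean_I * mean_p

            a = cov_Ip / (var_I + eps)        # per-window linear coefficients
            b = mean_p - a * mean_I
            return uniform_filter(a, win) * I + uniform_filter(b, win)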

  18. Computed tomography-magnetic resonance image fusion: a clinical evaluation of an innovative approach for improved tumor localization in primary central nervous system lesions.

    PubMed

    Lattanzi, J P; Fein, D A; McNeeley, S W; Shaer, A H; Movsas, B; Hanks, G E

    1997-01-01

    We describe our initial experience with the AcQSim (Picker International, St. David, PA) computed tomography-magnetic resonance imaging (CT-MRI) fusion software in eight patients with intracranial lesions. MRI data are electronically integrated into the CT-based treatment planning system. Since MRI is superior to CT in identifying intracranial abnormalities, we evaluated the precision and feasibility of this new localization method. Patients initially underwent CT simulation from C2 to the most superior portion of the scalp. T2 and post-contrast T1-weighted MRI of this area was then performed. Patient positioning was duplicated utilizing a head cup and bridge of nose to forehead angle measurements. First, a gross tumor volume (GTV) was identified utilizing the CT (CT/GTV). The CT and MRI scans were subsequently fused utilizing a point pair matching method and a second GTV (CT-MRI/GTV) was contoured with the aid of both studies. The fusion process was uncomplicated and completed in a timely manner. Volumetric analysis revealed the CT-MRI/GTV to be larger than the CT/GTV in all eight cases. The mean CT-MRI/GTV was 28.7 cm3 compared to 16.7 cm3 by CT alone. This translated into a 72% increase in the radiographic tumor volume by CT-MRI. A simulated dose-volume histogram in two patients revealed that marginal portions of the lesion, as identified by CT and MRI, were not included in the high dose treatment volume as contoured with the use of CT alone. Our initial experience with the fusion software demonstrated an improvement in tumor localization with this technique. Based on these patients the use of CT alone for treatment planning purposes in central nervous system (CNS) lesions is inadequate and would result in an unacceptable rate of marginal misses. The importation of MRI data into three-dimensional treatment planning is therefore crucial to accurate tumor localization. The fusion process simplifies and improves precision of this task.

  19. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

    PubMed Central

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-01-01

    This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectrum features of fused images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. It involves segmenting regions in the IR image by significance and identifying the target region and the background region; then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details, fusion is conducted based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fused image with good target indication and rich scene detail. It also gives fusion results superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137

  20. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.

    PubMed

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-05-15

    This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectrum features of fused images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. It involves segmenting regions in the IR image by significance and identifying the target region and the background region; then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details, fusion is conducted based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fused image with good target indication and rich scene detail. It also gives fusion results superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems.

  1. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that yields a fused image with better contrast and more detail, suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended for fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.

  2. Fusion of imaging and nonimaging data for surveillance aircraft

    NASA Astrophysics Data System (ADS)

    Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre

    1997-06-01

    This paper describes a phased incremental integration approach for applying image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a Link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent proof-of-concept research activities feed techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).

  3. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate information from different imaging modalities to obtain a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting an IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
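
    A small sketch of a Haar-wavelet-based energy selection rule of the kind described here (the exact HWE definition in the paper may differ): each candidate low-frequency block is scored by the energy of its Haar detail coefficients, and the block from the source with higher energy is kept.

        import numpy as np
        import pywt

        def haar_energy(a):
            """Energy of the Haar detail subbands of a 2-D array."""
            _, (cH, cV, cD) = pywt.dwt2(a, 'haar')
            return float(np.sum(cH**2 + cV**2 + cD**2))

        def fuse_lowfreq_hwe(lf1, lf2, win=8):
            """Block-wise selection of low-frequency coefficients by Haar energy."""
            fused = lf1.copy()
            for i in range(0, lf1.shape[0] - win + 1, win):
                for j in range(0, lf1.shape[1] - win + 1, win):
                    b1 = lf1[i:i+win, j:j+win]
                    b2 = lf2[i:i+win, j:j+win]
                    if haar_energy(b2) > haar_energy(b1):
                        fused[i:i+win, j:j+win] = b2
            return fused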

  4. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  5. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer's disease.

    PubMed

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-07-01

    Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  6. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  7. Multimodal medical image fusion by combining gradient minimization smoothing filter and non-subsampled directional filter bank

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang

    2018-04-01

    A new algorithm is proposed for medical image fusion in this paper, which combines a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, NSDFB is applied to decompose each detail image into multiple directional sub-images, which are fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.

  8. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. Extensive experiments on multimodal medical images show that compared with numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  9. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. Extensive experiments on multimodal medical images show that compared with numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.

  10. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

    Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, the source images are decomposed by SGWT in its transform domain; the proposed approach not only effectively preserves the details of the different source images, but also excellently represents their irregular areas. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Kalpagam; Liu, Jeff; Kohli, Kirpal

    Purpose: Fusion of electrical impedance tomography (EIT) with computed tomography (CT) can be useful as a clinical tool for providing additional physiological information about tissues, but requires suitable fusion algorithms and validation procedures. This work explores the feasibility of fusing EIT and CT images using an algorithm for coregistration. The imaging performance is validated through feature space assessment on phantom contrast targets. Methods: EIT data were acquired by scanning a phantom using a circuit, configured for injecting current through 16 electrodes, placed around the phantom. A conductivity image of the phantom was obtained from the data using the electrical impedance and diffuse optical tomography reconstruction software (EIDORS). A CT image of the phantom was also acquired. The EIT and CT images were fused using a region of interest (ROI) coregistration fusion algorithm. Phantom imaging experiments were carried out on objects of different contrasts, sizes, and positions. The conductive medium of the phantoms was made of a tissue-mimicking bolus material that is routinely used in clinical radiation therapy settings. To validate the imaging performance in detecting different contrasts, the ROI of the phantom was filled with distilled water and normal saline. Spatially separated cylindrical objects of different sizes were used for validating the imaging performance in multiple target detection. Analyses of the CT, EIT and the EIT/CT phantom images were carried out based on the variations of contrast, correlation, energy, and homogeneity, using a gray level co-occurrence matrix (GLCM). A reference image of the phantom was simulated using EIDORS, and the performances of the CT and EIT imaging systems were evaluated and compared against the performance of the EIT/CT system using various feature metrics, detectability, and structural similarity index measures. Results: In detecting distilled and normal saline water in bolus medium, EIT as a stand-alone imaging system showed contrast discrimination of 47%, while the CT imaging system showed a discrimination of only 1.5%. The structural similarity index measure showed a drop of 24% with EIT imaging compared to CT imaging. The average detectability measure for CT imaging was found to be 2.375 ± 0.19 before fusion. After complementing with EIT information, the detectability measure increased to 11.06 ± 2.04. Based on the feature metrics, the functional imaging quality of CT and EIT were found to be 2.29% and 86%, respectively, before fusion. Structural imaging quality was found to be 66% for CT and 16% for EIT. After fusion, functional imaging quality improved in CT imaging from 2.29% to 42% and the structural imaging quality of EIT imaging changed from 16% to 66%. The improvement in image quality was also observed in detecting objects of different sizes. Conclusions: The authors found a significant improvement in the contrast detectability performance of CT imaging when complemented with functional imaging information from EIT. Along with the feature assessment metrics, the concept of complementing CT with EIT imaging can lead to an EIT/CT imaging modality which might fully utilize the functional imaging abilities of EIT imaging, thereby enhancing the quality of care in the areas of cancer diagnosis and radiotherapy treatment planning.
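
    The GLCM texture features used for this assessment (contrast, correlation, energy, homogeneity) can be computed with standard tools. A minimal sketch using scikit-image follows; the function names are from skimage >= 0.19 (older releases spell them greycomatrix/greycoprops), and the distance/angle choices are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(img_u8, distances=(1,), angles=(0, np.pi / 2)):
            """Compute GLCM-based texture features of an 8-bit grayscale image."""
            glcm = graycomatrix(img_u8, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            # average each property over all distance/angle combinations
            return {prop: float(graycoprops(glcm, prop).mean())
                    for prop in ('contrast', 'correlation',
                                 'energy', 'homogeneity')}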

  12. A transversal approach for patch-based label fusion via matrix completion

    PubMed Central

    Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang

    2015-01-01

    Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
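
    A minimal sketch of the reconstruction-style flavor of patch-based label fusion for a single target point: atlas patches are weighted by their similarity to the target patch, and the weights then vote on the label. The Gaussian similarity-to-weight mapping below is one common choice, not the paper's matrix-completion method; all names are hypothetical.

        import numpy as np

        def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=0.5):
            """Weighted-average label fusion for one target point.

            target_patch  : (d,) flattened intensity patch around the target point
            atlas_patches : (n, d) corresponding patches from n atlases
            atlas_labels  : (n,) candidate labels from the atlases
            """
            d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma**2 * target_patch.size))
            w /= w.sum() + 1e-12
            # vote: accumulate weight per label and pick the heaviest
            labels = np.unique(atlas_labels)
            scores = [w[atlas_labels == lab].sum() for lab in labels]
            return labels[int(np.argmax(scores))]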

  13. Digital diagnosis of medical images

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Kuismin, Raimo; Jormalainen, Raimo; Dastidar, Prasun; Frey, Harry; Eskola, Hannu

    2001-08-01

    The popularity of digital imaging devices and PACS installations has increased during the last years. Still, images are analyzed and diagnosed using conventional techniques. Our research group began to study the requirements for digital image diagnostic methods to be applied together with PACS systems. The research focused on various image analysis procedures (e.g., segmentation, volumetry, 3D visualization, image fusion, anatomic atlas, etc.) that could be useful in medical diagnosis. We have developed Image Analysis software (www.medimag.net) to enable several image-processing applications in medical diagnosis, such as volumetry, multimodal visualization, and 3D visualization. We have also developed a commercial scalable image archive system (ActaServer, supports DICOM) based on component technology (www.acta.fi), and several telemedicine applications. All the software and systems operate in the NT environment and are in clinical use in several hospitals. The analysis software has been applied in clinical work and utilized in numerous patient cases (500 patients). This method has been used in the diagnosis, therapy and follow-up of various diseases of the central nervous system (CNS), respiratory system (RS) and human reproductive system (HRS). In many of these diseases, e.g. Systemic Lupus Erythematosus (CNS), nasal airway diseases (RS) and ovarian tumors (HRS), these methods have been used for the first time in clinical work. According to our results, digital diagnosis improves diagnostic capabilities, and together with PACS installations it will become a standard tool during the next decade by enabling more accurate diagnosis and patient follow-up.

  14. Implementing and validating of pan-sharpening algorithms in open-source software

    NASA Astrophysics Data System (ADS)

    Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco

    2017-10-01

    Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused enhanced images. The objective of this research is three-fold: to implement three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt) in R; to apply these techniques to the merging of multispectral and panchromatic images from five different images with different spatial resolutions; and to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards the qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion; in the case of the Landsat-8 and Natmur-08 images, the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm; only for the Landsat-8 image did the GS fusion present the best result. In the evaluation of the spectral components, HPF results tended to be better and PCA results worse, while the opposite was the case for the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate, a priori, a given algorithm as the best, not only because of the different characteristics of the sensors, but also because of the different atmospheric conditions and peculiarities of the different study areas, among other reasons.
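
    Of the three techniques, the High Pass Filter method is the simplest to sketch: high-frequency detail extracted from the Pan band is injected into each resampled MS band. The Python sketch below illustrates the principle only; the kernel size and injection gain are illustrative choices, not the parameters used in the study (whose implementation is in R).

        import numpy as np
        from scipy.ndimage import uniform_filter

        def hpf_pansharpen(ms, pan, kernel=5, gain=1.0):
            """High Pass Filter pansharpening.

            ms  : (H, W, B) MS image already resampled to the Pan grid, float
            pan : (H, W) panchromatic band, float
            """
            detail = pan - uniform_filter(pan, kernel)  # high-pass residual of Pan
            return ms + gain * detail[..., None]        # inject into every band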

  15. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. At first, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered as potential target locations. After that, the region surrounding the target area is segmented as the background region. Then image fusion is locally applied to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to sort the feature set, and the most discriminative feature is selected for the whole image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves a competitive performance compared with other fusion algorithms at a relatively low computational cost.
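
    The hotspot-extraction step can be illustrated with a compact fuzzy c-means routine: pixels of the IR image are clustered by intensity, and the cluster with the highest center is treated as the candidate hotspot region. This is a generic FCM implementation using the standard update equations, not the authors' exact configuration; cluster count and fuzzifier are illustrative.

        import numpy as np

        def fcm_hotspots(ir, n_clusters=3, m=2.0, iters=50, seed=0):
            """Fuzzy c-means on pixel intensities; returns a hotspot membership map."""
            x = ir.reshape(-1, 1).astype(float)
            rng = np.random.default_rng(seed)
            u = rng.random((x.shape[0], n_clusters))
            u /= u.sum(axis=1, keepdims=True)             # fuzzy memberships

            for _ in range(iters):
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]    # (c, 1)
                d = np.abs(x - centers.T) + 1e-12                 # (n, c) distances
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)         # standard FCM update

            hottest = int(np.argmax(centers))             # brightest cluster
            return u[:, hottest].reshape(ir.shape)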

  16. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    PubMed

    Reena Benjamin, J; Jayasree, T

    2018-02-01

    In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated on three main key factors, namely structure preservation, edge preservation and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. The results also demonstrate that the proposed method enhances directional features as well as fine edge details, and reduces redundant details, artifacts and distortions.
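
    In PCA-based fusion, per-image weights are commonly taken from the leading eigenvector of the covariance between the two source images. A minimal sketch of that standard computation is shown below; it is the generic PCA fusion step, not the full cascaded pipeline of the paper.

        import numpy as np

        def pca_fusion(img1, img2):
            """Fuse two registered grayscale images with PCA-derived weights."""
            data = np.stack([img1.ravel(), img2.ravel()])   # (2, n) observations
            cov = np.cov(data)                              # 2x2 covariance matrix
            eigvals, eigvecs = np.linalg.eigh(cov)
            v = np.abs(eigvecs[:, np.argmax(eigvals)])      # principal eigenvector
            w = v / v.sum()                                 # normalize to weights
            return w[0] * img1 + w[1] * img2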

  17. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential in many application fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high frequency subbands and one low frequency subband. To improve the fusion performance, we design two new activity measures for the fusion of the lowpass and highpass subbands. These measures build on the fact that the human visual system (HVS) perceives image quality mainly according to some of the image's low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.

  18. Image fusion based on millimeter-wave for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui

    2010-11-01

    This paper describes a novel multi-sensor image fusion technology for concealed weapon detection (CWD). Because clothing is highly transparent at the millimeter wave band, a millimeter wave radiometer can image and distinguish contraband concealed beneath clothes, such as guns, knives and detonators. We therefore adopt passive millimeter wave (PMMW) imaging technology for airport security. However, owing to the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter wave image has low spatial resolution, which cannot meet the needs of practical application. Therefore, a visible image (VI), which has higher resolution, is fused with the millimeter wave image to enhance readability. Before the fusion, a novel image pre-processing step specific to the fusion of millimeter wave and visible images is adopted, and in the fusion process, multiresolution analysis (MRA) based on the Wavelet Transform (WT) is used. The experimental results show that this method has advantages in concealed weapon detection and has practical significance.
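
    A compact sketch of the wavelet-MRA fusion scheme described above, using PyWavelets: both registered images are decomposed, approximation coefficients are averaged, detail coefficients are merged by the max-absolute rule, and the fused image is reconstructed. The wavelet choice and decomposition level are illustrative, not taken from the paper.

        import numpy as np
        import pywt

        def wavelet_fusion(img1, img2, wavelet='db2', level=3):
            """Classic wavelet-domain fusion of two registered grayscale images."""
            c1 = pywt.wavedec2(img1, wavelet, level=level)
            c2 = pywt.wavedec2(img2, wavelet, level=level)

            fused = [0.5 * (c1[0] + c2[0])]          # average the approximations
            for d1, d2 in zip(c1[1:], c2[1:]):
                # max-absolute rule on each detail orientation (H, V, D)
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(d1, d2)))
            return pywt.waverec2(fused, wavelet)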

  19. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme without the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
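
    The recovery loop at the core of such schemes is an iterative Landweber update followed by a thresholding projection. The sketch below is a generic version (soft thresholding, fixed step size and threshold, dense random sensing matrix) that illustrates the principle only; the paper's smoothed variant adds Wiener-style smoothing and adaptive thresholds.

        import numpy as np

        def projected_landweber(y, A, n_iter=200, step=0.1, lam=0.05):
            """Recover a sparse signal x from y = A x via Landweber iterations
            projected by soft thresholding (an ISTA-style scheme)."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + step * A.T @ (y - A @ x)          # Landweber gradient step
                x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # project
            return x

        # tiny usage example with a hypothetical random Gaussian sensing matrix
        rng = np.random.default_rng(0)
        A = rng.normal(size=(64, 256)) / np.sqrt(64)
        x_true = np.zeros(256)
        x_true[[10, 100, 200]] = [1.0, -0.7, 0.5]
        x_hat = projected_landweber(A @ x_true, A)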

  20. Image fusion based on Bandelet and sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi

    2018-04-01

    The Bandelet transform can capture geometrically regular directions and geometric flow, while sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the new method performs better than the compared methods according to objective evaluation indexes and subjective visual effects.

  1. A Fusion Algorithm for GFP Image and Phase Contrast Image of Arabidopsis Cell Based on SFL-Contourlet Transform

    PubMed Central

    Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling

    2013-01-01

    A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization Contourlet transform (SFL-CT), this algorithm uses different fusion strategies for the different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively balance the color background and the gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. The experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716

  2. Imaging Research With Non-Periodic Multilayers for Inertial Confinement Fusion Diagnostic Experiments

    NASA Astrophysics Data System (ADS)

    L. Wang, F.; Mu, B. Z.; Wang, Z. S.; Gu, C. S.; Zhang, Z.; Qin, S. J.; Chen, L. Y.

    A grazing-incidence Kirkpatrick-Baez (K-B) microscope was designed for hard x-ray (8 keV; Cu Kα radiation) imaging in Inertial Confinement Fusion (ICF) diagnostic experiments. Ray-tracing software was used to simulate the optical system performance. The optimized theoretical resolution of the K-B microscope was about 2 μm, and better than 10 μm over a 200 μm field of view. Tungsten and boron carbide were chosen as the multilayer materials; the multilayer was deposited onto a silicon wafer substrate, and the reflectivity was measured by x-ray diffraction (XRD). The reflectivity of the supermirror was about 20% within a 0.3% bandwidth. An 8 keV Cu-target x-ray tube source was used in the x-ray imaging experiments, and x-ray images at 1x and 2x magnification were obtained.

  3. Hybrid-fusion SPECT/CT systems in parathyroid adenoma: Technological improvements and added clinical diagnostic value.

    PubMed

    Wong, K K; Chondrogiannis, S; Bowles, H; Fuster, D; Sánchez, N; Rampin, L; Rubello, D

    Nuclear medicine traditionally employs planar and single photon emission computed tomography (SPECT) imaging techniques to depict the biodistribution of radiotracers for the diagnostic investigation of a range of disorders of endocrine gland function. The usefulness of combining functional information with anatomy derived from computed tomography (CT), magnetic resonance imaging (MRI), and high resolution ultrasound (US) has long been appreciated, whether through visual side-by-side correlation or software-based co-registration. The emergence of hybrid SPECT/CT camera technology now allows the simultaneous acquisition of combined multi-modality imaging, with seamless fusion of 3D volume datasets. It is therefore not surprising that a growing literature describes the many advantages that contemporary SPECT/CT technology brings to radionuclide investigation of endocrine disorders, showing potential advantages for pre-operative localization of parathyroid adenomas to enable a minimally invasive surgical approach, especially in the presence of ectopic glands and in multiglandular disease. In conclusion, hybrid SPECT/CT imaging has become an essential tool for accurate diagnosis in the management of patients with hyperparathyroidism. Copyright © 2016 Elsevier España, S.L.U. y SEMNIM. All rights reserved.

  4. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

  5. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary, while the high-frequency coefficients are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. Experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in terms of both visual effect and objective evaluation.
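    As a point of reference for the IHS step described above, here is a minimal sketch of generalized-IHS (GIHS) component-substitution pansharpening; it is only the baseline that the NSST+SR method elaborates on, and the simple histogram matching of PAN to the intensity component is a common simplification, not this paper's exact procedure.

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    """Additive GIHS: inject (matched PAN - intensity) into every MS band.
    ms: (H, W, 3) upsampled RGB in [0, 1]; pan: (H, W) panchromatic."""
    intensity = ms.mean(axis=2)
    # Match PAN's mean/std to the intensity component before injection
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    return np.clip(ms + (pan_m - intensity)[..., None], 0.0, 1.0)
```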

  6. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and their performances were compared to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to ultrasound images and comparatively analyzed against the original images processed without it. Applying the DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not deliver the best noise reduction performance. Conversely, the image fusion method applied under SRAD-original input conditions (fusing the speckle reducing anisotropic diffusion result with the original image) preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance for the ultrasound images, and the proposed denoising technique was confirmed to have high potential for clinical application.

  7. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
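    The regional variance estimation used above for high-frequency components lends itself to a compact sketch; the window size and the choose-max rule below are assumptions in the spirit of the abstract, not the authors' exact arithmetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_variance(coef, size=5):
    """Local variance of a coefficient map in a size x size window."""
    mean = uniform_filter(coef, size)
    return uniform_filter(coef * coef, size) - mean * mean

def fuse_high_freq(coef_a, coef_b, size=5):
    """Keep, per pixel, the coefficient from the source with higher local
    activity (regional variance), a common rule for high-frequency bands."""
    mask = regional_variance(coef_a, size) >= regional_variance(coef_b, size)
    return np.where(mask, coef_a, coef_b)
```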

  8. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture; as a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images, causing color distortion and ringing effects. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can handle a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various settings, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
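    A minimal SWT-based multifocus fusion along these lines can be sketched with PyWavelets; the wavelet, level, and fusion rules (mean for approximations, max-absolute for details) are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np
import pywt

def swt_multifocus_fuse(images, wavelet="db2", level=2):
    """Fuse registered multi-focus images: average SWT approximations,
    keep the max-absolute detail coefficient across sources per pixel.
    Image dimensions must be multiples of 2**level for pywt.swt2."""
    dec = [pywt.swt2(im, wavelet, level=level) for im in images]
    fused = []
    for lvl in range(level):
        c_a = np.mean([d[lvl][0] for d in dec], axis=0)
        details = []
        for band in range(3):                       # cH, cV, cD
            stack = np.stack([d[lvl][1][band] for d in dec])
            idx = np.abs(stack).argmax(axis=0)      # sharpest source wins
            details.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append((c_a, tuple(details)))
    return pywt.iswt2(fused, wavelet)
```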

  9. Research on segmentation based on multi-atlas in brain MR image

    NASA Astrophysics Data System (ADS)

    Qian, Yuejing

    2018-03-01

    Accurate segmentation of specific tissues in brain MR images can be effectively achieved with multi-atlas-based segmentation methods, where the accuracy depends mainly on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas-based segmentation method for brain MR images. First, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion step, we propose a new algorithm that detects abnormal sparse patches and discards the corresponding abnormal sparse coefficients; fusion is based on the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM), and the majority voting method (MV). Our experimental results show that the proposed method is effective for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
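    The majority voting (MV) baseline the paper compares against is simple to state; the sketch below assumes integer label maps already registered to the target space, and is the baseline only, not the proposed sparse-patch method.

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel majority vote over registered atlas label maps.
    label_maps: list of integer arrays of identical shape."""
    stack = np.stack(label_maps)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Usage: three toy 2x2 atlas label maps
atlases = [np.array([[0, 1], [1, 2]]), np.array([[0, 1], [2, 2]]),
           np.array([[0, 0], [1, 2]])]
print(majority_vote(atlases))        # [[0 1] [1 2]]
```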

  10. A method based on IHS cylindrical transform model for quality assessment of image fusion

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have become a research issue both at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, existing assessment methods have two defects: on one hand, most indexes lack theoretical support for comparing different fusion methods; on the other hand, there is no uniform preference among most quantitative assessment indexes when they are applied to estimate fusion effects. That is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method that unifies spatial and spectral feature assessment. Therefore, on the basis of an approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and accords better with subjective assessment.
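    The correlation coefficient at the heart of the proposed assessment reduces to a standard computation; the sketch below shows the coefficient itself, applied to whichever IHS-cylinder components are being compared (the component extraction is the paper's contribution and is not reproduced here).

```python
import numpy as np

def correlation_coefficient(a, b):
    """Correlation coefficient between two images or between matching
    components (e.g., intensity or hue planes) of source and fused images."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```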

  11. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.

    PubMed

    Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing

    2012-04-01

    This paper presents an unsupervised, distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates information about spatial context in a novel fuzzy way to enhance the changed information and reduce the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and achieves better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than the preexisting approaches.
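    The two difference-image operators that the method fuses can be sketched directly; the epsilon guards and window size below are assumptions, and common textbook forms of the operators are used.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def log_ratio(im1, im2, eps=1e-6):
    """Log-ratio difference image: tolerant of multiplicative speckle."""
    return np.abs(np.log((im1 + eps) / (im2 + eps)))

def mean_ratio(im1, im2, size=3, eps=1e-6):
    """Mean-ratio difference image computed from local means, which
    enhances changed regions that the log-ratio operator can suppress."""
    m1 = uniform_filter(im1.astype(float), size)
    m2 = uniform_filter(im2.astype(float), size)
    return 1.0 - np.minimum(m1 / (m2 + eps), m2 / (m1 + eps))
```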

  12. Real-time tracking and virtual endoscopy in cone-beam CT-guided surgery of the sinuses and skull base in a cadaver model.

    PubMed

    Prisman, Eitan; Daly, Michael J; Chan, Harley; Siewerdsen, Jeffrey H; Vescan, Allan; Irish, Jonathan C

    2011-01-01

    Custom software was developed to integrate intraoperative cone-beam computed tomography (CBCT) images with endoscopic video for surgical navigation and guidance. A cadaveric head was used to assess the accuracy and potential clinical utility of the following functionality: (1) real-time tracking of the endoscope in intraoperative 3-dimensional (3D) CBCT; (2) projecting an orthogonal reconstructed CBCT image, at or beyond the endoscope, which is parallel to the tip of the endoscope corresponding to the surgical plane; (3) virtual reality fusion of endoscopic video and 3D CBCT surface rendering; and (4) overlay of preoperatively defined contours of anatomical structures of interest. Anatomical landmarks were contoured in CBCT of a cadaveric head. An experienced endoscopic surgeon was oriented to the software and asked to rate the utility of the navigation software in carrying out predefined surgical tasks. Utility was evaluated using a rating scale for: (1) safely completing the task; and (2) potential for surgical training. Surgical tasks included: (1) uncinectomy; (2) ethmoidectomy; (3) sphenoidectomy/pituitary resection; and (4) clival resection. CBCT images were updated following each ablative task. As a teaching tool, the software was evaluated as "very useful" for all surgical tasks. Regarding safety and task completion, the software was evaluated as "no advantage" for task (1), "minimal" for task (2), and "very useful" for tasks (3) and (4). Landmark identification for structures behind bone was "very useful" for both categories. The software increased surgical confidence in safely completing challenging ablative tasks by presenting real-time image guidance for highly complex ablative procedures. In addition, such technology offers a valuable teaching aid to surgeons in training. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  13. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing images of CT and different MR modalities are studied in this paper. Firstly, the CT and MR images are both transformed to nonsubsampled shearlet transform (NSST) domain. So the low-frequency components and high-frequency components are obtained. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach. And the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. PMID:29250134

  14. TU-E-BRB-00: Deformable Image Registration: Is It Right for Your Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning, motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for clinical DIR use. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning objectives: (1) compare and contrast commercial DIR software and QA options; (2) review clinical DIR workflow for retreatment; (3) understand uncertainties introduced by DIR; (4) review the proposed TG-132 recommendations.

  15. The Research on Dryland Crop Classification Based on the Fusion of SENTINEL-1A SAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.

    2018-04-01

    In recent years, the rapid upgrading and improvement of SAR sensors have provided beneficial complements to traditional optical remote sensing in terms of theory, technology, and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, with emphasis placed on dryland crop classification under a complex crop planting structure, taking corn and cotton as the research objects. Considering the differences among various data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey, and wavelet transform (WT) methods were compared with each other, and the GS and Brovey methods proved more applicable in the study area. The classification was then conducted using an object-oriented technique. For the GS and Brovey fusion images and the GF-1 optical image, the nearest-neighbour algorithm was adopted for supervised classification with the same training samples. Based on the sample plots in the study area, an accuracy assessment was subsequently conducted. The overall accuracy and kappa coefficient of the fusion images were all higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8%, with a kappa coefficient of 0.644. The results thus showed that the GS and Brovey fusion images were superior to optical images for dryland crop classification, suggesting that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.
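    Of the four fusion methods compared, the Brovey transform is the most compact to illustrate; the sketch below assumes a co-registered, upsampled MS cube and ignores any sensor-specific band weighting.

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform: rescale each MS band by PAN / intensity so the
    fused cube inherits PAN's spatial detail while keeping band ratios.
    ms: (H, W, bands) co-registered, upsampled multispectral; pan: (H, W)."""
    intensity = ms.mean(axis=2, keepdims=True)
    return ms * (pan[..., None] / (intensity + eps))
```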

  16. Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach

    NASA Astrophysics Data System (ADS)

    Jazaeri, Amin

    High spectral and spatial resolution images have a significant impact on remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased after launch, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms; a regression-based fusion algorithm was also implemented for comparison. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to pixel misalignment arising from imperfect registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times lower than that of the corresponding multispectral image. Regardless of the fusion method used, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead; however, the gain in computation speed was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for performing onboard image fusion in future NASA missions. Here, the main challenge of image registration was resolved by registering the input images using transformation matrices from previously acquired data. The composite image resulting from the fusion process matched the ground truth remarkably well, indicating the possibility of real-time onboard fusion processing.

  17. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either in their number or in their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and the residue component. Second, for the IMF component, a selection and weighted-average strategy based on local area energy is used to obtain the high-frequency fusion component. Third, for the residue component, a selection and weighted-average strategy based on local average gray difference is used to obtain the low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm outperforms methods based on the wavelet transform, line- and column-based EMD, and complex empirical mode decomposition, in terms of both visual quality and objective evaluation criteria.

  18. Fusion of infrared and visible images based on BEMD and NSDFB

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

    This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for decomposing and fusing non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two corresponding fusion rules are used in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transforms. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, distinct target information, and rich detail information, making them more suitable for human visual perception or machine perception.

  19. Multiscale Medical Image Fusion in Wavelet Domain

    PubMed Central

    Khare, Ashish

    2013-01-01

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
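    Two of the evaluation metrics listed above, entropy (E) and mutual information (MI), can be computed as follows; the bin counts are illustrative assumptions.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (E) of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=64):
    """Mutual information (MI) between a source image and the fused image."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())
```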

  1. Real-time FDG PET Guidance during Biopsies and Radiofrequency Ablation Using Multimodality Fusion with Electromagnetic Navigation

    PubMed Central

    Kadoury, Samuel; Abi-Jaoudeh, Nadine; Levy, Elliot B.; Maass-Moreno, Roberto; Krücker, Jochen; Dalal, Sandeep; Xu, Sheng; Glossop, Neil; Wood, Bradford J.

    2011-01-01

    Purpose: To assess the feasibility of combined electromagnetic device tracking and computed tomography (CT)/ultrasonography (US)/fluorine 18 fluorodeoxyglucose (FDG) positron emission tomography (PET) fusion for real-time feedback during percutaneous and intraoperative biopsies and hepatic radiofrequency (RF) ablation. Materials and Methods: In this HIPAA-compliant, institutional review board–approved prospective study with written informed consent, 25 patients (17 men, eight women) underwent 33 percutaneous and three intraoperative biopsies of 36 FDG-avid targets between November 2007 and August 2010. One patient underwent biopsy and RF ablation of an FDG-avid hepatic focus. Targets demonstrated heterogeneous FDG uptake or were not well seen or were totally inapparent at conventional imaging. Preprocedural FDG PET scans were rigidly registered through a semiautomatic method to intraprocedural CT scans. Coaxial biopsy needle introducer tips and RF ablation electrode guider needle tips containing electromagnetic sensor coils were spatially tracked through an electromagnetic field generator. Real-time US scans were registered through a fiducial-based method, allowing US scans to be fused with intraprocedural CT and preacquired FDG PET scans. A visual display of US/CT image fusion with overlaid coregistered FDG PET targets was used for guidance; navigation software enabled real-time biopsy needle and needle electrode navigation and feedback. Results: Successful fusion of real-time US to coregistered CT and FDG PET scans was achieved in all patients. Thirty-one of 36 biopsies were diagnostic (malignancy in 18 cases, benign processes in 13 cases). RF ablation resulted in resolution of targeted FDG avidity, with no local treatment failure during short follow-up (56 days). Conclusion: Combined electromagnetic device tracking and image fusion with real-time feedback may facilitate biopsies and ablations of focal FDG PET abnormalities that would be challenging with conventional image guidance. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101985/-/DC1 PMID:21734159

  2. Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM

    NASA Astrophysics Data System (ADS)

    Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang

    2016-03-01

    A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low resolution multi-spectral (MS) image and a high resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity component and the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information in the spectrum diagrams: the SSIM index is used to evaluate the high-frequency information of the spectrum diagrams so as to assign the fusion weights adaptively. After the new spectrum diagram is obtained according to the fusion rules, the final fused image is produced by the inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high quality fusion images.
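    The abstract leaves the exact weighting rule implicit; below is one plausible, hedged reading in which each source's high-frequency content is scored by SSIM against the mean of the two and the scores are normalized into fusion weights. The function names, the choice of reference, and the clamping of negative SSIM values are all assumptions, not the paper's formulation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_fusion_weights(hf_ms, hf_pan):
    """Derive adaptive weights for two high-frequency bands from their SSIM
    against a shared reference (here, their mean)."""
    ref = 0.5 * (hf_ms + hf_pan)
    rng = ref.max() - ref.min() + 1e-12
    s_ms = max(structural_similarity(hf_ms, ref, data_range=rng), 0.0)
    s_pan = max(structural_similarity(hf_pan, ref, data_range=rng), 0.0)
    w_ms = s_ms / (s_ms + s_pan + 1e-12)
    return w_ms, 1.0 - w_ms

# Usage: fused_hf = w_ms * hf_ms + w_pan * hf_pan
```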

  3. Effective Multifocus Image Fusion Based on HVS and BP Neural Network

    PubMed Central

    Yang, Yong

    2014-01-01

    The aim of multifocus image fusion is to fuse images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct an initial fused image. Next, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
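    Clarity features of the kind fed to such a BP network can be illustrated with two classic measures, spatial frequency and a visibility-style contrast; the paper's exact three features are not reproduced here, so treat these as stand-ins.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency: RMS of row and column first differences,
    a classic per-block clarity measure."""
    rf = np.diff(block.astype(float), axis=1)
    cf = np.diff(block.astype(float), axis=0)
    return float(np.sqrt((rf ** 2).mean() + (cf ** 2).mean()))

def visibility(block, eps=1e-12):
    """A visibility-style contrast feature: mean absolute deviation from
    the block mean, normalized by the mean (HVS-inspired)."""
    m = block.mean()
    return float(np.abs(block - m).mean() / (m + eps))
```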

  4. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
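    The first step above, Gaussian-based spectral modulation, resembles smoothing-filter-based intensity modulation; the sketch below is that flavor of the idea, with the sigma and the ratio form as assumptions rather than the paper's exact formulation. Because every band is scaled by the same detail ratio, band ratios (and hence saturation) are better preserved than with plain IHS substitution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_detail_modulation(ms, pan, sigma=2.0, eps=1e-12):
    """Spectral-modulation sketch: extract PAN spatial detail with a Gaussian
    low-pass and modulate each MS band by the detail ratio.
    ms: (H, W, bands) upsampled multispectral; pan: (H, W) panchromatic."""
    pan_low = gaussian_filter(pan, sigma)
    ratio = pan / (pan_low + eps)                 # high-frequency detail gain
    return ms * ratio[..., None]
```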

  5. Multi-focus image fusion with the all convolutional neural network

    NASA Astrophysics Data System (ADS)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

    A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, obtaining an accurate decision map is necessary for a satisfactory fusion result and is usually difficult to achieve. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling of the CNN is replaced by a convolution layer, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-convolutional-network (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm achieves state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.

  6. A motorized ultrasound system for MRI-ultrasound fusion guided prostatectomy

    NASA Astrophysics Data System (ADS)

    Seifabadi, Reza; Xu, Sheng; Pinto, Peter; Wood, Bradford J.

    2016-03-01

    Purpose: This study presents MoTRUS, a motorized transrectal ultrasound system, to enable remote navigation of a transrectal ultrasound (TRUS) probe during da Vinci assisted prostatectomy. MoTRUS not only provides a stable platform for the ultrasound probe, but also allows the physician to navigate it remotely while seated at the da Vinci console. This study also presents a phantom feasibility study, with the goal of intraoperative MRI-US image fusion that brings preoperative MR images into the operating room for the best visualization of the gland, its boundaries, nerves, etc. Method: A two degree-of-freedom probe holder was developed to insert and rotate a bi-plane transrectal ultrasound transducer. A custom joystick enables remote navigation of MoTRUS, and safety features have been included to avoid inadvertent risk to the patient. Custom software was developed to fuse pre-operative MR images with intraoperative ultrasound images acquired by MoTRUS. Results: Remote TRUS probe navigation was evaluated in a patient during prostatectomy after obtaining the required consent. Setting up the system in the OR took 10 min. MoTRUS provided similar imaging capability, in addition to remote navigation and stable imaging, and no complications were observed. Image fusion was evaluated on a commercial prostate phantom using electromagnetic tracking. Conclusions: Motorized navigation of the TRUS probe during prostatectomy is safe and feasible. Remote navigation gives the physician more precise and easier control of the ultrasound image while removing the burden of manual probe manipulation. Image fusion improved visualization of the prostate and its boundaries in the phantom study.

  7. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.

  8. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to accomplish such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly those using receiver operating characteristic (ROC) curves and the area under the curve (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).

  9. Multifocus image fusion using phase congruency

    NASA Astrophysics Data System (ADS)

    Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui

    2015-05-01

    We address the problem of fusing multifocus images based on phase congruency (PC). PC provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature, evaluated by the complex Gabor wavelet. PC is more robust against noise than other FMs. The fused image is obtained by a new fusion rule (FR), in which the focused region is selected by the FR from one of the input images. Experimental results show that the proposed fusion scheme matches the fusion performance of state-of-the-art methods in terms of visual quality and quantitative evaluations.

  10. Adaptive polarization image fusion based on regional energy dynamic weighted average

    NASA Astrophysics Data System (ADS)

    Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai

    2005-11-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made objects and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, the clutter in the images can be removed efficiently while the detailed information is retained by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine such images. In an experiment and in simulations, most of the clutter was removed by this algorithm. The fusion method was applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results was analyzed.

  11. A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain

    NASA Astrophysics Data System (ADS)

    Cheng, Boyang; Jin, Longxu; Li, Guoning

    2018-06-01

    Fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum-modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition in each source image, serves as the adaptive linking strength and enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used in the fusion experiments, and the fusion results are evaluated subjectively and objectively. The results show that the algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.

  12. A METHODOLOGY FOR INTEGRATING IMAGES AND TEXT FOR OBJECT IDENTIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Hohimer, Ryan E.; Doucette, Peter J.

    2006-02-13

    Often text and imagery contain information that must be combined to solve a problem. One approach begins with transforming the raw text and imagery into a common structure that contains the critical information in a usable form. This paper presents an application in which imagery of vehicles and text from police reports were combined to demonstrate the power of data fusion to correctly identify a target vehicle--e.g., a red 2002 Ford truck identified in a police report--from a collection of diverse vehicle images. The imagery was abstracted into a common signature by first capturing the conceptual models of the imagery experts in software. Our system then (1) extracted fundamental features (e.g., wheel base, color), (2) made inferences about the information (e.g., it's a red Ford), and (3) translated the raw information into an abstract knowledge signature designed both to capture the important features and to account for uncertainty. Likewise, the conceptual models of text analysis experts were instantiated into software that generated an abstract knowledge signature that could be readily compared to the imagery knowledge signature. While this experiment's primary focus was to demonstrate the power of text and imagery fusion for a specific example, it also suggested several ways that text and geo-registered imagery could be combined to help solve other types of problems.

  13. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches to image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability of each atlas improving the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. The experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more strongly the more the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluating the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    NASA Astrophysics Data System (ADS)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.

  15. Three-Dimensional Image Fusion of 18F-Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography and Contrast-Enhanced Computed Tomography for Computer-Assisted Planning of Maxillectomy of Recurrent Maxillary Squamous Cell Carcinoma and Defect Reconstruction.

    PubMed

    Yu, Yao; Zhang, Wen-Bo; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin

    2017-06-01

    The purpose of this study was to describe new technology assisted by 3-dimensional (3D) image fusion of 18 F-fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) and contrast-enhanced CT (CECT) for computer planning of a maxillectomy of recurrent maxillary squamous cell carcinoma and defect reconstruction. Treatment of recurrent maxillary squamous cell carcinoma usually includes tumor resection and free flap reconstruction. FDG-PET/CT provided images of regions of abnormal glucose uptake and thus showed metabolic tumor volume to guide tumor resection. CECT data were used to create 3D reconstructed images of vessels to show the vascular diameters and locations, so that the most suitable vein and artery could be selected during anastomosis of the free flap. The data from preoperative maxillofacial CECT scans and FDG-PET/CT imaging were imported into the navigation system (iPlan 3.0; Brainlab, Feldkirchen, Germany). Three-dimensional image fusion between FDG-PET/CT and CECT was accomplished using Brainlab software according to the position of the 2 skulls simulated in the CECT image and PET/CT image, respectively. After verification of the image fusion accuracy, the 3D reconstruction images of the metabolic tumor, vessels, and other critical structures could be visualized within the same coordinate system. These sagittal, coronal, axial, and 3D reconstruction images were used to determine the virtual osteotomy sites and reconstruction plan, which was provided to the surgeon and used for surgical navigation. The average shift of the 3D image fusion between FDG-PET/CT and CECT was less than 1 mm. This technique, by clearly showing the metabolic tumor volume and the most suitable vessels for anastomosis, facilitated resection and reconstruction of recurrent maxillary squamous cell carcinoma. We used 3D image fusion of FDG-PET/CT and CECT to successfully accomplish resection and reconstruction of recurrent maxillary squamous cell carcinoma. This method has the potential to improve the clinical outcomes of these challenging procedures. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  17. Fully Convolutional Network-Based Multifocus Image Fusion.

    PubMed

    Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua

    2018-07-01

    As the optical lenses of cameras always have a limited depth of field, captured images of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in the spatial or transform domains; however, the fusion rules are always a problem in most methods. In this letter, from the perspective of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images, by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012, to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus properties are generated. Next, an inverted score map is averaged with the other score map to produce an aggregate score map, which takes full advantage of the focus probabilities in the two score maps. We apply a fully connected conditional random field (CRF) to the aggregate score map to obtain and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method achieves superior fusion performance in both human visual quality and objective assessment.

  18. Hierarchical Multi-atlas Label Fusion with Multi-scale Feature Representation and Label-specific Patch Partition

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang

    2014-01-01

    Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results; in particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
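
    The core voting step of patch-based label fusion can be illustrated as follows; this is a minimal sketch for a single target point under a simple Gaussian similarity weighting, not the paper's multi-scale, label-specific variant, and all names are illustrative.

    ```python
    import numpy as np

    def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=0.5):
        """Assign a label to one target point by similarity-weighted voting
        over aligned atlas patches and their center labels."""
        votes = {}
        for patch, label in zip(atlas_patches, atlas_labels):
            # Patch similarity -> weight (Gaussian of mean squared difference).
            dist = np.mean((target_patch - patch) ** 2)
            w = np.exp(-dist / (2.0 * sigma ** 2))
            votes[label] = votes.get(label, 0.0) + w
        # Return the label with the largest accumulated weight.
        return max(votes, key=votes.get)
    ```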

  19. Biometric image enhancement using decision rule based image fusion techniques

    NASA Astrophysics Data System (ADS)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing the proper sensor is critical. The proposed work deals with how image quality can be improved by introducing an image fusion technique at the sensor level. The results of the images after introducing the decision rule based image fusion technique are evaluated and analyzed in terms of their entropy levels and root mean square error.
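
    The two evaluation measures mentioned, entropy and root mean square error, can be computed as in the following minimal sketch, assuming 8-bit grayscale images as NumPy arrays.

    ```python
    import numpy as np

    def entropy(img: np.ndarray) -> float:
        """Shannon entropy of the gray-level histogram (bits per pixel)."""
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins before the log
        return float(-np.sum(p * np.log2(p)))

    def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
        """Root mean square error between a reference and the fused image."""
        diff = reference.astype(np.float64) - fused.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))
    ```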

  20. Layer-Based Approach for Image Pair Fusion.

    PubMed

    Son, Chang-Hwan; Zhang, Xiao-Ping

    2016-04-20

    Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.

  1. High-Level Data Fusion Software for SHOALS-1000TH FY07 Annual Report

    DTIC Science & Technology

    2007-01-01

    This survey covered the lakeside town of Alpena, Michigan, and the shoreline of Lake Huron. Additionally a small set of ground reflectance...Figure 2. SHOALS green laser reflectance image of the eastern part of Alpena, Michigan, and the shoreline of Thunder Bay

  2. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    Polarization imaging detection technology can acquire multi-dimensional polarization information in addition to the traditional intensity information, thus improving the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps to obtain high-quality images. Based on laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion processing was then introduced: different polarization image fusion methods were applied to the acquired polarization images, several fusion methods with superior performance for turbid media were discussed, and the processing results and data tables were given. Pixel-level, feature-level and decision-level fusion algorithms were then used to fuse the degree-of-linear-polarization (DOLP) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is clearly improved compared with any single image; finally, the reasons for the increased contrast are analyzed.
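
    For reference, the degree-of-linear-polarization (DOLP) image used in such experiments is conventionally computed from four intensity images taken at polarizer angles 0, 45, 90 and 135 degrees; the following is a minimal sketch of that standard computation (not code from the paper).

    ```python
    import numpy as np

    def dolp(i0, i45, i90, i135, eps=1e-8):
        """Degree of linear polarization from four polarizer-angle images."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
        s1 = i0 - i90                        # Stokes S1
        s2 = i45 - i135                      # Stokes S2
        return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    ```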

  3. Open-source image registration for MRI-TRUS fusion-guided prostate interventions.

    PubMed

    Fedorov, Andriy; Khallaghi, Siavash; Sánchez, C Antonio; Lasso, Andras; Fels, Sidney; Tuncali, Kemal; Sugar, Emily Neubauer; Kapur, Tina; Zhang, Chenxi; Wells, William; Nguyen, Paul L; Abolmaesumi, Purang; Tempany, Clare

    2015-06-01

    We propose two software tools for non-rigid registration of MRI and transrectal ultrasound (TRUS) images of the prostate. Our ultimate goal is to develop an open-source solution to support MRI-TRUS fusion image guidance of prostate interventions, such as targeted biopsy for prostate cancer detection and focal therapy. It is widely hypothesized that image registration is an essential component in such systems. The two non-rigid registration methods are: (1) a deformable registration of the prostate segmentation distance maps with B-spline regularization and (2) a finite element-based deformable registration of the segmentation surfaces in the presence of partial data. We evaluate the methods retrospectively using clinical patient image data collected during standard clinical procedures. Computation time and Target Registration Error (TRE) calculated at the expert-identified anatomical landmarks were used as quantitative measures for the evaluation. The presented image registration tools were capable of completing deformable registration computation within 5 min. Average TRE was approximately 3 mm for both methods, which is comparable with the slice thickness in our MRI data. Both tools are available under nonrestrictive open-source license. We release open-source tools that may be used for registration during MRI-TRUS-guided prostate interventions. Our tools implement novel registration approaches and produce acceptable registration results. We believe these tools will lower the barriers in development and deployment of interventional research solutions and facilitate comparison with similar tools.
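
    Method (1) can be approximated in a few lines with an off-the-shelf toolkit; the following is a minimal SimpleITK sketch of distance-map-based B-spline registration, offered as an illustration under assumed parameters rather than the authors' released implementation.

    ```python
    import SimpleITK as sitk

    def register_distance_maps(fixed_seg, moving_seg, grid=(8, 8, 8)):
        """Deformable registration of two binary prostate segmentations
        via their signed distance maps, with a B-spline transform."""
        fixed = sitk.SignedMaurerDistanceMap(fixed_seg, squaredDistance=False,
                                             useImageSpacing=True)
        moving = sitk.SignedMaurerDistanceMap(moving_seg, squaredDistance=False,
                                              useImageSpacing=True)
        tx = sitk.BSplineTransformInitializer(fixed, list(grid))
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMeanSquares()            # distance maps align directly
        reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
        reg.SetInitialTransform(tx, inPlace=True)
        reg.SetInterpolator(sitk.sitkLinear)
        return reg.Execute(fixed, moving)       # returns the fitted transform
    ```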

  4. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm

    PubMed Central

    Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao

    2017-01-01

    To address the problem of image texture feature extraction, a direction measure statistic that is based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on the direction measure and a gray level co-occurrence matrix (GLCM) fusion algorithm, is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the accuracy of classification based on this method was significantly improved. PMID:28640181
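
    A minimal sketch of directional GLCM feature extraction with an illustrative weighting step is shown below, using scikit-image; the fixed weights stand in for the paper's direction-measure factor and are assumptions, as is the feature set.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def directional_glcm_features(img, weights=(0.25, 0.25, 0.25, 0.25)):
        """GLCM texture features for a uint8 grayscale image, computed at
        distance 1 for 0, 45, 90 and 135 degrees and fused per direction."""
        angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
        glcm = graycomatrix(img, distances=[1], angles=angles,
                            levels=256, symmetric=True, normed=True)
        feats = {}
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            per_direction = graycoprops(glcm, prop)[0]   # shape (4,)
            # Fuse the per-direction values with the direction weights.
            feats[prop] = float(np.dot(per_direction, weights))
        return feats
    ```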

  5. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-11-27

    Multimodal medical image fusion combines information from one or more images in order to improve diagnostic value. While previous applications have mainly focused on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  6. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

    Land use and land cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification of high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (formerly THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. For the per-pixel method, support vector machines (SVM) were applied to the fused image based on principal component analysis (PCA). For the object-based classification, a nearest neighbor (NN) classifier was applied to the fused images to separate the land cover classes. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. Object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. The results show that object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.

  7. A survey of infrared and visual image fusion methods

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian

    2017-09-01

    Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images have made these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and the progress of image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we present a survey reporting the algorithmic developments of IR and VI image fusion. In this paper, we first characterize IR and VI image fusion based applications to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and make the corresponding analysis. At last, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there still exist further improvements or potential research directions in different applications of IR and VI image fusion.

  8. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for depth-of-field extension in integral imaging by applying an image fusion method to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy of the images is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.

  9. Dim target detection method based on salient graph fusion

    NASA Astrophysics Data System (ADS)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is one of the key problems in the digital image processing field. With the development of multi-spectrum imaging sensors, it has become a trend to improve the performance of dim target detection by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, Gabor filters with multiple directions and contrast filters with multiple scales are combined to construct a salient graph from the digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from the different spectral images. A top-hat filter is used to detect dim targets from the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
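
    The final fusion-and-detection steps admit a compact sketch; the following assumes a hypothetical list `salience_maps` of per-band salient graphs (float arrays) and uses OpenCV's morphological top-hat, with the kernel size and threshold rule being illustrative choices.

    ```python
    import cv2
    import numpy as np

    def detect_dim_targets(salience_maps, thresh=0.5):
        # Maximum-salience fusion across the spectral salient graphs.
        fused = np.maximum.reduce(salience_maps)
        # Top-hat filtering suppresses the slowly varying clutter background.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
        tophat = cv2.morphologyEx(fused.astype(np.float32),
                                  cv2.MORPH_TOPHAT, kernel)
        return tophat > thresh * tophat.max()   # binary target mask
    ```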

  10. Correlative Microscopy Combining Secondary Ion Mass Spectrometry and Electron Microscopy: Comparison of Intensity-Hue-Saturation and Laplacian Pyramid Methods for Image Fusion.

    PubMed

    Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana

    2017-10-17

    Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
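
    For readers unfamiliar with the technique, a minimal Laplacian pyramid fusion of two co-registered grayscale images might look as follows, assuming float32 inputs and a simple max-absolute merging rule (the paper's correlative-microscopy pipeline involves additional steps not shown here).

    ```python
    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        """Build a Laplacian pyramid: detail levels plus a coarse residual."""
        gauss = [img]
        for _ in range(levels):
            gauss.append(cv2.pyrDown(gauss[-1]))
        lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[::-1])
               for i in range(levels)]
        return lap + [gauss[-1]]

    def fuse_laplacian(img_a, img_b, levels=4):
        pyr_a = laplacian_pyramid(img_a, levels)
        pyr_b = laplacian_pyramid(img_b, levels)
        # Keep the coefficient with the larger magnitude at every level.
        fused = [np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(pyr_a, pyr_b)]
        out = fused[-1]
        for lvl in reversed(fused[:-1]):        # collapse the pyramid
            out = cv2.pyrUp(out, dstsize=lvl.shape[::-1]) + lvl
        return out
    ```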

  11. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Valdimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.

  12. TU-E-BRB-03: Overview of Proposed TG-132 Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K.

    2015-06-15

    Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning and motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for DIR clinical use. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning Objectives: compare/contrast commercial DIR software and QA options; overview clinical DIR workflow for retreatment; understand uncertainties introduced by DIR; review TG-132 proposed recommendations.

  13. Advances in multi-sensor data fusion: algorithms and applications.

    PubMed

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.

  14. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  16. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying an appropriate single segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order performance by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.

  17. Diffusion weighted imaging with background body signal suppression / T2 image fusion in magnetic resonance mammography for breast cancer diagnosis.

    PubMed

    Nechifor-Boilă, I A; Bancu, S; Buruian, M; Charlot, M; Decaussin-Petrucci, M; Krauth, J-S; Nechifor-Boilă, A C; Borda, A

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance mammography (DCE-MRM) represents the most sensitive examination for breast cancer (BC) diagnosis; however, the literature reports very inhomogeneous specificity. The aim of our study was to evaluate the clinical efficiency of a new MRM technique, diffusion weighted imaging with background body signal suppression (DWIBS)/T2 image fusion, in BC diagnosis, compared to DCE-MRM. We retrospectively analyzed 50 consecutive DCE-MRM examinations with a DWIBS sequence from the archives of the Department of Radiology, Lyon Sud Hospital (02.2010-02.2011), summing up to 64 breast lesions. Fusions were created using the OsiriX software from the DWIBS images (b = 1000 s/mm²) and their T2 correspondents. Interpretation was performed using an adapted BI-RADS system. The final histopathological examination or a minimum 6-month follow-up served as the gold standard. Out of the 64 examined breast lesions, 35 (54.7%) were classified as malignant by DCE-MRM and 24 (37.5%) by DWIBS/T2, respectively. Thus the DWIBS/T2 fusion had a sensitivity of 62.5% (95% CI: 35.4-84.8) and a specificity of 70.8% (95% CI: 55.9-83.3), while DCE-MRM had a higher sensitivity, 87.5% (95% CI: 61.6-98.4), but a lower specificity, 56.2% (95% CI: 41.1-70.5). DWIBS/T2 fusion is an innovative MRM technique with a specificity superior to DCE-MRM, showing a large potential for improving the clinical efficiency of classical MRM.

  18. An investigation of density measurement method for yarn-dyed woven fabrics based on dual-side fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Xin, Binjie

    2016-08-01

    Yarn density is considered a fundamental structural parameter for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a laboratory dual-side imaging system is established to acquire both face-side and back-side images of a woven fabric, and an affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, the wavelet transform method and the Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators, and the best of them is selected to reconstruct the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image can be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement using the proposed method is close to 99.44% compared with the traditional method, and the robustness of the proposed method is better than that of conventional analysis methods.
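
    The FFT-based density step can be sketched as follows, assuming a fused grayscale fabric image and a known scan resolution in dots per inch; the function name and the single-dominant-peak assumption are illustrative.

    ```python
    import numpy as np

    def yarns_per_cm(img, dpi):
        """Estimate warp yarn density from the dominant spatial frequency
        of a horizontal intensity profile of the fabric image."""
        # Average the rows: periodicity along this profile corresponds
        # to the spacing of the (vertical) warp yarns.
        profile = img.mean(axis=0)
        profile = profile - profile.mean()            # remove the DC component
        spectrum = np.abs(np.fft.rfft(profile))
        freqs = np.fft.rfftfreq(profile.size, d=1.0)  # cycles per pixel
        peak = freqs[np.argmax(spectrum)]             # dominant yarn frequency
        return peak * dpi / 2.54                      # yarns per centimetre
    ```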

  19. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    NASA Astrophysics Data System (ADS)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  20. The addition of a sagittal image fusion improves the prostate cancer detection in a sensor-based MRI /ultrasound fusion guided targeted biopsy.

    PubMed

    Günzel, Karsten; Cash, Hannes; Buckendahl, John; Königbauer, Maximilian; Asbach, Patrick; Haas, Matthias; Neymeyer, Jörg; Hinz, Stefan; Miller, Kurt; Kempkensteffen, Carsten

    2017-01-13

    To explore the diagnostic benefit of an additional image fusion in the sagittal plane on top of the standard axial image fusion, using a sensor-based MRI/US fusion platform. Between July 2013 and September 2015, 251 patients with at least one suspicious lesion on mpMRI (rated by PI-RADS) were included in the analysis. All patients underwent MRI/US targeted biopsy (TB) in combination with a 10-core systematic prostate biopsy (SB). All biopsies were performed on a sensor-based fusion system. Group A included 162 men who received TB with an axial MRI/US image fusion. Group B comprised 89 men in whom the TB was performed with an additional sagittal image fusion. The median age in group A was 67 years (IQR 61-72) and in group B 68 years (IQR 60-71). The median PSA level in group A was 8.10 ng/ml (IQR 6.05-14) and in group B 8.59 ng/ml (IQR 5.65-12.32). In group A, the proportion of patients with a suspicious digital rectal examination (DRE) (14 vs. 29%, p = 0.007) and the proportion of primary biopsies (33 vs. 46%, p = 0.046) were significantly lower. PI-RADS 3 lesions were overrepresented in group A compared to group B (19 vs. 9%; p = 0.044). Classified according to PI-RADS 3, 4 and 5, the detection rates of TB were 42, 48 and 75% in group A and 25, 74 and 90% in group B. The rate of PCa with a Gleason score ≥7 missed by TB was 33% (18 cases) in group A and 9% (5 cases) in group B (p = 0.072). An explorative multivariate binary logistic regression analysis revealed that PI-RADS, a suspicious DRE and performing an additional sagittal image fusion were significant predictors of PCa detection in TB. Nine PCa were detected only by TB with sagittal fusion (sTB), and sTB identified 10 additional clinically significant PCa (Gleason ≥7). Performing an additional sagittal image fusion besides the standard axial fusion appears to improve the accuracy of the sensor-based MRI/US fusion platform.

  1. A fusion algorithm for infrared and visible based on guided filtering and phase congruency in NSST domain

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng

    2017-10-01

    A novel and effective image fusion method is proposed for creating a highly informative fused image with a smooth surface by merging visible and infrared images. Firstly, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is used to compute the filtering output of the base layer and the saliency maps. Next, a novel weighted average technique is used to make full use of scene consistency for fusion and to obtain the coefficient maps. Finally, the fused image is acquired by taking the inverse NSST of the fused coefficient maps. Experiments show that the proposed approach achieves better performance than other methods in terms of subjective visual effect and objective assessment.
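
    A minimal sketch of how guided filtering might be used to fuse two base layers with saliency-derived weights is given below, using the guided filter from opencv-contrib and assuming float32 arrays; the NSST decomposition and phase-congruency steps are not shown, and the weighting scheme is a simplified stand-in for the paper's.

    ```python
    import cv2
    import numpy as np

    def fuse_base_layers(base_ir, base_vis, sal_ir, sal_vis, radius=8, eps=0.01):
        # Refine each saliency map with a guided filter, using the
        # corresponding source base layer as the guidance image.
        w_ir = cv2.ximgproc.guidedFilter(base_ir, sal_ir.astype(np.float32),
                                         radius, eps)
        w_vis = cv2.ximgproc.guidedFilter(base_vis, sal_vis.astype(np.float32),
                                          radius, eps)
        w = w_ir / (w_ir + w_vis + 1e-8)        # normalized fusion weights
        return w * base_ir + (1.0 - w) * base_vis
    ```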

  2. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring the gradients of the corresponding visible image to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency in both qualitative and quantitative tests, than gradient transfer and most state-of-the-art methods.
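
    As a reading aid, one plausible form of such an l1-l1-TV objective is sketched below, with u the infrared image, v the visible image, x the fused result, and lambda_1, lambda_2 trade-off weights; this is an assumption about the structure of the model, and the paper's exact formulation may differ.

    ```latex
    \min_{x} \; \|x - u\|_{1} \;+\; \lambda_{1}\,\|x - v\|_{1} \;+\; \lambda_{2}\,\|\nabla x - \nabla v\|_{1}
    ```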

  3. Three-dimensional Image Fusion Guidance for Transjugular Intrahepatic Portosystemic Shunt Placement.

    PubMed

    Tacher, Vania; Petit, Arthur; Derbel, Haytham; Novelli, Luigi; Vitellius, Manuel; Ridouani, Fourat; Luciani, Alain; Rahmouni, Alain; Duvoux, Christophe; Salloum, Chady; Chiaradia, Mélanie; Kobeiter, Hicham

    2017-11-01

    To assess the safety, feasibility and effectiveness of image fusion guidance combining pre-procedural portal phase computed tomography with intraprocedural fluoroscopy for transjugular intrahepatic portosystemic shunt (TIPS) placement. All consecutive cirrhotic patients presenting at our interventional unit for TIPS creation from January 2015 to January 2016 were prospectively enrolled. Procedures were performed under general anesthesia in an interventional suite equipped with a flat panel detector, cone-beam computed tomography (CBCT) and an image fusion technique. All TIPSs were placed under image fusion guidance. After hepatic vein catheterization, an unenhanced CBCT acquisition was performed and co-registered with the pre-procedural portal phase CT images. A virtual path between the hepatic vein and a portal branch was made using the virtual needle path trajectory software. Subsequently, the 3D virtual path was overlaid on 2D fluoroscopy for guidance during portal branch cannulation. Safety, feasibility, effectiveness and per-procedural data were evaluated. Sixteen patients (12 males; median age 56 years) were included. Procedures were technically feasible in 15 of the 16 patients (94%). One procedure was aborted due to hepatic vein catheterization failure related to severe liver distortion. No periprocedural complications occurred within 48 h of the procedure. The median dose-area product was 91 Gy·cm², fluoroscopy time 15 min, procedure time 40 min and contrast media consumption 65 mL. Clinical benefit of the TIPS placement was observed in nine patients (56%). This study suggests that 3D image fusion guidance for TIPS is feasible, safe and effective. By identifying a virtual needle path, CBCT enables real-time multiplanar guidance and may facilitate TIPS placement.

  4. Practical guide for implementing hybrid PET/MR clinical service: lessons learned from our experience

    PubMed Central

    Parikh, Nainesh; Friedman, Kent P.; Shah, Shetal N.; Chandarana, Hersh

    2015-01-01

    Positron emission tomography (PET) and magnetic resonance imaging, until recently, have been performed on separate PET and MR systems with varying temporal delay between the two acquisitions. The interpretation of these two separately acquired studies requires cognitive fusion by radiologists/nuclear medicine physicians or dedicated and challenging post-processing. Recent advances in hardware and software with introduction of hybrid PET/MR systems have made it possible to acquire the PET and MR images simultaneously or near simultaneously. This review article serves as a road-map for clinical implementation of hybrid PET/MR systems and briefly discusses hardware systems, the personnel needs, safety and quality issues, and reimbursement topics based on experience at NYU Langone Medical Center and Cleveland Clinic. PMID:25985966

  5. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments on infrared and visible image fusion results under different algorithms and environments are performed on the basis of this index. The experimental results show that the objective evaluation index obtained with this method is consistent with subjective evaluation results, which shows that the method is a practical and effective fusion image quality evaluation method.
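
    An energy-weighted structural-similarity term of the kind described can be sketched as follows with scikit-image, assuming grayscale float images in [0, 1]; the edge-information-retention term of the proposed index is not reproduced here.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def energy_weighted_ssim(ir, vis, fused, win=7):
        """Average the per-pixel SSIM of the fused image against each
        source, weighted by the local energy of the sources."""
        _, ssim_ir = structural_similarity(ir, fused, win_size=win,
                                           data_range=1.0, full=True)
        _, ssim_vis = structural_similarity(vis, fused, win_size=win,
                                            data_range=1.0, full=True)
        # Local energy decides how much each source's SSIM map contributes.
        e_ir, e_vis = ir ** 2, vis ** 2
        w = e_ir / (e_ir + e_vis + 1e-8)
        return float(np.mean(w * ssim_ir + (1.0 - w) * ssim_vis))
    ```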

  6. Data fusion of Landsat TM and IRS images in forest classification

    Treesearch

    Guangxing Wang; Markus Holopainen; Eero Lukkarinen

    2000-01-01

    Data fusion of Landsat TM images and Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of TM or IRS image only. The aim was to combine the high spatial resolution of IRS-1C PAN to the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...

  7. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    NASA Astrophysics Data System (ADS)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP), intended to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterative processing at each step. Second, a self-adaptive weighted coefficient rule based on regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  8. a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data

    NASA Astrophysics Data System (ADS)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse representation based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair from time-series datasets of the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. The STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.

  9. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and investigate the connection between the low-frequency image and the defocused image. Generally, the NSCT decomposes an image so that detail information resides at different scales and in different directions in the bandpass subband coefficients. In order to correctly select the pre-fused bandpass directional coefficients, we introduce a multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly recognizes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image can be obtained by the inverse NSCT with the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.

  10. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The endeavor of digital image fusion is to combine the important visual parts from various sources to enhance the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then attain a fused image. In this process, two main steps are involved. First, the DWT is applied to the registered source images. Later, qualitative sub-bands are identified using the HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among the state-of-the-art multiresolution transforms (MRT), such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT), using the maximum selection fusion rule.
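
    A minimal single-level DWT fusion with the maximum selection rule on the detail sub-bands is sketched below using PyWavelets; the HVS weighting of the paper is replaced here by plain max-abs selection for brevity, and the wavelet choice is illustrative.

    ```python
    import numpy as np
    import pywt

    def dwt_fuse(img_a, img_b, wavelet="db2"):
        """Fuse two grayscale images: average the approximation sub-band,
        take the larger-magnitude coefficient in each detail sub-band."""
        ca, (ha, va, da) = pywt.dwt2(img_a, wavelet)
        cb, (hb, vb, db) = pywt.dwt2(img_b, wavelet)
        sel = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused = (0.5 * (ca + cb),                          # averaged approximation
                 (sel(ha, hb), sel(va, vb), sel(da, db)))  # max-abs details
        return pywt.idwt2(fused, wavelet)
    ```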

  11. Accuracy of implementing principles of fusion imaging in the follow up and surveillance of complex aneurysm repair.

    PubMed

    Martin-Gonzalez, Teresa; Penney, Graeme; Chong, Debra; Davis, Meryl; Mastracci, Tara M

    2018-05-01

    Fusion imaging is standard for the endovascular treatment of complex aortic aneurysms, but its role in follow up has not been explored. A critical issue is renal function deterioration over time. Renal volume has been used as a marker of renal impairment; however, it is not reproducible and remains a complex and resource-intensive procedure. The aim of this study is to determine the accuracy of a fusion-based software to automatically calculate the renal volume changes during follow up. In this study, computerized tomography (CT) scans of 16 patients who underwent complex aortic endovascular repair were analysed. Preoperative, 1-month and 1-year follow-up CT scans have been analysed using a conventional approach of semi-automatic segmentation, and a second approach with automatic segmentation. For each kidney and at each time point the percentage of change in renal volume was calculated using both techniques. After review, volume assessment was feasible for all CT scans. For the left kidney, the intraclass correlation coefficient (ICC) was 0.794 and 0.877 at 1 month and 1 year, respectively. For the right side, the ICC was 0.817 at 1 month and 0.966 at 1 year. The automated technique reliably detected a decrease in renal volume for the eight patients with occluded renal arteries during follow up. This is the first report of a fusion-based algorithm to detect changes in renal volume during postoperative surveillance using an automated process. Using this technique, the standardized assessment of renal volume could be implemented with greater ease and reproducibility and serve as a warning of potential renal impairment.

  12. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral remote sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by back propagation neural network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data-level, feature-level and decision-level fusion, are applied to HRS image classification. Artificial neural networks (ANN) can perform well in RS image classification; in order to promote the advances of ANN for HRS image classification, the back propagation neural network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  13. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, a current research hotspot in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms are inclined to bring problems such as data storage shortage and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and greatly aids reconstruction of the image, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients can be obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized; thus only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of our experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.

  14. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using generalized Gaussian density (GGD), as well as the similarity measurement of two subbands is accurately computed by Jensen-Shannon divergence of two GGDs. To preserve more useful information from source images, the new fusion rules are developed to combine the subbands with the varied frequencies. That is, the low frequency subbands are fused by utilizing two activity measures based on the regional standard deviation and Shannon entropy and the high frequency subbands are merged together via weight maps which are determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms the conventional NSCT based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871

  15. [Perceptual sharpness metric for visible and infrared color fusion images].

    PubMed

    Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan

    2012-12-01

    For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under given viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on a local band-limited contrast model. Finally, the perceptual contrast is calculated in the regions of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that are more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images. The proposed metric can effectively evaluate the perceptual sharpness of color fusion images.

  16. Image fusion pitfalls for cranial radiosurgery.

    PubMed

    Jonker, Benjamin P

    2013-01-01

    Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they have poorer resolution of the target in most cases). Thus image fusion can allow the definition of stereotactic space to proceed from the geometrically accurate CT images while at the same time using MRI to define the target. The use of image fusion is associated with a risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls.

  17. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks.

    PubMed

    Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B

    2013-03-01

    Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities, which usually complicates the feature selection process and may introduce reconstruction errors into the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.

  18. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD has the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and that the conventional "averaging" fusion scheme is unable to achieve the desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and fewer irrelevant IR details or noise into the fused image. As a result, the fused image details appear more natural and better suited to human visual perception. Experimental results demonstrate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
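
    A minimal sketch of one plausible saliency-weighted base-layer fusion of the kind described (the window size and saliency definition are illustrative, not the authors' exact choices):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_base_layers(base_ir, base_vis, size=35):
          # Saliency as absolute deviation from the local mean
          s_ir = np.abs(base_ir - uniform_filter(base_ir, size))
          s_vis = np.abs(base_vis - uniform_filter(base_vis, size))
          # Soft weight in [0, 1] favoring the more salient source
          w = 0.5 + 0.5 * (s_ir - s_vis) / (s_ir + s_vis + 1e-12)
          return w * base_ir + (1.0 - w) * base_vis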

  19. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  20. Multisource image fusion method using support value transform.

    PubMed

    Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen

    2007-07-01

    With the development of numerous imaging sensors, many images can be captured simultaneously by various sensors. However, in many scenarios no single sensor gives the complete picture. Image fusion is an important approach to solving this problem, producing a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data points with larger support values have a physical meaning in the sense that they reveal the relative importance of those data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments were undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), mutual information, etc.

  1. Electromagnetic image-guided orbital decompression: technique, principles, and preliminary experience with 6 consecutive cases.

    PubMed

    Servat, Juan J; Elia, Maxwell Dominic; Gong, Dan; Manes, R Peter; Black, Evan H; Levin, Flora

    2014-12-01

    To assess the feasibility of routine use of electromagnetic image guidance systems in orbital decompression. Six consecutive patients underwent stereotactic-guided three-wall orbital decompression using the novel Fusion ENT Navigation System (Medtronic), a portable and expandable electromagnetic guidance system with multi-instrument tracking capabilities. The system consists of the Medtronic LandmarX System software-enabled computer station, signal generator, field-generating magnet, head-mounted marker coil, and surgical tracking instruments. In preparation for use of the LandmarX/Fusion protocol, all patients underwent a preoperative non-contrast CT scan from the superior aspect of the frontal sinuses to the inferior aspect of the maxillary sinuses, including the nasal tip. The Fusion ENT Navigation System (Medtronic) was used in 6 patients undergoing maximal three-wall orbital decompression for Graves' orbitopathy after a minimum of six months of disease inactivity. Preoperative Hertel exophthalmometry measured more than 27 mm in all patients. The navigation system proved to be no more difficult technically than the traditional orbital decompression approach. Electromagnetic image guidance is a stereotactic surgical navigation system that provides additional intraoperative flexibility in orbital surgery and offers the ability to perform more aggressive orbital decompressions with reduced risk.

  2. Multispectral Image Enhancement Through Adaptive Wavelet Fusion

    DTIC Science & Technology

    2016-09-14

    This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small-scale details while restoring larger-scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at …

  3. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined, with weights, into a detail-enhanced layer. As directional filtering is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift-invariance, directional selectivity, and detail-enhancing properties, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.

  4. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. To characterize the stimulus of the source images, two Weber components are computed: differential excitation, which reflects the spectral signal of the visible and infrared images, and orientation, which captures the structural features of the scene. By comparing the corresponding Weber components in the infrared and visible images, source pixels can be marked as dominant in either intensity or structure. Pixels sharing the same dominance label are grouped to calculate the mutual information (MI) of the corresponding Weber components between the dominant source image and the fused image. The final fusion metric is then obtained by weighting the group-wise MI values according to the number of pixels in each group. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
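
    A minimal sketch of the two Weber components in the spirit of the Weber local descriptor (the exact operators in the paper may differ):

      import numpy as np
      from scipy.ndimage import convolve

      def weber_components(img):
          img = img.astype(float) + 1e-6  # avoid division by zero
          # Differential excitation: summed neighbor differences relative to the center
          ring = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], float)
          excitation = np.arctan(convolve(img, ring) / img)
          # Orientation from vertical/horizontal gradients
          gy = convolve(img, np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], float))
          gx = convolve(img, np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float))
          orientation = np.arctan2(gy, gx)
          return excitation, orientation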

  5. An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model

    PubMed Central

    Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq

    2018-01-01

    For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale- and rotation-invariant descriptor that performs better in terms of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, and geometric and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed technique based on visual words fusion significantly improves the performance of CBIR compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques. PMID:29694429

  6. An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model.

    PubMed

    Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq

    2018-01-01

    For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale- and rotation-invariant descriptor that performs better in terms of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, and geometric and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed technique based on visual words fusion significantly improves the performance of CBIR compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques.

  7. Software for Partly Automated Recognition of Targets

    NASA Technical Reports Server (NTRS)

    Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark; Selinsky, T.

    2002-01-01

    The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). The program provides an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to encode the human visual recognition process in lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.

  8. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, after which the weight maps at each scale are obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three fusion rules address the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and easier to detect in the fusion result, while the detail information of the scene is fully displayed. Experimental comparisons with state-of-the-art fusion methods show that HMSD-GDGF has clear advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.

  9. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets and the placement of imaging probes and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging, especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  10. Progressive multi-atlas label fusion by dictionary evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance steps that steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Image fusion pitfalls for cranial radiosurgery

    PubMed Central

    Jonker, Benjamin P.

    2013-01-01

    Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking, and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they resolve the target more poorly in most cases). Thus image fusion allows the definition of stereotactic space to proceed from the geometrically accurate CT images while using MRI to define the target. The use of image fusion carries a risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls. PMID:23682338

  12. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT), and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images at different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to assess the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
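
    A minimal sketch of a local spatial frequency map, computed as the windowed RMS of horizontal and vertical first differences (the window size is illustrative):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_spatial_frequency(img, size=7):
          img = img.astype(float)
          d_col = np.zeros_like(img)
          d_row = np.zeros_like(img)
          d_col[:, 1:] = (img[:, 1:] - img[:, :-1]) ** 2  # row-direction differences
          d_row[1:, :] = (img[1:, :] - img[:-1, :]) ** 2  # column-direction differences
          # Local mean of squared differences, then RMS
          return np.sqrt(uniform_filter(d_col, size) + uniform_filter(d_row, size))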

  13. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments were performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform was also implemented on these slices. A new image fusion performance measure, along with four existing measures, is presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality, and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
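
    A minimal sketch of a translation-invariant à-trous decomposition with the common B3-spline kernel (the paper's exact filter choice is not stated here, so this is an assumption):

      import numpy as np
      from scipy.ndimage import convolve

      def atrous_decompose(img, levels=3):
          h = np.array([1, 4, 6, 4, 1], float) / 16.0
          base = np.outer(h, h)  # separable B3-spline smoothing kernel
          approx = img.astype(float)
          details = []
          for j in range(levels):
              # Insert 2**j - 1 zeros ("holes") between kernel taps at level j
              k = np.zeros((4 * 2 ** j + 1, 4 * 2 ** j + 1))
              k[::2 ** j, ::2 ** j] = base
              smooth = convolve(approx, k, mode='nearest')
              details.append(approx - smooth)  # detail (wavelet) plane
              approx = smooth
          return approx, details  # img == approx + sum(details)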

  14. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content-based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…

  15. Fusion method of SAR and optical images for urban object extraction

    NASA Astrophysics Data System (ADS)

    Jia, Yonghong; Blum, Rick S.; Li, Fangfang

    2007-11-01

    A new method for fusing SAR, panchromatic (Pan), and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. The high-pass details modulated with the texture are then injected into the MS data to obtain the fusion product by the HPFM (high-pass filter-based modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR, and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road networks) where SAR texture information enhances the fusion product; the proposed approach is effective for image interpretation and classification.
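
    One plausible reading of this pipeline as a minimal NumPy/SciPy sketch (the low-pass filter, its width, and the injection form are assumptions, not the authors' exact formulation):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def hpfm_fuse_band(ms_band, pan, sar, sigma=2.0):
          eps = 1e-12
          # SAR texture by ratioing the (despeckled) SAR image to its low-pass approximation
          texture = sar / (gaussian_filter(sar, sigma) + eps)
          # Pan high-pass details, modulated by the SAR texture
          pan_low = gaussian_filter(pan, sigma)
          details = (pan - pan_low) * texture
          # HPFM-style injection into the MS band
          return ms_band * (1.0 + details / (pan_low + eps))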

  16. Integration of oncologic margins in three-dimensional virtual planning for head and neck surgery, including a validation of the software pathway.

    PubMed

    Kraeima, Joep; Schepers, Rutger H; van Ooijen, Peter M A; Steenbakkers, Roel J H M; Roodenburg, Jan L N; Witjes, Max J H

    2015-10-01

    Three-dimensional (3D) virtual planning of reconstructive surgery after resection is a frequently used method for improving accuracy and predictability. However, when applied to malignant cases, planning of the oncologic resection margins is difficult due to the lack of tumour visualisation in current 3D planning. Embedding tumour delineation on a magnetic resonance image, similar to the routinely performed radiotherapeutic contouring of tumours, is expected to provide better margin planning. A new software pathway was developed for embedding tumour delineation from magnetic resonance imaging (MRI) within the 3D virtual surgical planning. The software pathway was validated using five bovine cadavers implanted with phantom tumour objects. MRI and computed tomography (CT) images were fused and the tumour was delineated using radiation oncology software. These data were converted to the 3D virtual planning software by means of a conversion algorithm. Tumour volumes and localization were determined in both software stages for comparison analysis. The approach was applied to three clinical cases. A conversion algorithm was developed to translate the tumour delineation data to the 3D virtual plan environment. The average difference in tumour volume was 1.7%. This study reports a validated software pathway providing multi-modality image fusion for 3D virtual surgical planning. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  17. Satellite image fusion based on principal component analysis and high-pass filtering.

    PubMed

    Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E

    2010-06-01

    This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
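
    A minimal sketch of how the PCA and HPF ideas might be integrated: high-pass pan detail is injected into the first principal component of the (upsampled) MS cube (the gain, filter width, and injection site are illustrative assumptions):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def pca_hpf_pansharpen(ms, pan, gain=1.0, sigma=2.0):
          # ms: (H, W, B) MS cube resampled to the pan grid; pan: (H, W)
          h, w, b = ms.shape
          X = ms.reshape(-1, b).astype(float)
          mean = X.mean(axis=0)
          cov = np.cov(X - mean, rowvar=False)
          vals, vecs = np.linalg.eigh(cov)
          vecs = vecs[:, np.argsort(vals)[::-1]]  # components by decreasing variance
          pcs = (X - mean) @ vecs
          # Inject high-frequency pan detail into the first principal component
          detail = (pan - gaussian_filter(pan, sigma)).reshape(-1)
          pcs[:, 0] += gain * detail
          return (pcs @ vecs.T + mean).reshape(h, w, b)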

  18. LIME: 3D visualisation and interpretation of virtual geoscience models

    NASA Astrophysics Data System (ADS)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, creates novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain in the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement are essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight, high-performance 3D software tool for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets, and georeferenced maps and images. High-quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. The background of the software is described, and case studies from outcrop geology, hyperspectral mineral mapping, and geophysical-geospatial data integration showcase the novel methods developed.

  19. Correlations with operative anatomy of real time three-dimensional echocardiographic imaging of congenital aortic valvar stenosis.

    PubMed

    Sadagopan, Shankar N; Veldtman, Gruschen R; Sivaprakasam, Muthukumaran C; Keeton, Barry R; Gnanapragasam, James P; Salmon, Anthony P; Haw, Marcus P; Vettukattil, Joseph J

    2006-10-01

    To define the anatomic characteristics of the congenitally malformed and severely stenotic aortic valve using trans-thoracic real time three-dimensional echocardiography, and to compare and contrast this with the valvar morphology as seen at surgery. Prospective cross-sectional observational study. Tertiary centre for paediatric cardiology. All patients requiring aortic valvotomy between December 2003 and July 2004 were evaluated prior to surgery with three-dimensional echocardiography. Full volume loop images were acquired using the Philips Sonos 7500 system. A single observer analysed the images using "Q lab 4.1" software. The details were then compared with operative findings. We identified 8 consecutive patients, with a median age of 16 weeks, ranging from 1 day to 11 years, and a median weight of 7.22 kilograms, ranging from 2.78 to 22 kilograms. The measured diameter of the valvar orifice, and the number of leaflets identified, corresponded closely with the surgical assessment. The sites of fusion of the leaflets were correctly identified by the echocardiographic imaging in all cases. Fusion between the right and non-coronary leaflets was identified in half the patients. Dysplasia was observed in 3 patients, with 1 patient having nodules and 2 shown to have excrescences. At surgery, nodules were excised and excrescences were trimmed. The dysplastic changes correlated well with operative findings, although the correlation did not reach statistical significance. We recommend trans-thoracic real time three-dimensional echocardiography for the assessment of the congenitally malformed aortic valve, particularly to identify sites of fusion between leaflets and to measure the orificial diameter. The definition of nodularity, and the prognosis of nodules based on the mode of intervention, will need a comparative study of patients submitted to balloon dilation as well as those undergoing surgical valvotomy.

  20. Multi-focus image fusion and robust encryption algorithm based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong

    2017-06-01

    Multi-focus image fusion schemes have been studied in recent years. However, little work has been done on the transmission security of multi-focus images. This paper proposes a scheme that reduces the data transmission volume and resists various attacks. First, multi-focus image fusion based on wavelet decomposition generates complete scene images and optimizes perception by the human eye. The fused images are sparsely represented with the DCT and sampled with a structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. The obtained measurements are then further encrypted, to resist noise and cropping attacks, by combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
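
    A minimal sketch of SRM-style sampling (random sign modulation and permutation, a fast orthonormal transform, then random subsampling); the transform choice and parameters are illustrative:

      import numpy as np
      from scipy.fft import dct

      def srm_measure(signal, m, seed=0):
          rng = np.random.default_rng(seed)
          n = signal.size
          x = signal * rng.choice([-1.0, 1.0], size=n)  # random sign flips
          x = x[rng.permutation(n)]                     # random permutation
          y = dct(x, norm='ortho')                      # fast orthonormal transform
          idx = rng.choice(n, size=m, replace=False)    # keep m random measurements
          return y[idx], idx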

  1. ChimeRScope: a novel alignment-free algorithm for fusion transcript prediction using paired-end RNA-Seq data

    PubMed Central

    Li, You; Heavican, Tayla B.; Vellichirammal, Neetha N.; Iqbal, Javeed

    2017-01-01

    The RNA-Seq technology has revolutionized transcriptome characterization, not only by accurately quantifying gene expression but also by enabling the identification of novel transcripts such as chimeric fusion transcripts. These 'fusion' or 'chimeric' transcripts have improved the diagnosis and prognosis of several tumors and have led to the development of novel therapeutic regimens. Fusion transcript detection is currently accomplished by several software packages, primarily relying on sequence alignment algorithms. The alignment of sequencing reads from fusion transcript loci in cancer genomes can be highly challenging due to the incorrect mapping induced by genomic alterations, thereby limiting the performance of alignment-based fusion transcript detection methods. Here, we developed a novel alignment-free method, ChimeRScope, that accurately predicts fusion transcripts based on the gene fingerprint (k-mer) profiles of the RNA-Seq paired-end reads. Results on published datasets and in-house cancer cell line datasets, followed by experimental validations, demonstrate that ChimeRScope consistently outperforms other popular methods irrespective of read length and sequencing depth. More importantly, results on our in-house datasets show that ChimeRScope is a better tool, capable of identifying novel fusion transcripts with potential oncogenic functions. ChimeRScope is accessible as a standalone software at (https://github.com/ChimeRScope/ChimeRScope/wiki) or via the Galaxy web-interface at (https://galaxy.unmc.edu/). PMID:28472320
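
    A minimal sketch of the gene-fingerprint idea (the fingerprint data structure and the two-gene threshold are hypothetical, not ChimeRScope's actual internals):

      def kmer_set(read, k=31):
          # All k-mers of a sequencing read
          return {read[i:i + k] for i in range(len(read) - k + 1)}

      def candidate_fusion_genes(read, fingerprints, k=31):
          # fingerprints: dict mapping gene name -> set of k-mers (hypothetical)
          kmers = kmer_set(read, k)
          hits = {gene for gene, fp in fingerprints.items() if kmers & fp}
          # A read matching fingerprints of two or more genes hints at a fusion junction
          return hits if len(hits) >= 2 else set()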

  2. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

    Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of the employed images, captured from different sensors or under different conditions, through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image that is high-resolution or more perceptible for humans and machines is created from a time series of low-quality images, based on image registration between different video frames.

  3. The coming of age of the first hybrid metrology software platform dedicated to nanotechnologies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Foucher, Johann; Labrosse, Aurelien; Dervillé, Alexandre; Zimmermann, Yann; Bernard, Guilhem; Martinez, Sergio; Grönqvist, Hanna; Baderot, Julien; Pinzan, Florian

    2017-03-01

    The development and integration of new materials and structures at the nanoscale require multiple parallel characterizations in order to control mostly physico-chemical properties as a function of the application. Among these properties are physical properties such as size, shape, specific surface area, aspect ratio, agglomeration/aggregation state, size distribution, surface morphology/topography, structure (including crystallinity and defect structure), and solubility, and chemical properties such as structural formula/molecular structure, composition (including degree of purity, known impurities or additives), phase identity, surface chemistry (composition, charge, tension, reactive sites, physical structure, photocatalytic properties, zeta potential), and hydrophilicity/lipophilicity. Depending on the final material formulation (aerosol, powder, nanostructuration…) and the industrial application (semiconductor, cosmetics, chemistry, automotive…), a fleet of complementary characterization equipment must be used in synergy for accurate process tuning and high production yield. This synergy between equipment, so-called hybrid metrology, consists in using the strength of each technique in order to reduce the global uncertainty for better and faster process control. The only way to succeed in this exercise is to use a data fusion methodology. In this paper, we introduce the work that has been done to create the first generic hybrid metrology software platform dedicated to nanotechnology process control. The first part is dedicated to process flow modeling related to a fleet of metrology tools. The second part introduces the concept of the entity model, which describes the various parameters that have to be extracted; the entity model is fed with data analysis as a function of the application (automatic or semi-automated analysis). The final part introduces two ways of performing data fusion on real data coming from imaging (SEM, TEM, AFM) and non-imaging techniques (SAXS). The first approach is high-level fusion, the art of combining various populations of results from homogeneous or heterogeneous tools, taking into account the precision and repeatability of each of them, to obtain a new, more accurate result. The second approach is deep-level fusion, the art of combining raw data from various tools in order to create new raw data; we introduce a new concept of virtual tool creation based on deep-level fusion. As a conclusion, we discuss the implementation of hybrid metrology in the semiconductor environment for advanced process control.

  4. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
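
    A minimal sketch of the two on-board degradation steps (the blur width, decimation factor, and flat spectral response are illustrative assumptions):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def degrade_for_downlink(hsi, factor=4, n_ms_bands=8):
          # hsi: (H, W, B) hyperspectral cube
          # Spatial degradation: blur, then decimate, giving a low-resolution HSI
          blurred = gaussian_filter(hsi.astype(float), sigma=(factor, factor, 0))
          lr_hsi = blurred[::factor, ::factor, :]
          # Spectral degradation: average groups of adjacent bands, giving a HR MSI
          edges = np.linspace(0, hsi.shape[2], n_ms_bands + 1).astype(int)
          hr_msi = np.stack([hsi[:, :, s:e].mean(axis=2)
                             for s, e in zip(edges[:-1], edges[1:])], axis=2)
          return lr_hsi, hr_msi  # the pair downlinked for fusion on the ground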

  5. Image processing with cellular nonlinear networks implemented on field-programmable gate arrays for real-time applications in nuclear fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palazzo, S.; Vagliasindi, G.; Arena, P.

    2010-08-15

    In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in case of critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET wide angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8-bit 496×560 images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN - unlike software CNN implementations.

  6. Three-Dimensional Liver Surgery Simulation: Computer-Assisted Surgical Planning with Three-Dimensional Simulation Software and Three-Dimensional Printing.

    PubMed

    Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-06-01

    To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationships among the branches of Glisson's sheath, the hepatic veins, and the tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation by 3D analysis software for liver surgery have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize the anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed an original software package, called Liversim, which enables real-time deformation of the liver using physical simulation, and a randomized controlled trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, offers better visibility, and its production cost is one-third that of a previous model. Preoperative simulation and navigation with CAS in liver resection are expected to help in planning and conducting surgery and in surgical education. Thus, a novel CAS system will contribute not only to the performance of reliable hepatectomy but also to surgical education.

  7. Poster — Thur Eve — 09: Evaluation of electrical impedance and computed tomography fusion algorithms using an anthropomorphic phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff

    2014-08-15

    Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report the results of our study comparing the fusion of EIT with CT using three different image fusion algorithms, namely weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing their pixels with the pixels of the EIT image. The three algorithms were applied to a CT and EIT image of an anthropomorphic phantom, constructed from five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using Detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yield more consistent and optimal fusion performance than weighted averaging.
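
    The ROI indexing rule reduces to a masked pixel replacement; a minimal sketch (assuming co-registered images and a boolean ROI mask):

      import numpy as np

      def roi_indexing_fuse(ct, eit, roi_mask):
          # Replace segmented ROI pixels of the CT image with EIT pixels
          fused = ct.astype(float).copy()
          fused[roi_mask] = eit[roi_mask]
          return fused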

  8. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for presenting imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon imagery and videos collected from settings typical of military environments. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as outputs of the fusion system. The observer preferences can provide guidelines as to how specific scenarios should affect the presentation of fused imagery.

  9. GIS-based automated management of highway surface crack inspection system

    NASA Astrophysics Data System (ADS)

    Chung, Hung-Chi; Shinozuka, Masanobu; Soeller, Tony; Girardello, Roberto

    2004-07-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using GIS concepts and tools for PMS applications will reshape PMS into a new information technology-based system that can provide convenient and efficient pavement inspection and management.

  10. Benchmarking of data fusion algorithms in support of earth observation based Antarctic wildlife monitoring

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.

    2016-03-01

    Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms: Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion, to resolution-enhance a series of single-date QuickBird-2 and Worldview-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
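
    As one example of the kind of quantitative quality indicator used in such benchmarks, a per-band correlation of the fused product against a reference (a minimal sketch; the actual indicator set in the study is broader):

      import numpy as np

      def band_correlations(fused, reference):
          # fused, reference: (H, W, B) image cubes on the same grid
          return np.array([
              np.corrcoef(fused[:, :, b].ravel(), reference[:, :, b].ravel())[0, 1]
              for b in range(reference.shape[2])
          ])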

  11. Research on polarization imaging information parsing method

    NASA Astrophysics Data System (ADS)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

    Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly comprising polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. The research achievements for each stage are then presented. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. For the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with markedly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters is given using fuzzy integrals and sparse representation, and target detection in complex scenes is accomplished with a clustering-based image segmentation algorithm built on fractal characteristics. For polarization image tracking, a fusion tracking algorithm combining mean-shift on polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.

  12. Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition.

    PubMed

    Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria

    2014-08-30

    Standardized techniques to detect HIV-neutralizing antibody responses are of great importance in the search for an HIV vaccine. Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay. Neutralization of virus particles is measured as a reduction in the number of fluorescent plaques, and inhibition of cell-cell fusion as a reduction in plaque area. We found neutralization strength to be a significant factor in the ability of virus to form syncytia. Further, we introduce the inhibitory concentration of plaque area reduction (ICpar) as an additional measure of antiviral activity, i.e., fusion inhibition. We present an automated image-based high-throughput, high-content HIV plaque reduction assay. This allows, for the first time, simultaneous evaluation of neutralization and inhibition of cell-cell fusion within the same assay, by quantifying the reduction in the number of plaques and the mean plaque area, respectively. Inhibition of cell-to-cell fusion requires higher quantities of inhibitory reagent than inhibition of virus neutralization.

  13. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    To address the issue that available fusion methods cannot self-adaptively adjust their fusion rules according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs an objective function as the weighted sum of evaluation indices and optimizes this objective function with GSDA so as to obtain a higher-resolution fused RS image. The main points are summarized as follows:
    •The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    •This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
    •This text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
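
    The paper's exact evaluation indices are not listed; the sketch below illustrates the general idea of an objective function built as a weighted sum of common fusion quality indices (entropy, standard deviation, average gradient), which a GA-style optimizer could maximize. The specific indices and weights are assumptions:

    ```python
    import numpy as np

    def entropy(img):
        # Shannon entropy of an 8-bit grayscale image
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def avg_gradient(img):
        # mean magnitude of the intensity gradient
        gy, gx = np.gradient(img.astype(float))
        return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

    def objective(fused, weights=(0.4, 0.3, 0.3)):
        """Weighted sum of evaluation indices; a GA-style optimizer
        would maximize this over candidate fusion parameters."""
        scores = np.array([entropy(fused), fused.std(), avg_gradient(fused)])
        return float(np.dot(weights, scores))
    ```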

  14. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or frequencies and produces a fused false-color image with higher information content than either original, in which objects are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two originals; second, compute the generalized high-boost filtered images between the fused gray-level image and the two source images, respectively; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than either original while reducing noise. However, the fused gray-level image cannot contain all of the detail in the two sources, and details in a gray-level image are not as easily discerned as in a color image, so a color fused image is necessary. To create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels, respectively, producing the fused false-color image. This method was used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details, and its resolution equals that of the input images.
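
    A minimal sketch of the three-step scheme, assuming a Burt-Kolczynski-style averaging/selection rule and an unsharp-masking form of the generalized high-boost filter (both are assumptions; the paper's exact operators are not specified):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def local_energy(x, size=5):
        # local salience as windowed variance
        m = uniform_filter(x, size)
        return uniform_filter(x * x, size) - m * m

    def false_color_fusion(a, b, k=1.5, sigma=2.0):
        """Fuse two 8-bit gray-level images into a false-color image."""
        a = a.astype(float); b = b.astype(float)
        # 1) hybrid averaging/selection: average where the sources agree
        # locally, select the more salient source where they disagree
        ea, eb = local_energy(a), local_energy(b)
        match = 2 * uniform_filter((a - uniform_filter(a, 5)) *
                                   (b - uniform_filter(b, 5)), 5) / (ea + eb + 1e-9)
        avg = 0.5 * (a + b)
        sel = np.where(ea >= eb, a, b)
        fused = np.where(match > 0.75, avg, sel)   # 0.75: illustrative threshold
        # 2) generalized high-boost images between the fused image and sources
        hb = [fused + k * (fused - gaussian_filter(s, sigma)) for s in (a, b, fused)]
        # 3) route the three high-boost images through the R, G, B channels
        return np.clip(np.stack(hb, axis=-1), 0, 255).astype(np.uint8)
    ```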

  15. Study on Hybrid Image Search Technology Based on Texts and Contents

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.

    2018-05-01

    Text-based and content-based image search were first studied separately. For text-based search, an image feature extraction method integrating statistical and topic features was put forward, in view of the limitation of extracting keywords from the statistical features of words alone. For content-based search, a search-by-image method based on multi-feature fusion was put forward, in view of the imprecision of content-based search using a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered searching method was then put forward that relies primarily on text-based search and secondarily on content-based search. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.

  16. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy; this index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the differing degrees of the various features and the infrared intensity images are used as initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of NMF initialization. Finally, the fused images of the different algorithms are integrated using NMF because of its excellent performance in fusing independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of fused images obtained using the proposed method are better than those obtained using traditional methods, while the advantages of the individual fusion algorithms are retained.

  17. A review of potential image fusion methods for remote sensing-based irrigation management: Part II

    USDA-ARS?s Scientific Manuscript database

    Satellite-based sensors provide data at either greater spectral and coarser spatial resolutions, or lower spectral and finer spatial resolutions due to complementary spectral and spatial characteristics of optical sensor systems. In order to overcome this limitation, image fusion has been suggested ...

  18. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and on patch-based techniques have become the two principal branches of label fusion. However, these two branches are only loosely related, and the demands for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch-based latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we adopt the Kronecker delta function as the label prior, which is more suitable than other models, and design a latent selective model as a membership prior that determines from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label, giving the background the same privilege as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
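
    The latent selective model itself is involved; for orientation, the sketch below shows plain intensity-weighted voting label fusion, the baseline that such generative label fusion methods refine. The Gaussian similarity weight and `beta` are illustrative assumptions:

    ```python
    import numpy as np

    def weighted_voting(atlas_labels, atlas_intensities, target, beta=1.0):
        """Weighted-voting label fusion: each registered atlas votes for
        its label with a voxelwise weight derived from its intensity
        similarity to the target image."""
        votes = {}
        for labels, intens in zip(atlas_labels, atlas_intensities):
            w = np.exp(-beta * (intens - target) ** 2)  # similarity weight
            for lab in np.unique(labels):
                votes.setdefault(lab, np.zeros(target.shape))
                votes[lab] += w * (labels == lab)
        labs = sorted(votes)
        stacked = np.stack([votes[l] for l in labs])
        # per-voxel winner of the weighted vote
        return np.array(labs)[np.argmax(stacked, axis=0)]
    ```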

  19. Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.

    2018-04-01

    Due to the improved radiometric resolution of satellites, the color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, completing the mosaic and color-uniformity process for satellite images has always been an important problem in image processing. First, using the bundle uniform color method and the least-squares mosaic method of GXL together with a dodging function, a uniform transition of color and brightness can be realized across large-area, multi-temporal satellite images. Second, Color Mapping software is used to convert 16-bit mosaic images to 8-bit mosaic images, applying the uniform color method guided by low-resolution reference images. Finally, qualitative and quantitative analytical methods are used to evaluate the satellite imagery after mosaicking and color uniformity. The test shows that the correlation between mosaic images before and after coloring is higher than 95%, the image information entropy increases, and texture features are enhanced, as proved by the calculation of quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas has thus been well implemented.
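
    As a rough stand-in for the uniform-color and 16-bit-to-8-bit steps, the sketch below matches each mosaic tile's histogram to a low-resolution reference and applies a percentile stretch; the skimage-based matching and the 2% stretch are assumptions, not the GXL/Color Mapping workflow:

    ```python
    import numpy as np
    from skimage.exposure import match_histograms  # skimage >= 0.19

    def harmonize_tiles(tiles, reference):
        """Map each tile's color distribution onto a low-resolution
        reference image to smooth color transitions across the mosaic."""
        return [match_histograms(t, reference, channel_axis=-1) for t in tiles]

    def to_8bit(img16):
        # linear 2% stretch from 16-bit to 8-bit, a common radiometric rescale
        lo, hi = np.percentile(img16, (2, 98))
        out = np.clip((img16.astype(float) - lo) / (hi - lo), 0, 1)
        return (out * 255).astype(np.uint8)
    ```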

  20. Ultrasound fusion image error correction using subject-specific liver motion model and automatic image registration.

    PubMed

    Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi

    2016-12-01

    Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) Build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the dependency of error correction on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Target detection method by airborne and spaceborne images fusion based on past images

    NASA Astrophysics Data System (ADS)

    Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng

    2017-11-01

    To address the problems that remote sensing target detection methods make low use of past remote sensing data over a target area and cannot accurately recognize camouflaged targets, a target detection method based on the fusion of airborne and spaceborne images with past imagery is proposed in this paper. Past spaceborne remote sensing images of the target area are taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change-feature extraction, background noise suppression, and artificial-target feature extraction based on real-time aerial optical remote sensing images. Finally, a support vector machine is used to detect and recognize targets from the fused feature data. The experimental results establish that the proposed method combines the change features of the target area in airborne and spaceborne remote sensing images with the target detection algorithm, and achieves good detection and recognition of both camouflaged and non-camouflaged targets.

  2. Multimodal Medical Image Fusion by Adaptive Manifold Filter.

    PubMed

    Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna

    2015-01-01

    Medical image fusion plays an important role in the diagnosis and treatment of diseases, such as in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. First, the adaptive manifold filter is introduced to filter the source images, producing the low-frequency part of the modified local contrast. Second, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measures are on average 13%, 33%, and 14% higher, for the six pairs of source images.
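
    A minimal sketch of the selection rule, with a Gaussian filter standing in for the adaptive manifold filter and a simple high/low ratio standing in for the modified local contrast (both substitutions are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def modified_local_contrast(img, sigma=3.0, eps=1e-6):
        """Local contrast as high-frequency part over low-frequency part.
        A Gaussian filter stands in for the adaptive manifold filter."""
        low = gaussian_filter(img.astype(float), sigma)
        high = np.abs(img - low)   # stand-in for modified spatial frequency
        return high / (low + eps)

    def fuse(a, b):
        # keep, per pixel, the source with the larger modified local contrast
        mask = modified_local_contrast(a) >= modified_local_contrast(b)
        return np.where(mask, a, b)
    ```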

  3. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion constructs the fusion coefficients at the decomposition level, making the images separate correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfactory. PMID:28558064

  4. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion constructs the fusion coefficients at the decomposition level, making the images separate correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfactory.

  5. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    NASA Astrophysics Data System (ADS)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.

  6. General software design for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Junliang; Zhao, Yuming

    1999-03-01

    In this paper a general method of software design for multisensor data fusion is discussed in detail, adopting object-oriented technology under the UNIX operating system. The software for multisensor data fusion is divided into six functional modules, including data collection, database management, GIS, target display and alarming, and data simulation. Furthermore, the primary function, the components, and some realization methods of each module are given, and the interfaces among these functional modules are discussed. Data exchange among the modules is performed by interprocess communication (IPC), including message queues, semaphores, and shared memory. Thus, each functional module executes independently, which reduces the dependence among modules and helps software programming and testing. The software for multisensor data fusion is designed as a hierarchical structure using the inheritance property of classes. Each functional module is abstracted and encapsulated in a class structure, which avoids software redundancy and enhances readability.
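
    The original design used UNIX IPC from C; as a loose analogue, the sketch below wires two "modules" together with a message queue in Python's multiprocessing, illustrating the decoupled module communication the paper describes. Module names and message contents are hypothetical:

    ```python
    from multiprocessing import Process, Queue

    def collector(q):
        # data-collection module: push simulated sensor reports
        for i in range(3):
            q.put({"sensor": "radar", "track": i})
        q.put(None)                       # end-of-stream marker

    def fusion(q):
        # fusion module: consume reports until the marker arrives
        while (msg := q.get()) is not None:
            print("fusing", msg)

    if __name__ == "__main__":
        q = Queue()                       # message-queue-style IPC channel
        procs = [Process(target=collector, args=(q,)),
                 Process(target=fusion, args=(q,))]
        [p.start() for p in procs]
        [p.join() for p in procs]
    ```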

  7. Fusion of infrared polarization and intensity images based on improved toggle operator

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua

    2018-01-01

    The integration of infrared polarization and intensity images is a new topic in infrared image understanding and interpretation. The abundant infrared detail and targets from the intensity image, and the salient edge and shape information from the polarization image, should be preserved or even enhanced in the fused result. In this paper, a new fusion method for infrared polarization and intensity images is proposed based on an improved multi-scale toggle operator with spatial scale, which effectively extracts the feature information of the source images and heavily reduces redundancy among different scales. First, the multi-scale image features of the infrared polarization and intensity images are extracted at different scale levels by the improved multi-scale toggle operator. Second, the redundancy of the features among different scales is reduced using the spatial scale. Third, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying mean-value weighting to the smoothed source images. Finally, the fusion image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective inspection of the experimental results indicate that the proposed method performs better in preserving detail and edge information and in improving image contrast.
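
    For reference, a sketch of the classic toggle contrast operator and a multi-scale feature stack built from it; this is the textbook operator, not the paper's improved version with spatial-scale redundancy reduction:

    ```python
    import numpy as np
    from scipy.ndimage import grey_dilation, grey_erosion

    def toggle(img, size):
        """Classic toggle contrast operator: push each pixel toward the
        nearer of its local dilation and erosion."""
        d = grey_dilation(img, size=(size, size))
        e = grey_erosion(img, size=(size, size))
        return np.where(d - img < img - e, d, e)

    def multiscale_features(img, scales=(3, 7, 11)):
        """Feature maps as |toggle(img) - img| per structuring-element
        scale (a simplified stand-in for the improved operator)."""
        img = img.astype(float)
        return [np.abs(toggle(img, s) - img) for s in scales]
    ```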

  8. An acceleration system for Laplacian image fusion based on SoC

    NASA Astrophysics Data System (ADS)

    Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng

    2018-04-01

    Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partially pipelined, modular processing architecture, and a SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules in series form the partial pipeline with a unified data format, which eases management and reuse. Integrated with an ARM processor, DMA, and an embedded bare-metal program, the system realizes a 4-layer Laplacian pyramid on a Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining a fine fusion effect.
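
    The hardware pipeline itself is not reproducible here, but the underlying algorithm is standard; a software sketch of 4-level Laplacian pyramid fusion with one common rule choice (max-absolute on detail levels, averaging at the coarsest level):

    ```python
    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        gp = [img.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = []
        for i in range(levels):
            up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
            lp.append(gp[i] - up)          # detail (band-pass) level
        lp.append(gp[-1])                  # coarsest Gaussian level
        return lp

    def fuse_laplacian(a, b, levels=4):
        """Fuse two grayscale 8-bit images of equal power-of-two size."""
        la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
        fused = [np.where(np.abs(x) >= np.abs(y), x, y)
                 for x, y in zip(la[:-1], lb[:-1])]
        fused.append(0.5 * (la[-1] + lb[-1]))
        out = fused[-1]
        for lev in reversed(fused[:-1]):   # collapse the pyramid
            out = cv2.pyrUp(out, dstsize=(lev.shape[1], lev.shape[0])) + lev
        return np.clip(out, 0, 255).astype(np.uint8)
    ```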

  9. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain.

    PubMed

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-04-11

    Many artificial parameters are involved when fusing infrared and visible images. To overcome the lack of detail in fused images caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so that the fused image obtains more detail, while the low-frequency bands are fused under a saliency constraint, making the targets more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods.

  10. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    NASA Astrophysics Data System (ADS)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized and the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances retrieval precision. We conducted intensive experiments on small- to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13%, 15%, and 15%, respectively.
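
    The paper's exact robust metric is not given; a common robust (partial) Hausdorff variant replaces the max of nearest-neighbor distances with a quantile, as sketched below (the 0.9 quantile is an illustrative choice):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def robust_hausdorff(A, B, q=0.9):
        """Robust (partial) Hausdorff distance between point sets A and B:
        the q-quantile of nearest-neighbor distances replaces the max,
        suppressing the influence of cluttered-background outliers."""
        d_ab = cKDTree(B).query(A)[0]   # NN distance from each point of A to B
        d_ba = cKDTree(A).query(B)[0]
        return max(np.quantile(d_ab, q), np.quantile(d_ba, q))
    ```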

  11. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain

    PubMed Central

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-01-01

    Many artificial parameters are involved when fusing infrared and visible images. To overcome the lack of detail in fused images caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so that the fused image obtains more detail, while the low-frequency bands are fused under a saliency constraint, making the targets more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods. PMID:29641505

  12. Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation

    PubMed Central

    Wang, Hongzhi; Yushkevich, Paul A.

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427

  13. Object-Based Image Analysis of WORLDVIEW-2 Satellite Data for the Classification of Mangrove Areas in the City of SÃO LUÍS, MARANHÃO State, Brazil

    NASA Astrophysics Data System (ADS)

    Kux, H. J. H.; Souza, U. D. V.

    2012-07-01

    Taking into account the importance of mangrove environments for the biodiversity of coastal areas, the objective of this paper is to classify the different types of irregular human occupation in areas of mangrove vegetation in São Luis, capital of Maranhão State, Brazil, using the OBIA (Object-Based Image Analysis) approach with WorldView-2 satellite data and InterIMAGE, a free image analysis software. A methodology for the study of the area covered by mangroves in the northern portion of the city was proposed to identify the main targets of this area, such as marsh areas (known locally as Apicum), mangrove forests, tidal channels, blockhouses (irregular constructions), embankments, paved streets, and different condominiums. Initially, a databank including information on the main types of occupation and environments was established for the study area. An image fusion (multispectral bands with the panchromatic band) was performed to improve the information content of the WorldView-2 data. Next, ortho-rectification of the dataset was carried out for comparison with cartographic data from the municipality, using Ground Control Points (GCPs) collected during a field survey. Using the data mining software GEODMA, a series of attributes characterizing the targets of interest was established. Afterwards the classes were structured, a knowledge model was created, and the classification performed. The OBIA approach eased the mapping of such sensitive areas, showing the irregular occupations and embankments of mangrove forests, which reduce their area and damage marine biodiversity.

  14. Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task.

    PubMed

    G Seco de Herrera, Alba; Schaer, Roger; Markonis, Dimitrios; Müller, Henning

    2015-01-01

    Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify the approaches best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Fusion Analytics: A Data Integration System for Public Health and Medical Disaster Response Decision Support

    PubMed Central

    Passman, Dina B.

    2013-01-01

    Objective The objective of this demonstration is to show conference attendees how they can integrate, analyze, and visualize diverse types of data from across a variety of systems by leveraging an off-the-shelf enterprise business intelligence (EBI) solution to support decision-making in disasters. Introduction Fusion Analytics is the data integration system developed by the Fusion Cell at the U.S. Department of Health and Human Services (HHS), Office of the Assistant Secretary for Preparedness and Response (ASPR). Fusion Analytics meaningfully augments traditional public and population health surveillance reporting by providing web-based data analysis and visualization tools. Methods Fusion Analytics serves as a one-stop shop for web-based data visualizations of multiple real-time data sources within ASPR. The 24-7 web availability makes it an ideal analytic tool for situational awareness and response, allowing stakeholders to access the portal from any internet-enabled device without installing any software. The Fusion Analytics data integration system was built using off-the-shelf EBI software. Fusion Analytics leverages the full power of statistical analysis software and delivers reports to users in a secure web-based environment. Fusion Analytics provides an example of how public health staff can develop and deploy a robust public health informatics solution using an off-the-shelf product and with limited development funding. It also provides the unique example of a public health information system that combines patient data for traditional disease surveillance with manpower and resource data to provide overall decision support for federal public health and medical disaster response operations. Conclusions We are currently in a unique position within public health. On the one hand, we have been gaining greater and greater access to electronic data of all kinds over the last few years. On the other, we are working in a time of reduced government spending to support leveraging this data for decision support with robust analytics and visualizations. Fusion Analytics provides an opportunity for attendees to see how various types of data are integrated into a single application for population health decision support. It can also provide them with ideas about how to use their own staff to create analyses and reports that support their public health activities.

  16. Fusion of PAN and multispectral remote sensing images in shearlet domain by considering regional metrics

    NASA Astrophysics Data System (ADS)

    Poobalasubramanian, Mangalraj; Agrawal, Anupam

    2016-10-01

    The presented work proposes fusion of panchromatic and multispectral images in the shearlet domain. The proposed fusion rules rely on regional considerations, which makes the system efficient in terms of spatial enhancement. A luminance-hue-saturation-based color conversion system is utilized to avoid spectral distortions. The proposed fusion method is tested on Worldview2 and Ikonos datasets and compared against other methodologies, performing well in terms of both subjective and objective evaluations.

  17. Do High Dynamic Range treatments improve the results of Structure from Motion approaches in Geomorphology?

    NASA Astrophysics Data System (ADS)

    Gómez-Gutiérrez, Álvaro; Juan de Sanjosé-Blasco, José; Schnabel, Susanne; de Matías-Bejarano, Javier; Pulido-Fernández, Manuel; Berenguer-Sempere, Fernando

    2015-04-01

    In this work, the hypothesis that 3D models obtained with Structure from Motion (SfM) approaches can be improved by using images pre-processed with High Dynamic Range (HDR) techniques is tested. Photographs of the Veleta Rock Glacier in Spain were captured with different exposure values (EV0, EV+1 and EV-1), two focal lengths (35 and 100 mm), and under different weather conditions in the years 2008, 2009, 2011, 2012 and 2014. HDR images were produced from the different EV steps with the Fusion F.1 software. Point clouds were generated using commercial and freely available SfM software: Agisoft Photoscan and 123D Catch. Models obtained using pre-processed and non-preprocessed images were compared in a 3D environment with a benchmark 3D model obtained by means of a Terrestrial Laser Scanner (TLS). A total of 40 point clouds were produced, georeferenced, and compared. Results indicated that for the Agisoft Photoscan software, differences in accuracy between models obtained with pre-processed and non-preprocessed images were not statistically significant. However, in the case of the freely available software 123D Catch, models obtained using images pre-processed by HDR techniques presented a higher point density and were more accurate. This tendency was observed across the 5 studied years and under different capture conditions. More work should be done in the near future to corroborate whether the results of similar software packages can be improved by HDR techniques (e.g. ARC3D, Bundler and PMVS2, CMP SfM, Photosynth and VisualSFM).

  18. Software for Partly Automated Recognition of Targets

    NASA Technical Reports Server (NTRS)

    Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark

    2003-01-01

    The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). It provides an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.

  19. Research on fusion algorithm of polarization image in tetrolet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes, shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the polarization magnitude image and the polarization angle image are decomposed into low-frequency and high-frequency coefficients at multiple scales and directions using the tetrolet transform. For the low-frequency coefficients, average fusion is used. For the directional high-frequency coefficients, the better coefficients are selected for fusion by a region spectral entropy algorithm, according to the edge distribution differences in the high-frequency sub-band images. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method detects image features more effectively and the fused image has a better subjective visual effect.

  20. Fast single image dehazing based on image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather often have faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, similar to Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial and coarse medium transmissions. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity than some state-of-the-art methods.
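
    A minimal sketch of the transmission pipeline as described: dark-channel-prior initial estimate, Gaussian-filtered coarse estimate, and pixel-level fusion of the two. The patch size, omega, and the 50/50 fusion weight are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import minimum_filter, gaussian_filter

    def initial_transmission(img, A, patch=15, omega=0.95):
        """Dark-channel-prior transmission estimate: t = 1 - omega times
        the min-filtered, atmosphere-normalized dark channel.
        img: HxWx3 float image; A: length-3 atmospheric light vector."""
        dark = minimum_filter((img / A).min(axis=2), size=patch)
        return 1.0 - omega * dark

    def fused_transmission(img, A, sigma=20, w=0.5):
        """Pixel-level fusion of the initial and Gaussian-smoothed
        coarse transmissions (the weight w is an illustrative choice)."""
        t0 = initial_transmission(img, A)
        t_coarse = gaussian_filter(t0, sigma)  # region-level degradation
        return w * t0 + (1 - w) * t_coarse
    ```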

  1. Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR

    NASA Astrophysics Data System (ADS)

    Sidorchuk, D.; Volkov, V.; Gladilin, S.

    2018-04-01

    This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain much complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified with satellite imagery.

  2. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.

  3. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution

    PubMed Central

    Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian

    2017-01-01

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images. PMID:29062159

  4. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution.

    PubMed

    Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian

    2017-03-01

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images.

  5. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) detection in vivo is vital in breast cancer surgery. New imaging software for a near-infrared fluorescence-based surgical navigation system (SNS), developed by our research group, is presented for SLN detection surgery in this paper. The software is built on the fluorescence-based surgical navigation hardware system (SNHS) developed in our lab and is designed specifically for intraoperative imaging and postoperative data analysis. It consists of the following modules: the control module, the image grabbing module, the real-time display module, the data saving module, and the image processing module. Several algorithms have been designed to achieve the required performance, for example, an image registration algorithm based on correlation matching. Key features of the software include: setting the control parameters of the SNS; acquiring, displaying, and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 breast cancer patients. In the near future, we plan to improve the software performance so that it can be used extensively for clinical purposes.
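
    A generic correlation-matching registration sketch (not the authors' code): a template from one modality is located in the other by normalized cross-correlation, yielding a translation estimate. Grayscale uint8 inputs and all names are assumptions:

    ```python
    import cv2
    import numpy as np

    def register_by_correlation(fluorescence, color, template_box):
        """Locate a fluorescence-image template inside the color image via
        normalized cross-correlation; returns the best-match offset."""
        x, y, w, h = template_box
        template = fluorescence[y:y + h, x:x + w]
        score = cv2.matchTemplate(color, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(score)
        dx, dy = max_loc[0] - x, max_loc[1] - y   # translation estimate
        return (dx, dy), max_val
    ```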

  6. Lineage mapper: A versatile cell and particle tracker

    NASA Astrophysics Data System (ADS)

    Chalfoun, Joe; Majurski, Michael; Dima, Alden; Halter, Michael; Bhadriraju, Kiran; Brady, Mary

    2016-11-01

    The ability to accurately track cells and particles from images is critical to many biomedical problems. To address this, we developed Lineage Mapper, an open-source tracker for time-lapse images of biological cells, colonies, and particles. Lineage Mapper tracks objects independently of the segmentation method, detects mitosis in confluence, separates cell clumps mistakenly segmented as a single cell, provides accuracy and scalability even on terabyte-sized datasets, and creates division and/or fusion lineages. Lineage Mapper has been tested and validated on multiple biological and simulated problems. The software is available in ImageJ and Matlab at isg.nist.gov.

  7. Preliminary Studies for a CBCT Imaging Protocol for Offline Organ Motion Analysis: Registration Software Validation and CTDI Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falco, Maria Daniela, E-mail: mdanielafalco@hotmail.co; Fontanarosa, Davide; Miceli, Roberto

    2011-04-01

    Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was found to be accurate, its registration matrix can be easily translated into the TPS, and a low dose is delivered to the patient during image acquisition. These results can help in designing imaging protocols for offline evaluations.

  8. Single-Side Two-Location Spotlight Imaging for Building Based on MIMO Through-Wall-Radar.

    PubMed

    Jia, Yong; Zhong, Xiaoling; Liu, Jiangang; Guo, Yong

    2016-09-07

    Through-wall-radar imaging is of interest for mapping the wall layout of buildings and for detecting stationary targets within them. In this paper, we present an easy single-side two-location spotlight imaging method for both wall layout mapping and stationary target detection using multiple-input multiple-output (MIMO) through-wall-radar. Rather than imaging the building walls directly, images of all building corners are generated to infer the wall layout indirectly, by successively deploying the MIMO through-wall-radar at two appropriate locations on only one side of the building and carrying out spotlight imaging with two different squint views. In addition to its ease of implementation, the single-side two-location squint-view detection has two further advantages for stationary target imaging: fewer multi-path ghosts, and a smaller region of side-lobe interference from the corner images in comparison to wall images. Based on the Computer Simulation Technology (CST) electromagnetic simulation software, we provide multiple sets of validation results in which binary panorama images with clear images of all corners and stationary targets are obtained by combining two single-location images through incoherent additive fusion and two-dimensional cell-averaging constant-false-alarm-rate (2D CA-CFAR) detection.
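
    For the detection step, a compact 2D CA-CFAR sketch using box filters to compute the training-ring average efficiently; the window sizes and threshold factor are illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ca_cfar_2d(power, train=8, guard=2, scale=3.0):
        """2D cell-averaging CFAR: compare each cell with the mean of a
        training ring (full window minus guard area) scaled by a
        threshold factor; returns a binary detection map."""
        win = 2 * (train + guard) + 1          # full window side length
        g = 2 * guard + 1                      # guard window side length
        sum_win = uniform_filter(power, win) * win**2     # window sums
        sum_guard = uniform_filter(power, g) * g**2
        n_train = win**2 - g**2                # training cells per window
        noise = (sum_win - sum_guard) / n_train           # ring average
        return power > scale * noise
    ```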

  9. Image fusion algorithm based on energy of Laplacian and PCNN

    NASA Astrophysics Data System (ADS)

    Li, Meili; Wang, Hongmei; Li, Yanjun; Zhang, Ke

    2009-12-01

    Owing to the global coupling and pulse synchronization characteristics of pulse coupled neural networks (PCNN), they have proved suitable for image processing and have been successfully employed in image fusion. However, in almost all of the PCNN image processing literature, the linking strength of every neuron is assigned the same value, chosen by experiment. This is not consistent with the human vision system, in which the response to regions with notable features is stronger than the response to regions without them. It is more reasonable to derive the linking strength of each neuron from notable features rather than assigning the same value everywhere. In this paper, the energy of Laplacian (EOL) is used as the notable feature from which the linking strength is obtained. Experimental results demonstrate that the proposed algorithm outperforms Laplacian-based, wavelet-based, and PCNN-based fusion algorithms.
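
    A sketch of how a per-neuron linking strength map could be derived from the local energy of Laplacian, normalized to [0, 1] (the normalization is an assumption; the paper's exact mapping may differ):

    ```python
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def eol_linking_strength(img, size=5):
        """Per-neuron linking strength from the windowed energy of
        Laplacian: strong responses where local detail is notable."""
        eol = uniform_filter(laplace(img.astype(float)) ** 2, size)
        return eol / (eol.max() + 1e-9)   # normalize to [0, 1]
    ```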

  10. Upgrading a ColdFusion-Based Academic Medical Library Staff Intranet

    ERIC Educational Resources Information Center

    Vander Hart, Robert; Ingrassia, Barbara; Mayotte, Kerry; Palmer, Lisa A.; Powell, Julia

    2010-01-01

    This article details the process of upgrading and expanding an existing academic medical library intranet to include a wiki, blog, discussion forum, and photo collection manager. The first version of the library's intranet from early 2002 was powered by ColdFusion software and existed primarily to allow staff members to author and store minutes of…

  11. Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy.

    PubMed

    Inoue, Hiroshi K; Nakajima, Atsushi; Sato, Hiro; Noda, Shin-Ei; Saitoh, Jun-Ichi; Suzuki, Yoshiyuki

    2015-03-01

    Precise target detection is essential for radiosurgery, neurosurgery and hypofractionated radiotherapy because treatment results and complication rates are related to the accuracy of the target definition. In skull base tumors and tumors around the optic pathways, exact anatomical evaluation of the cranial nerves is important to avoid adverse effects on these structures close to the lesions. Three-dimensional analysis of structures obtained with heavily T2-weighted MR images, and image fusion with thin-sliced CT sections, are desirable to evaluate fine structures during radiosurgery and microsurgery. In vascular lesions, angiography is most important for evaluating the whole structure from feeder to drainer, the shunt, blood flow, and risk factors for bleeding. However, exact sites and surrounding structures in the brain are not shown on angiography. True image fusion of angiography, MR images and CT on axial planes is ideal for precise target definition. In malignant tumors, especially recurrent head and neck tumors, biologically active areas of recurrent tumors are the main targets of radiosurgery. PET scanning is useful for quantitative evaluation of recurrences, but the examination is not always available at the time of radiosurgery. Image fusion of MR diffusion images with CT is always available during radiosurgery and is useful for detecting recurrent lesions. All images are fused and registered on thin-sliced CT sections, and exactly demarcated targets are planned for treatment. Follow-up images can also be registered on this CT, so exact assessment of target changes, including volume, is possible in this fusion system. The purpose of this review is to describe the usefulness of image fusion for 1) skull base, 2) vascular, and 3) recurrent target detection, and 4) follow-up analyses in radiosurgery, neurosurgery and hypofractionated radiotherapy.

  12. Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy

    PubMed Central

    Nakajima, Atsushi; Sato, Hiro; Noda, Shin-ei; Saitoh, Jun-ichi; Suzuki, Yoshiyuki

    2015-01-01

    Precise target detection is essential for radiosurgery, neurosurgery and hypofractionated radiotherapy because treatment results and complication rates are related to the accuracy of target definition. In skull base tumors and tumors around the optic pathways, exact anatomical evaluation of the cranial nerves is important to avoid adverse effects on these structures close to the lesions. Three-dimensional analyses of structures obtained with heavily T2-weighted MR images, and image fusion with thin-sliced CT sections, are desirable for evaluating fine structures during radiosurgery and microsurgery. In vascular lesions, angiography is most important for evaluating the whole structure from feeder to drainer, the shunt, blood flow and risk factors for bleeding. However, the exact sites and surrounding structures in the brain are not shown on angiography. True image fusion of angiography, MR images and CT on axial planes is ideal for precise target definition. In malignant tumors, especially recurrent head and neck tumors, the biologically active areas of recurrent tumors are the main targets of radiosurgery. PET scanning is useful for quantitative evaluation of recurrences; however, the examination is not always available at the time of radiosurgery. Image fusion of MR diffusion images with CT is always available during radiosurgery and is useful for detecting recurrent lesions. All images are fused and registered on thin-sliced CT sections, and exactly demarcated targets are planned for treatment. Follow-up images can also be registered on this CT, so exact assessment of target changes, including volume, is possible in this fusion system. The purpose of this review is to describe the usefulness of image fusion for 1) skull base, 2) vascular and 3) recurrent target detection, and 4) follow-up analyses in radiosurgery, neurosurgery and hypofractionated radiotherapy. PMID:26180676

  13. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but also a high false alarm rate owing to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces poor results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets; the proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature-selection based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635

  14. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but also a high false alarm rate owing to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces poor results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets; the proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature-selection based decision fusion on a synthetic database generated by OKTAL-SE.
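
    The fusion stage can be sketched in Python with scikit-learn's AdaBoost, whose default depth-1 stumps make each boosting round act as a feature selector. The four candidate features and the synthetic labels below are purely hypothetical stand-ins for descriptors computed from co-registered SAR and IR detections.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      rng = np.random.default_rng(0)
      n = 400
      # Hypothetical per-candidate features:
      # [SAR contrast, SAR extent, IR mean intensity, IR contrast]
      X = rng.normal(size=(n, 4))
      y = (0.8 * X[:, 0] + 1.2 * X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(int)

      # Each round of boosting fits a decision stump, i.e. picks the single
      # feature/threshold that best separates targets from clutter.
      clf = AdaBoostClassifier(n_estimators=50)
      clf.fit(X, y)
      print("training accuracy of fused detector:", clf.score(X, y))
      print("relative feature importances:", clf.feature_importances_)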

  15. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    PubMed

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of fusing a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, WorldView-2, IKONOS and GeoEye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
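
    A minimal Python sketch of detail injection with morphological half-gradients is given below; it is a simplified reading of the approach, with the structuring-element size, the half-gradient-difference detail and the plain additive injection all being assumptions rather than the paper's exact scheme.

      import numpy as np
      from scipy.ndimage import grey_dilation, grey_erosion, zoom

      def half_gradient_detail(pan, size=3):
          """Morphological-Laplacian-style detail from the panchromatic band."""
          g_plus = grey_dilation(pan, size=(size, size)) - pan  # external half-gradient
          g_minus = pan - grey_erosion(pan, size=(size, size))  # internal half-gradient
          return 0.5 * (g_minus - g_plus)   # positive on bright fine structures

      def pansharpen(ms_bands, pan, ratio=4):
          """Upsample each multispectral band and inject the PAN detail."""
          detail = half_gradient_detail(pan.astype(np.float64))
          return np.stack([zoom(band.astype(np.float64), ratio, order=1) + detail
                           for band in ms_bands])  # bands assumed pan.shape/ratio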

  16. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts toward automating ventricle segmentation and tracking in echocardiography, the problem remains challenging owing to low-quality images with missing anatomical details, speckle noise and a restricted field of view. This paper presents a fusion method that particularly aims to increase the segment-ability of echocardiography features, such as the endocardium, and to improve image contrast. In addition, it seeks to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information with an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, the results of several well-known techniques were compared with those of the proposed method, and different metrics were implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics.
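
    The integration of PCA and DWT can be sketched in Python as below: approximation bands are blended with weights from the leading eigenvector of their covariance, while detail bands follow a max-absolute rule. The wavelet, decomposition level and per-band rules are illustrative assumptions, not the paper's exact configuration.

      import numpy as np
      import pywt

      def pca_weights(a, b):
          """Fusion weights from the leading eigenvector of the 2x2 covariance."""
          c = np.cov(np.vstack([a.ravel(), b.ravel()]))
          vals, vecs = np.linalg.eigh(c)
          v = np.abs(vecs[:, np.argmax(vals)])
          return v / v.sum()

      def pca_dwt_fusion(img_a, img_b, wavelet='db2', level=2):
          ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
          cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
          w = pca_weights(ca[0], cb[0])
          fused = [w[0] * ca[0] + w[1] * cb[0]]        # PCA-weighted approximation
          for da, db in zip(ca[1:], cb[1:]):           # max-abs rule on details
              fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                                 for x, y in zip(da, db)))
          return pywt.waverec2(fused, wavelet)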

  17. Multi-PSF fusion in image restoration of range-gated systems

    NASA Astrophysics Data System (ADS)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui

    2018-07-01

    For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere involves many factors, such as backscattering, background radiation, the diffraction limit and the vibration of the platform. On one hand, because of the difficulty of constructing models for all these factors, kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in the images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach that combines a physical model with image features. With a fusion strategy using a GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. To address the problem that ground-truth images are difficult to obtain, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates in the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
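
    The RL stage of such a pipeline can be sketched in a few lines of Python; the Gaussian kernel below is only a hypothetical stand-in for the GCRF-fused PSF, and the iteration count is an assumption.

      import numpy as np
      from scipy.signal import fftconvolve

      def gaussian_psf(size=15, sigma=2.0):
          ax = np.arange(size) - size // 2
          g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
          psf = np.outer(g, g)
          return psf / psf.sum()

      def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
          """Classic RL: x <- x * K^T(y / K(x)), with K the blur operator."""
          estimate = np.full_like(observed, observed.mean(), dtype=np.float64)
          psf_flip = psf[::-1, ::-1]                    # adjoint of convolution
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode='same')
              estimate *= fftconvolve(observed / (blurred + eps), psf_flip,
                                      mode='same')
          return estimate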

  18. Multispectral image fusion based on fractal features

    NASA Astrophysics Data System (ADS)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance, etc. However, different imagery sensors depend on diverse imaging mechanisms and work within diverse ranges of the spectrum. They also perform diverse functions and have diverse operating requirements. So it is impractical to accomplish the task of detection or recognition with a single imagery sensor under conditions of different circumstances, different backgrounds and different targets. Fortunately, the multi-sensor image fusion technique emerged as an important route to solve this problem, and image fusion has become one of the main technical routines used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so how to preserve the useful information to the utmost is always a central issue of image fusion. That is to say, before designing a fusion scheme it should be considered how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems actually amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at the recognition of battlefield targets against complicated backgrounds. According to this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models, which imitate natural objects well. Special fusion operators are employed during the fusion of areas that contain man-made targets, so that useful information is preserved and target features are accentuated. The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis. The wavelet decomposition of an image can be considered as a special pyramid decomposition. According to wavelet decomposition theory, the approximation of an image $f(x, y)$ at resolution $2^{j+1}$ equals its orthogonal projection onto the corresponding approximation space, that is, $A_{j+1}f = A_j f + D_j^1 f + D_j^2 f + D_j^3 f$, where $A_j f$ is the low-frequency approximation of $f(x, y)$ at resolution $2^j$ and $D_j^1 f$, $D_j^2 f$, $D_j^3 f$ represent the vertical, horizontal and diagonal wavelet coefficients, respectively, at resolution $2^j$. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions, respectively. $A_j f$, $D_j^1 f$, $D_j^2 f$ and $D_j^3 f$ are independent and can be considered as images. In this paper J is set to 1, so the source image is decomposed to produce the son-images Af, D1f, D2f and D3f. To solve the problem of detecting artifacts, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2 and diagonal fractal dimension FD3 are proposed in this paper. The vertical fractal dimension FD1 corresponds to the vertical wavelet coefficient image after the wavelet decomposition of the source image, the horizontal fractal dimension FD2 corresponds to the horizontal wavelet coefficients, and the diagonal fractal dimension FD3 to the diagonal one. These definitions enrich the description of the source images and are therefore helpful in classifying the targets. The detection of artifacts in the decomposed images is then a problem of pattern recognition in 4-D space.
The combination of FD0 (the fractal dimension of the approximation image Af) with FD1, FD2 and FD3 makes a vector (FD0, FD1, FD2, FD3), which can be considered as a united feature vector of the studied image. All parts of the images are classified in the 4-D pattern space created by the vector (FD0, FD1, FD2, FD3), so that areas containing man-made objects can be detected. This detection can be considered as a coarse recognition; the significant areas in each son-image are then marked so that they can be treated with special rules. Various fusion rules have been developed, each aimed at a particular problem. These rules have different performance, so it is very important to select an appropriate rule when designing an image fusion system. Recent research indicates that the rule should be adjustable so that it is always suitable for accentuating target features and preserving pixels of useful information. In this paper, in view of the consideration that fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image whose fractal dimension is minimal are saved as the pixels of the fused image; otherwise, a weighted average operator is adopted to avoid loss of information. The main idea of this rule is to keep the pixels with low fractal dimensions, so it can be named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment is applied to the two fusion results. The criteria of entropy, cross-entropy, peak signal-to-noise ratio (PSNR) and standard gray-scale difference are defined in this paper. In contrast to the idea of constructing an ideal image as the assessment reference, the source images are selected as the reference in this paper. This assessment can be regarded as calculating how much the image quality has been enhanced and how much the quantity of information has increased when the fused image is compared with the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast. It is shown that this algorithm preserves the features of military targets well, because battlefield targets are mostly man-made objects whose images generally differ from fractal models in an obvious way. Furthermore, the fractal features are not sensitive to the imaging conditions or the movement of targets, so this fractal-based algorithm may be very practical.
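
    A minimal Python sketch of the MFD rule follows. The per-pixel fractal dimension estimator is a crude differential box-counting stand-in, and the window scales, FD threshold and equal-weight average for natural regions are assumptions for illustration only.

      import numpy as np
      from scipy.ndimage import maximum_filter, minimum_filter

      def local_fd(img, scales=(2, 3, 4, 6, 8)):
          """Crude per-pixel fractal dimension via differential box counting."""
          img = img.astype(np.float64)
          logs, logn = [], []
          for s in scales:
              span = maximum_filter(img, s) - minimum_filter(img, s)
              logn.append(np.log(span / s + 1.0))  # boxes covering the surface
              logs.append(np.log(1.0 / s))
          logs, logn = np.array(logs), np.stack(logn)
          # Per-pixel least-squares slope of log N(s) against log(1/s).
          num = ((logs - logs.mean())[:, None, None] * (logn - logn.mean(0))).sum(0)
          return num / ((logs - logs.mean()) ** 2).sum()

      def mfd_fuse(img_a, img_b, fd_threshold=2.6):
          """Keep the lower-FD (more man-made) pixel; average elsewhere."""
          fd_a, fd_b = local_fd(img_a), local_fd(img_b)
          man_made = np.minimum(fd_a, fd_b) < fd_threshold
          chosen = np.where(fd_a <= fd_b, img_a, img_b)
          return np.where(man_made, chosen, 0.5 * (img_a + img_b))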

  19. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without significantly degrading the visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, further increasing costs. We propose a low-cost, scalable, flexible and mobile solution that allows building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density on arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate the geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a small pixel overlap between them. We present results with eight pico-projectors, with 7-lumen LEDs and the DLP 0.17 HVGA chipset.

  20. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

    In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem: they all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve this, accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.

  1. Infrared small target detection technology based on OpenCV

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Huang, Zhijian

    2013-05-01

    Accurate and fast detection of dim infrared (IR) targets is of great importance for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described. These algorithms are the traditional two-frame difference method, an improved three-frame difference method, a fused background-estimation and frame-difference method, and background construction with the neighborhood mean method. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced. Three kinds of tracking algorithms are integrated in this software. To explain the software clearly, its framework and functions are described in this paper. Finally, experiments are performed on real-life IR images. The whole algorithm implementation process and results are analyzed, and the target detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfactory detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for real-time detection.

  2. Infrared small target detection technology based on OpenCV

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Huang, Zhijian

    2013-09-01

    Accurate and fast detection of dim infrared (IR) targets is of great importance for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described. These algorithms are the traditional two-frame difference method, an improved three-frame difference method, a fused background-estimation and frame-difference method, and background construction with the neighborhood mean method. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced. Three kinds of tracking algorithms are integrated in this software. To explain the software clearly, its framework and functions are described in this paper. Finally, experiments are performed on real-life IR images. The whole algorithm implementation process and results are analyzed, and the target detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfactory detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for real-time detection.
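
    The three-frame difference step can be sketched with OpenCV in Python as follows; the threshold, morphology settings and the input file name are illustrative assumptions.

      import cv2
      import numpy as np

      def three_frame_difference(f0, f1, f2, thresh=25):
          """AND the two difference masks around the middle frame."""
          _, m01 = cv2.threshold(cv2.absdiff(f1, f0), thresh, 255,
                                 cv2.THRESH_BINARY)
          _, m12 = cv2.threshold(cv2.absdiff(f2, f1), thresh, 255,
                                 cv2.THRESH_BINARY)
          mask = cv2.bitwise_and(m01, m12)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
          return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # prune noise

      cap = cv2.VideoCapture('ir_sequence.avi')    # hypothetical IR sequence
      frames = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
          if len(frames) == 3:
              target_mask = three_frame_difference(*frames)
              frames.pop(0)                        # slide the 3-frame window
      cap.release()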

  3. [Effect of image fusion technology on the planning target and dosimetry before and after radioactive seed implantation].

    PubMed

    Jiang, Y L; Yu, J P; Sun, H T; Guo, F X; Ji, Z; Fan, J H; Zhang, L J; Li, X; Wang, J J

    2017-08-01

    Objective: To compare post-implant target volumes and dosimetric evaluations with the pre-plan for the gross tumor volume (GTV), delineated either by CT image fusion or by manual delineation, in CT-guided radioactive seed implantation. Methods: A total of 10 patients treated with CT-guided (125)I seed implantation from March 2016 to April 2016 at Peking University Third Hospital were analyzed. All patients underwent pre-operative CT simulation, pre-operative planning, seed implantation, CT scanning after seed implantation and dosimetric evaluation of the GTV. In every patient, the post-implant target volume was delineated by both methods, forming two groups. Group 1: the pre-implantation simulation and post-operative CT images were fused, and the GTV contours were generated automatically by the brachytherapy treatment planning system. Group 2: the GTV was contoured on the post-operative CT images manually by three senior radiation oncologists independently, and the average of the three datasets was used. Statistical analyses were performed using SPSS software, version 3.2.0. The paired t-test was used to compare the target volumes and D(90) parameters of the two modalities. Results: In Group 1, the average post-operative GTV was 12-167 (73±56) cm(3) and D(90) was 101-153 (142±19) Gy. In Group 2, they were 14-186 (80±58) cm(3) and 96-146 (122±16) Gy, respectively. In Group 1, there was no statistically significant difference between pre-operation and post-operation in either target volume or D(90); the D(90) was slightly lower than in the pre-plan, but the difference was not statistically significant (P=0.142). In Group 2, there was a significant difference in the GTV between the pre-operation and post-operation groups (P=0.002), and the difference in D(90) was similarly significant (P<0.01). Conclusion: Delineating the post-implant GTV by fusing the pre-implantation simulation and post-operative CT images, with the GTV contours generated automatically by the brachytherapy treatment planning system, appears to offer better accuracy, reproducibility and convenience than manual delineation of the target volume, by minimizing interference from operator factors and metal artifacts. Further work and more cases are required in the future.

  4. Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

    NASA Astrophysics Data System (ADS)

    Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei

    2018-06-01

    Infrared and visible light image fusion has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require registration before fusion because two cameras are used, and the application effect of registration technology has yet to improve. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam splitter prism, the coaxial light incident from the same lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment covering the entire process of the optical system, signal acquisition and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.

  5. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the advantages of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then the color histogram and linear gradient histogram of each object are calculated. The earth mover's distance (EMD) statistical operator is used to measure the color distance and edge-line feature distance between corresponding objects in different periods, and object heterogeneity is constructed by combining the color feature distance and the edge-line distance with adaptive weighting. Finally, change detection results for the image patches are analyzed with the curvature histogram. The experimental results show that the method can fully fuse the color and edge-line features, thus improving the accuracy of change detection.

  6. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
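
    The wavelet-based step can be sketched in Python as below: the Thematic Mapper approximation is kept to preserve large homogeneous regions, while a max-absolute detail rule lets SAR texture pass through. The wavelet, level and fusion rules are assumptions, not the authors' exact settings.

      import numpy as np
      import pywt

      def wavelet_fusion(tm_band, sar, wavelet='haar', level=3):
          ct = pywt.wavedec2(tm_band.astype(np.float64), wavelet, level=level)
          cs = pywt.wavedec2(sar.astype(np.float64), wavelet, level=level)
          fused = [ct[0]]                           # keep the TM approximation
          for dt, ds in zip(ct[1:], cs[1:]):        # texture detail, mostly SAR
              fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                 for a, b in zip(dt, ds)))
          return pywt.waverec2(fused, wavelet)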

  7. Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng

    2017-07-01

    A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 is more rectangular than that of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse fused image has extremely similar entropy values to the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify green land in order to compare the effects of the self-fused SPOT-6 image and the inter-fused SPOT-6/GF-1 image based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.

  8. A multimodal image guiding system for Navigated Ultrasound Bronchoscopy (EBUS): A human feasibility study

    PubMed Central

    Hofstad, Erlend Fagertun; Amundsen, Tore; Langø, Thomas; Bakeng, Janne Beate Lervik; Leira, Håkon Olav

    2017-01-01

    Background Endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) is the endoscopic method of choice for confirming lung cancer metastasis to mediastinal lymph nodes. Precision is crucial for correct staging and clinical decision-making. Navigation and multimodal imaging can potentially improve EBUS-TBNA efficiency. Aims To demonstrate the feasibility of a multimodal image guiding system using electromagnetic navigation for ultrasound bronchoscopy in humans. Methods Four patients referred for lung cancer diagnosis and staging with EBUS-TBNA were enrolled in the study. Target lymph nodes were predefined from the preoperative computed tomography (CT) images. A prototype convex probe ultrasound bronchoscope with an attached sensor for position tracking was used for EBUS-TBNA. Electromagnetic tracking of the ultrasound bronchoscope and ultrasound images allowed fusion of preoperative CT and intraoperative ultrasound in the navigation software. Navigated EBUS-TBNA was used to guide target lymph node localization and sampling. Navigation system accuracy was calculated, measured by the deviation between lymph node position in ultrasound and CT in three planes. Procedure time, diagnostic yield and adverse events were recorded. Results Preoperative CT and real-time ultrasound images were successfully fused and displayed in the navigation software during the procedures. Overall navigation accuracy (11 measurements) was 10.0 ± 3.8 mm, maximum 17.6 mm, minimum 4.5 mm. An adequate sample was obtained in 6/6 (100%) of targeted lymph nodes. No adverse events were registered. Conclusions Electromagnetic navigated EBUS-TBNA was feasible, safe and easy in this human pilot study. The clinical usefulness was clearly demonstrated. Fusion of real-time ultrasound, preoperative CT and electromagnetic navigational bronchoscopy provided controlled guidance to the level of the target, an intraoperative overview and procedure documentation. PMID:28182758

  9. [Experience of Fusion image guided system in endonasal endoscopic surgery].

    PubMed

    Wen, Jingying; Zhen, Hongtao; Shi, Lili; Cao, Pingping; Cui, Yonghua

    2015-08-01

    To review endonasal endoscopic surgeries aided by the Fusion image guided system, and to explore the application value of the Fusion image guided system in endonasal endoscopic surgery. Retrospective study. Sixty cases of endonasal endoscopic surgery aided by the Fusion image guided system were analysed, including chronic rhinosinusitis with polyps (n = 10), fungal sinusitis (n = 5), endoscopic optic nerve decompression (n = 16), inverted papilloma of the paranasal sinus (n = 9), ossifying fibroma of the sphenoid bone (n = 1), malignancy of the paranasal sinus (n = 9), cerebrospinal fluid leak (n = 5), hemangioma of the orbital apex (n = 2) and orbital reconstruction (n = 3). All sixty endonasal endoscopic surgeries were completed successfully without any complications. The Fusion image guided system can help to identify the ostium of the paranasal sinus, the lamina papyracea and the skull base. Fused CT-CTA or MR-MRA images can help to localize the optic nerve or internal carotid artery, and fused CT-MR images can help to determine the extent of the tumor. Preoperative preparation for the image guided system took (7.13 ± 1.358) minutes, and the surgical navigation accuracy reached less than 1 mm once operators were proficient. No device localization problems occurred due to blocking or loosening of the head set. The Fusion image guided system makes endonasal endoscopic surgery truly minimally invasive and precise: it requires little preoperative preparation time, has high surgical navigation accuracy, improves surgical safety and reduces surgical complications.

  10. BP fusion model for the detection of oil spills on the sea by remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin

    2003-06-01

    Oil spills are a very serious form of marine pollution in many countries. In order to detect and identify oil spills on the sea by remote sensing, scientists have to conduct research on remote sensing images. For the detection of oil spills on the sea, edge detection is an important technology in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements of edge detection for oil spill images on the sea, namely computation time and detection accuracy, we developed a fusion model. The model employs a BP neural net to fuse the detection results of simple operators. The reason we selected a BP neural net as the fusion technology is that the relation between the edge gray levels produced by simple operators and the image's true edge gray level is nonlinear, and a BP neural net is good at solving such nonlinear identification problems. Therefore, in this paper we trained a BP neural net on some oil spill images, then applied the BP fusion model to the edge detection of other oil spill images and obtained a good result. In this paper the detection results of some gradient operators and the Laplacian operator are also compared with the result of the BP fusion model to analyze the fusion effect. Finally, the paper points out that the fusion model has higher accuracy and higher speed in processing the edge detection of oil spill images.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svetlana Shasharina

    The goal of the Center for Technology for Advanced Scientific Component Software (TASCS) is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into those applications, testing the tools in the applications, and modifying the tools to be more usable.

  12. A research on radiation calibration of high dynamic range based on the dual channel CMOS

    NASA Astrophysics Data System (ADS)

    Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua

    2017-10-01

    A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range of the image through fusion of the high-gain and low-gain channel images of the same frame. In the dual-channel fusion process, each pixel's radiation response coefficients from the two channels of the same frame are adopted to calculate the gray level of that pixel in the HDR image. Because the radiation response coefficients play a crucial role in image fusion, an effective method is needed to acquire these parameters. This article studies radiation calibration for high dynamic range imaging based on a dual-channel CMOS and designs an experiment to calibrate the radiation response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS, verifying the correctness and feasibility of the method presented in this paper.
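
    The channel fusion can be sketched in Python as below: each channel is linearized by its calibrated radiation response (here a hypothetical gain/offset pair), and the high-gain channel is preferred except where it saturates. All numeric values are illustrative assumptions standing in for calibration results.

      import numpy as np

      def fuse_dual_gain(high, low, gain_hi=8.0, gain_lo=1.0,
                         offset_hi=12.0, offset_lo=3.0, sat=4000):
          """Map both channels to a common radiance scale and merge."""
          rad_hi = (high.astype(np.float64) - offset_hi) / gain_hi
          rad_lo = (low.astype(np.float64) - offset_lo) / gain_lo
          use_hi = high < sat           # high gain clips on bright pixels
          return np.where(use_hi, rad_hi, rad_lo)

      # Hypothetical 12-bit raw frames from the two channels of one exposure.
      rng = np.random.default_rng(1)
      scene = rng.uniform(0, 3000, size=(480, 640))
      high = np.clip(8.0 * scene + 12.0, 0, 4095)
      low = np.clip(1.0 * scene + 3.0, 0, 4095)
      hdr = fuse_dual_gain(high, low)   # gray levels now span the HDR range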

  13. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To address the difficulty of optimizing both scene details and visual effects in high-dynamic-range image synthesis, this paper proposes a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space, weighting and blending the luminance and chrominance (color-difference) components separately. The experimental results show that the method can retain the color of the fused image while balancing the details of the bright and dark regions of the high-dynamic image.
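
    A minimal Python sketch of the idea is given below, with OpenCV's YCrCb conversion standing in for YCbCr (the channel order differs but the luminance channel is the same) and a Gaussian well-exposedness weight as an assumed weighting function.

      import cv2
      import numpy as np

      def well_exposedness(y, mid=0.5, sigma=0.2):
          """Favor pixels whose luminance sits near mid-gray."""
          return np.exp(-((y - mid) ** 2) / (2 * sigma ** 2))

      def fuse_exposures_ycbcr(bgr_images):
          ycc = [cv2.cvtColor(im, cv2.COLOR_BGR2YCrCb).astype(np.float64) / 255.0
                 for im in bgr_images]
          # One weight map per exposure, computed from the Y channel (index 0),
          # then normalized so the weights sum to one at every pixel.
          weights = np.stack([well_exposedness(im[..., 0]) for im in ycc])
          weights /= weights.sum(axis=0, keepdims=True) + 1e-12
          fused = sum(w[..., None] * im for w, im in zip(weights, ycc))
          fused = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
          return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)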

  14. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    NASA Astrophysics Data System (ADS)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively uses grouped sparse coefficients to merge images. The dictionary learning algorithm requires no prior knowledge about any group structure of the dictionary: by using the characteristics of the dictionary in expressing the signal, it can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgments on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.

  15. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information into a grayscale image. This digital technique is used in two applications: (a) focus measurement during the autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. Further advantages of the MGC method are its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a no-reference image quality metric. The proposed fusion method yields a high-quality image regardless of faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
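
    A minimal Python sketch of the MGC conversion and the derived focus score follows; combining the per-channel gradient moduli with a maximum, and using Sobel derivatives, are assumptions made for the sketch.

      import numpy as np
      from scipy.ndimage import sobel

      def mgc(rgb):
          """Collapse per-channel gradient moduli into one grayscale map."""
          channels = rgb.astype(np.float64).transpose(2, 0, 1)
          moduli = [np.hypot(sobel(c, axis=0), sobel(c, axis=1))
                    for c in channels]
          return np.max(moduli, axis=0)

      def focus_measure(rgb):
          """Scalar focus score: mean MGC response over the field of view."""
          return mgc(rgb).mean()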

  16. Direct fusion of geostationary meteorological satellite visible and infrared images based on thermal physical properties.

    PubMed

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-05

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.

  17. Direct Fusion of Geostationary Meteorological Satellite Visible and Infrared Images Based on Thermal Physical Properties

    PubMed Central

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-01

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749
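
    The regional energy constraint can be sketched in Python as below: each block of the VIS image is rescaled so its mean matches the co-registered IR block, standing in (very loosely) for the Stefan-Boltzmann modulation; the block size and the plain ratio correction are assumptions, not the paper's formulation.

      import numpy as np

      def regional_energy_correction(vis, ir, block=16, eps=1e-6):
          """Match VIS block means to IR while keeping VIS structure."""
          vis = vis.astype(np.float64)
          ir = ir.astype(np.float64)
          out = np.empty_like(vis)
          for i in range(0, vis.shape[0], block):
              for j in range(0, vis.shape[1], block):
                  v = vis[i:i + block, j:j + block]
                  t = ir[i:i + block, j:j + block]
                  # Scale so the block shows the IR level of regional energy.
                  out[i:i + block, j:j + block] = v * (t.mean() / (v.mean() + eps))
          return out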

  18. Automatic and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new imaging technologies such as optical imaging used to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), in 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning manner. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance for improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues in all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.

  19. Multitask saliency detection model for synthetic aperture radar (SAR) image and its application in SAR and optical image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Chunhui; Zhang, Duona; Zhao, Xintao

    2018-03-01

    Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for SAR images. We extract four features of the SAR image, namely intensity, orientation, uniqueness and global contrast, as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.

  20. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary patterns) and SWLD (simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. First, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Second, LBP is applied to extract the orientation of the fused face edges. Third, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. Hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.

  1. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating an infrared image with a visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factor. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images, and the inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
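
    The DCT-domain blend can be sketched in Python as below; a coarse grid search over the weighting factor, scored by fused-image entropy, stands in for the paper's particle swarm optimization.

      import numpy as np
      from scipy.fft import dctn, idctn

      def fuse_dct(visible, infrared, w):
          """Blend the 2-D DCT coefficients with weight w and invert."""
          c_vis = dctn(visible.astype(np.float64), norm='ortho')
          c_ir = dctn(infrared.astype(np.float64), norm='ortho')
          return idctn(w * c_vis + (1.0 - w) * c_ir, norm='ortho')

      def entropy(img, bins=256):
          hist, _ = np.histogram(img, bins=bins)
          p = hist[hist > 0] / hist.sum()
          return -(p * np.log2(p)).sum()

      def best_weight(visible, infrared,
                      candidates=np.linspace(0.1, 0.9, 17)):
          """Grid-search stand-in for PSO over the weighting factor."""
          return max(candidates,
                     key=lambda w: entropy(fuse_dct(visible, infrared, w)))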

  2. Entropy-functional-based online adaptive decision fusion framework with application to wildfire detection in video.

    PubMed

    Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis

    2012-05-01

    In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
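
    A minimal Python sketch of the decision-fusion loop follows; the multiplicative weight update is a simplified stand-in for the paper's entropic projections, and the sub-algorithm confidences and oracle labels are hypothetical.

      import numpy as np

      def fuse(decisions, weights):
          """Linear combination of sub-algorithm confidences in [-1, 1]."""
          return float(np.dot(weights, decisions))

      def update_weights(weights, decisions, oracle, lr=0.2):
          """Multiplicative update toward agreeing sub-algorithms."""
          error = oracle - fuse(decisions, weights)
          w = weights * np.exp(lr * error * decisions)
          return w / w.sum()             # stay on the probability simplex

      weights = np.full(3, 1.0 / 3.0)    # three hypothetical sub-algorithms
      stream = [(np.array([0.9, -0.2, 0.4]), 1.0),     # (confidences, oracle)
                (np.array([0.1, -0.8, -0.3]), -1.0)]
      for decisions, oracle in stream:   # oracle: the guard's feedback
          print("fused decision:", fuse(decisions, weights))
          weights = update_weights(weights, decisions, oracle)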

  3. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, fusion of near infrared and visible face recognition has become an important direction in the field of unconstrained face recognition research. To extract the discriminative complementary features between near infrared and visible images, this paper proposes a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. First, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Second, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to their respective classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. Visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.

  4. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from background through different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the conventional weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the regional signal intensity ratio of the source images is taken as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the image fusion effect is closely related to the threshold set in this module. In contrast to the commonly used experimental approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also across a large number of images.

  5. TU-AB-202-11: Tumor Segmentation by Fusion of Multi-Tracer PET Images Using Copula Based Statistical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapuyade-Lahorgue, J; Ruan, S; Li, H

    Purpose: Multi-tracer PET imaging is receiving increasing attention in radiotherapy because it provides additional tumor volume information, such as glucose uptake and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from the two tracers FDG and FMISO in PET images. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. As a strong correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which can represent the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for monomodal segmentations based on the individual FDG and FMISO PET images. In addition, high correlation coefficients (0.75 to 0.91) for the Gaussian copula across all five testing patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucose consumption are present at the same time. Introducing copulas to model the dependency between the two tracers simultaneously takes into account information from both tracers and deals with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and Archimedean copulas, and will address the partial volume effect by considering dependency between neighboring voxels.
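
    A minimal sketch of the copula idea, assuming gamma marginals tied by a Gaussian copula; the hidden Markov field and the parameter estimation used in the paper are omitted, and all parameter names are mine.

        # Hedged sketch: joint log-likelihood of paired FDG/FMISO intensities under
        # gamma marginals coupled by a Gaussian copula with correlation rho.
        import numpy as np
        from scipy.stats import gamma, norm

        def gaussian_copula_loglik(x_fdg, x_fmiso, g1, g2, rho):
            """g1, g2: (shape, scale) of each gamma marginal; rho: copula correlation."""
            u = np.clip(gamma.cdf(x_fdg, a=g1[0], scale=g1[1]), 1e-9, 1 - 1e-9)
            v = np.clip(gamma.cdf(x_fmiso, a=g2[0], scale=g2[1]), 1e-9, 1 - 1e-9)
            z1, z2 = norm.ppf(u), norm.ppf(v)
            # log density of the bivariate Gaussian copula
            log_c = (-0.5 * np.log(1 - rho ** 2)
                     - (rho ** 2 * (z1 ** 2 + z2 ** 2) - 2 * rho * z1 * z2)
                     / (2 * (1 - rho ** 2)))
            return np.sum(log_c
                          + gamma.logpdf(x_fdg, a=g1[0], scale=g1[1])
                          + gamma.logpdf(x_fmiso, a=g2[0], scale=g2[1]))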

  6. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

    A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, visual features are first extracted to build the input hypercomplex matrix. Second, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images: the amplitude spectrum of the input hypercomplex matrix is convolved with low-pass Gaussian kernels of different scales, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal from the original phase and the amplitude spectrum filtered at the scale selected by minimizing saliency-map entropy. Third, the salient regions are fused with adaptive weighting fusion rules, the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS), and the fused image is obtained. Experimental results show that the presented algorithm preserves the spectral information of the visible image well and effectively captures the thermal target information of the infrared image at different scales.
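
    A simplified single-channel sketch of the saliency step: the amplitude spectrum is smoothed at several scales, the image is reconstructed with the original phase, and the scale minimizing saliency-map entropy is kept. The hypercomplex (multi-channel) transform of the paper is reduced here to one grayscale channel, and the scale grid is an assumption.

        # Hedged sketch of frequency-domain saliency with entropy-based scale selection.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def spectral_saliency(img, scales=(1, 2, 4, 8, 16)):
            F = np.fft.fft2(img)
            amplitude, phase = np.abs(F), np.angle(F)
            best_map, best_entropy = None, np.inf
            for s in scales:
                smoothed = gaussian_filter(amplitude, sigma=s)   # spectrum scale-space
                sal = np.abs(np.fft.ifft2(smoothed * np.exp(1j * phase))) ** 2
                sal = gaussian_filter(sal, sigma=2)              # smooth saliency map
                p, _ = np.histogram(sal, bins=64, density=True)
                p = p[p > 0]
                p = p / p.sum()
                entropy = -np.sum(p * np.log2(p))                # pick min-entropy scale
                if entropy < best_entropy:
                    best_map, best_entropy = sal, entropy
            return best_map / best_map.max()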

  7. PyDBS: an automated image processing workflow for deep brain stimulation surgery.

    PubMed

    D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre

    2015-02-01

    Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation, based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92% of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.

  8. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts to automate ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it expands the field of view, decreases the impact of noise and artifacts, and enhances the signal-to-noise ratio of the echo images. The proposed algorithm weights the information of all overlapping images using an integrated feature obtained from a combination of principal component analysis and the discrete wavelet transform. For evaluation, the results of several well-known techniques are compared with those of the proposed method, and different metrics are used to evaluate its performance. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
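
    A hedged sketch of one way to combine PCA weighting with a DWT fusion rule for two registered, overlapping echo images; the weighting scheme and the detail rule are illustrative, not the paper's exact formulation.

        # Sketch of a hybrid PCA + DWT pixel-based fusion of two registered images.
        import numpy as np
        import pywt

        def pca_weights(img_a, img_b):
            """Weights from the leading eigenvector of the 2x2 covariance matrix."""
            data = np.stack([img_a.ravel(), img_b.ravel()])
            eigval, eigvec = np.linalg.eigh(np.cov(data))
            v = np.abs(eigvec[:, np.argmax(eigval)])
            return v / v.sum()

        def fuse_pca_dwt(img_a, img_b, wavelet='db2'):
            w = pca_weights(img_a, img_b)
            cA_a, det_a = pywt.dwt2(img_a.astype(float), wavelet)
            cA_b, det_b = pywt.dwt2(img_b.astype(float), wavelet)
            cA = w[0] * cA_a + w[1] * cA_b                    # approximation: PCA weights
            details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                            for da, db in zip(det_a, det_b))  # details: max magnitude
            return pywt.idwt2((cA, details), wavelet)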

  9. Evaluation of the ablation margin of hepatocellular carcinoma using CEUS-CT/MR image fusion in a phantom model and in patients.

    PubMed

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Huang, Qiannan; Zeng, Qingjing; Zheng, Rongqin

    2017-01-19

    To assess the accuracy of contrast-enhanced ultrasound (CEUS)-CT/MR image fusion in evaluating the radiofrequency ablative margin (AM) of hepatocellular carcinoma (HCC) based on a custom-made phantom model and in HCC patients. Twenty-four phantoms were randomly divided into a complete ablation group (n = 6) and an incomplete ablation group (n = 18). After radiofrequency ablation (RFA), the AM was evaluated using ultrasound (US)-CT image fusion, and the results were compared with the AM results that were directly measured in a gross specimen. CEUS-CT/MR image fusion and CT-CT/MR-MR image fusion were used to evaluate the AM in 37 tumors from 33 HCC patients who underwent RFA. The sensitivity, specificity, and accuracy of US-CT image fusion for evaluating AM in the phantom model were 93.8, 85.7 and 91.3%, respectively. The maximal thicknesses of the residual AM were 3.5 ± 2.0 mm and 3.2 ± 2.0 mm in the US-CT image fusion and gross specimen, respectively. No significant difference was observed between the US-CT image fusion and direct measurements of the AM of HCC. In the clinical study, the success rate of the AM evaluation was 100% for both CEUS-CT/MR and CT-CT/MR-MR, and the duration was 8.5 ± 2.8 min (range: 4-12 min) and 13.5 ± 4.5 min (range: 8-16 min) for CEUS-CT/MR and CT-CT/MR-MR, respectively. The sensitivity, specificity, and accuracy of CEUS-CT/MR imaging for evaluating the AM were 100.0, 80.0, and 90.0%, respectively. A phantom model composed of carrageenan gel and additives was suitable for the evaluation of HCC AM. CEUS-CT/MR image fusion can be used to evaluate HCC AM with high accuracy.

  10. Nonsubsampled rotated complex wavelet transform (NSRCxWT) for medical image fusion related to clinical aspects in neurocysticercosis.

    PubMed

    Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil

    2017-02-01

    Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (the definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) for diagnostic analysis of the lesions and post-treatment review of NCC. The MMIF presented here combines CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract the complementary and edge-related features. These features are then combined into a composite spectral plane using average and maximum-value-selection fusion rules. The inverse transformation of this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot-study data sets of patients infected with NCC. The quality of the fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure. The fused images are also evaluated for visual quality through subjective analysis by three expert radiologists. The experimental results on 43 image data sets from 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be part of a computer-aided detection and diagnosis (CADD) system that assists radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Joint interpretation of geophysical data using Image Fusion techniques

    NASA Astrophysics Data System (ADS)

    Karamitrou, A.; Tsokas, G.; Petrou, M.

    2013-12-01

    Joint interpretation of geophysical data produced by different methods is a challenging area of research in a wide range of applications. In this work we apply several image fusion approaches to combine maps of electrical resistivity, electromagnetic conductivity, vertical gradient of the magnetic field, magnetic susceptibility, and ground penetrating radar reflections, in order to detect archaeological relics. We utilize data gathered by Arkansas University with the support of the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP-CS1263). The area of investigation is Army City, situated in Riley County, Kansas, USA. The depth of the relics is estimated at about 30 cm from the surface, yet surface indications of their existence are limited. We initially register the images from the different methods to correct for random offsets caused by the use of hand-held devices during the measurement procedure. Next, we apply four different image fusion approaches to create combined images: fusion with mean values, wavelet decomposition, the curvelet transform, and the curvelet transform enhancing the images along specific angles. We create seven pairings of the available geophysical datasets, such that every pair includes at least one high-resolution method (resistivity or magnetic gradiometry). Our results indicate that in almost every case the mean-value method produces satisfactory fused images that incorporate the majority of the features of the initial images; however, the contrast of the final image is reduced, and in some cases the averaging process nearly eliminated features that are faint in the original images. Wavelet-based fusion also produces good results, providing additional control in selecting the feature wavelength. Curvelet-based fusion proved the most effective method in most cases. The ability of the curvelet domain to unfold the image in terms of space, wavenumber, and orientation provides important advantages over the other methods by allowing the incorporation of a-priori information about the orientation of the potential targets.

  12. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    PubMed Central

    Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches in the literature evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938

  13. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study.

    PubMed

    Sappa, Angel D; Carvajal, Juan A; Aguilera, Cristhian A; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X

    2016-06-10

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches in the literature evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).

  14. Single-Scale Fusion: An Effective Approach to Merging Images.

    PubMed

    Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C

    2017-01-01

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy yields results that are highly competitive with traditional MSF approaches.

  15. Added Value of Contrast-Enhanced Ultrasound on Biopsies of Focal Hepatic Lesions Invisible on Fusion Imaging Guidance.

    PubMed

    Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun

    2017-01-01

    To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and to evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5-1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making.

  16. Dual-axis reflective continuous-wave terahertz confocal scanning polarization imaging and image fusion

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Li, Qi

    2017-01-01

    A dual-axis reflective continuous-wave terahertz (THz) confocal scanning polarization imaging system was adopted. THz polarization imaging experiments were carried out on film gaps and on the metallic letters "BeLLE." The imaging results indicate that THz polarization imaging is sensitive to tilted or wide flat gaps, suggesting that it can detect edges and stains. An image fusion method based on digital image processing is proposed to improve the imaging quality of the metallic letters "BeLLE." Both objective and subjective evaluations show that this method can improve the imaging quality.

  17. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on combining multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosting in the image sequences. We introduce the zero-mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The ZNCC is computed at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images that can be displayed directly on conventional devices, with details preserved and ghosting reduced. Experimental results show that the proposed method generates high-quality images with fewer ghost artifacts and better visual quality than previous approaches.
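
    The super-pixel ZNCC test can be sketched as follows (requires scikit-image >= 0.19 for the channel_axis argument). The segment count and rejection threshold are assumptions, and the paper's soft weight-map adjustment is simplified here to a binary mask.

        # Hedged sketch of super-pixel ZNCC ghost detection against a reference exposure.
        import numpy as np
        from skimage.segmentation import slic

        def ghost_mask(exposure, reference, n_segments=800, tau=0.7):
            labels = slic(reference, n_segments=n_segments, channel_axis=None)
            mask = np.ones_like(reference, dtype=bool)
            for lab in np.unique(labels):
                region = labels == lab
                a = exposure[region].astype(float)
                b = reference[region].astype(float)
                a -= a.mean()
                b -= b.mean()
                denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
                zncc = (a * b).sum() / denom    # invariant to exposure gain/offset
                if zncc < tau:
                    mask[region] = False        # exclude ghosted super-pixels
            return mask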

  18. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.

  19. Contrast-Enhanced Ultrasound (CEUS) and Quantitative Perfusion Analysis in Patients with Suspicion for Prostate Cancer.

    PubMed

    Maxeiner, Andreas; Fischer, Thomas; Schwabe, Julia; Baur, Alexander Daniel Jacques; Stephan, Carsten; Peters, Robert; Slowinski, Torsten; von Laffert, Maximilian; Marticorena Garcia, Stephan Rodrigo; Hamm, Bernd; Jung, Ernst-Michael

    2018-06-06

    The aim of this study was to investigate contrast-enhanced ultrasound (CEUS) parameters acquired by software during magnetic resonance imaging (MRI)/US fusion-guided biopsy for prostate cancer (PCa) detection and discrimination. From 2012 to 2015, 158 of 165 men with suspicion for PCa and at least one negative prostate biopsy were included and consecutively underwent multi-parametric 3-Tesla MRI and MRI/US fusion-guided biopsy. CEUS was conducted during biopsy with an intravenous bolus of 2.4 mL of SonoVue® (Bracco, Milan, Italy). The CEUS clips were then analyzed using quantitative perfusion analysis software (VueBox, Bracco). The area of strongest enhancement within the MRI-localized region was investigated, and all available parameters from the quantification toolbox were collected and analyzed for PCa; further differentiation was based on the histopathological results. The overall detection rate was 74 PCa cases (47%) among the 158 included patients. Of these 74 cases, 49 (66%) were graded Gleason ≥ 3+4=7 (ISUP ≥ 2). Across all quantitative perfusion parameters, the best results for cancer detection were obtained with rise time (p = 0.026) and time to peak (p = 0.037). Within the subgroup analysis (> vs. ≤ 3+4=7a (ISUP 2)), peak enhancement (p = 0.012), wash-in rate (p = 0.011), wash-out rate (p = 0.007), and wash-in perfusion index (p = 0.014) also reached statistical significance. The quantification of CEUS parameters was able to discriminate PCa aggressiveness during MRI/US fusion-guided prostate biopsy. © Georg Thieme Verlag KG Stuttgart · New York.

  20. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and strong sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the spatial information injected into the multispectral image during the fusion process. The SA-IHS method suppresses the effects of pixels that cause spectral distortions by assigning them weaker weights, avoiding redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm over several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
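
    For reference, a sketch of the additive-IHS baseline that SA-IHS modifies; the weights argument is a hypothetical hook for a spatially adaptive map (classical IHS corresponds to a map of ones), not the authors' exact weighting function.

        # Hedged sketch of additive IHS pansharpening with an optional weight map.
        import numpy as np

        def ihs_fuse(ms, pan, weights=None):
            """ms: (H, W, 3) upsampled multispectral; pan: (H, W) panchromatic."""
            intensity = ms.mean(axis=2)          # I component of the IHS model
            detail = pan - intensity             # spatial detail to inject
            if weights is None:
                weights = np.ones_like(pan)      # classical IHS: full injection
            return ms + (weights * detail)[..., None]

    A spatially adaptive variant in the spirit of SA-IHS would shrink the weight map toward zero wherever the injected detail is large relative to the local intensity, which is where spectral distortion tends to appear.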

  1. The fusion of satellite and UAV data: simulation of high spatial resolution band

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery obtained with sensors mounted on UAV platforms have become more popular in recent years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors are characterized by high spatial and radiometric resolution but rather low spectral resolution; therefore, the application of such imagery is limited and suits only basic land cover classification. To enrich the spectral resolution of imagery acquired with low-cost sensors from low altitudes, the authors propose fusing RGB data obtained from a UAV platform with multispectral satellite imagery. The fusion is based on pansharpening, which integrates the spatial details of a high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is properly estimating the missing spatial details of the multispectral images while preserving their spectral properties. The authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery, i.e., Landsat 8 OLI. To fuse the UAV data with the satellite imagery, panchromatic bands were first simulated from the RGB data as a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analyzed the spatial and spectral accuracies of the processed images.
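
    The pan-simulation step reduces to a per-pixel linear combination of the RGB channels, as in the sketch below; the luminance-style weights shown are a common default and stand in for the authors' actual coefficients. The simulated band would then feed a pansharpening routine such as Gram-Schmidt.

        # Hedged sketch: simulate a panchromatic band from UAV RGB imagery.
        import numpy as np

        def simulate_pan(rgb, weights=(0.299, 0.587, 0.114)):
            """rgb: (H, W, 3) array; returns an (H, W) synthetic pan band."""
            w = np.asarray(weights, dtype=float)
            return np.tensordot(rgb.astype(float), w / w.sum(), axes=([2], [0]))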

  2. Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution

    PubMed Central

    Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang

    2015-01-01

    Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; they generally represent each target patch with an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the representation coefficients estimated in the image domain may not be optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply the proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results compared with counterpart methods using a single-layer dictionary. PMID:26942233

  3. Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang

    2015-10-01

    Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; they generally represent each target patch with an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the representation coefficients estimated in the image domain may not be optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply the proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results compared with counterpart methods using a single-layer dictionary.

  4. Automated management for pavement inspection system (AMPIS)

    NASA Astrophysics Data System (ADS)

    Chung, Hung Chi; Girardello, Roberto; Soeller, Tony; Shinozuka, Masanobu

    2003-08-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process, and analyze road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis for full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly GIS interfaces and efficient visualizations of survey results, not only for transportation engineers managing road survey documentation, data acquisition, analysis, and management, but also for financial officials planning maintenance and repair programs and evaluating the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using GIS concepts and tools for PMS applications will reshape PMS into a new information-technology-based system providing convenient and efficient pavement inspection and management.

  5. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. Initially, an image is divided into a number of blocks, and for each block we compute the phase component of the Fourier transform. The phase component of each block reflects the gray-level variation within the block but retains a large correlation across blocks; hence a singular value decomposition (SVD) technique is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and seed points are selected for segmentation. For each seed point we perform a binary segmentation of the complete MRI, so with all seed points we obtain an equal number of binary images. A parcel-based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma, tuberculous granulomas, and intracranial neoplasms (brain tumors). The proposed technique is validated by comparing its results against seven state-of-the-art techniques using six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
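
    The block-wise phase/SVD step can be sketched as below; the block size and the edgy/smooth threshold are illustrative assumptions.

        # Hedged sketch of per-block FFT-phase SVD values for seed selection.
        import numpy as np

        def block_singular_values(img, block=16):
            h, w = img.shape
            svals = np.zeros((h // block, w // block))
            for i in range(h // block):
                for j in range(w // block):
                    patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
                    phase = np.angle(np.fft.fft2(patch))   # phase of the block's FFT
                    svals[i, j] = np.linalg.svd(phase, compute_uv=False)[0]
            return svals

        # blocks whose leading singular value exceeds a threshold are treated as
        # 'edgy'; seeds for region growing are drawn from the remaining smooth blocks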

  6. Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges

    USGS Publications Warehouse

    Lemeshewsky, George P.; Schowengerdt, Robert A.

    2000-01-01

    Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine the respective transform coefficients of lower-spatial-resolution near-infrared (NIR) and higher-spatial-resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the images to be fused and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison in image pyramid fusion is modified for use with the DWT process. Further improvements are obtained by using a redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher-resolution pan imagery.
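
    A hedged sketch of the shift-invariant variant with the pixel-based maximum selection rule, using the stationary wavelet transform; the local area-based correlation refinement is omitted, and the wavelet and level are assumptions. Image dimensions must be divisible by 2**level for swt2.

        # Sketch: inject pan detail into NIR via a redundant (stationary) DWT.
        import numpy as np
        import pywt

        def swt_sharpen(nir, pan, wavelet='db2', level=2):
            """nir, pan: registered, equal-sized 2-D arrays."""
            c_nir = pywt.swt2(nir.astype(float), wavelet, level=level)
            c_pan = pywt.swt2(pan.astype(float), wavelet, level=level)
            fused = []
            for (a_n, d_n), (a_p, d_p) in zip(c_nir, c_pan):
                details = tuple(np.where(np.abs(dn) >= np.abs(dp), dn, dp)
                                for dn, dp in zip(d_n, d_p))  # max selection rule
                fused.append((a_n, details))                  # keep NIR approximation
            return pywt.iswt2(fused, wavelet)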

  7. An imaging method of wavefront coding system based on phase plate rotation

    NASA Astrophysics Data System (ADS)

    Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2018-01-01

    Wave-front coding has great prospects for extending the depth of field of optical imaging systems and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Starting from a theoretical analysis of the wave-front coding system and the phase function of the cubic phase plate, this paper exploits the fact that the phase function expression is invariant in the new coordinate system when the phase plate is rotated around the z-axis, and proposes a method based on phase plate rotation and image fusion. First, the phase plate is rotated by a certain angle around the z-axis; the shape and distribution of the PSF obtained in the image plane remain unchanged, while its rotation angle and direction follow those of the phase plate. Then, the intermediate blurred image is filtered with the rotation-adjusted point spread function. Finally, the reconstructed images are fused using Laplacian pyramid image fusion and Fourier transform spectrum fusion, and the results are evaluated subjectively and objectively. Matlab simulations show that with Laplacian pyramid fusion the signal-to-noise ratio of the image increases by 19%-27%, clarity by 11%-15%, and average gradient by 4%-9%; with Fourier transform spectrum fusion the signal-to-noise ratio increases by 14%-23%, clarity by 6%-11%, and average gradient by 2%-6%. The experimental results show that the proposed method improves the quality and clarity of the restored image while effectively preserving image information.

  8. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

  9. Cone beam computed tomography images fusion in predicting lung ablation volumes: a feasibility study.

    PubMed

    Ierardi, Anna Maria; Petrillo, Mario; Xhepa, Genti; Laganà, Domenico; Piacentino, Filippo; Floridi, Chiara; Duka, Ejona; Fugazzola, Carlo; Carrafiello, Gianpaolo

    2016-02-01

    Recently, software packages able to plan ablation volumes have been developed to minimize the number of electrode positioning attempts and to improve safe overall tumor coverage. This study assessed the feasibility of three-dimensional cone beam computed tomography (3D CBCT) fusion imaging with "virtual probe" positioning to predict the ablation volume in lung tumors treated percutaneously. Pre-procedural contrast-enhanced computed tomography scans (CECT) were merged with a CBCT volume obtained to plan the ablation. An offline tumor segmentation was performed to determine the number of antennae and their positioning within the tumor. The ablation volume obtained, evaluated on CECT performed after 1 month, was compared with the pre-procedural prediction. Feasibility was assessed on the basis of accuracy evaluation (visual evaluation [VE] and quantitative evaluation [QE]), technical success (TS), and technical effectiveness (TE). Seven patients with lung tumors treated by percutaneous thermal ablation were selected and treated on the basis of the 3D CBCT fusion imaging. In all cases the predicted ablation volume was in accordance with that obtained. The difference between predicted ablation volumes and those obtained on CECT at 1 month was 1.8 cm³ (SD ± 2, min. 0.4, max. 0.9) for MW and 0.9 cm³ (SD ± 1.1, min. 0.1, max. 0.7) for RF. Pre-procedural 3D CBCT fusion imaging could be useful to define expected ablation volumes; however, more patients are needed to establish stronger evidence. © The Foundation Acta Radiologica 2015.

  10. Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images

    NASA Astrophysics Data System (ADS)

    Zhu, Chaozhe; Jiang, Tianzi

    2003-05-01

    In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, which characterize tissue properties from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge-guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. The information provided by the different spectral images is extracted and modeled separately in each spectrum using fuzzy sets, with the aim of managing the uncertainty and ambiguity in the images due to noise and the partial volume effect. In the second part, a possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and the final fuzzy map of MS lesions is then constructed by fusing the fuzzy maps obtained from the different spectra. Finally, the 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
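
    The fusion step itself reduces to combining the per-spectrum fuzzy lesion maps with standard fuzzy operators, as in this toy sketch; the knowledge-guided construction of each map is not reproduced here, and the operator choice is an assumption.

        # Toy sketch: fuse fuzzy lesion membership maps from T1, T2 and PD images.
        import numpy as np

        def fuse_fuzzy_maps(mu_t1, mu_t2, mu_pd, mode='compromise'):
            stack = np.stack([mu_t1, mu_t2, mu_pd])
            if mode == 'conjunctive':       # severe: all sources must agree
                return stack.min(axis=0)
            if mode == 'disjunctive':       # indulgent: any single source suffices
                return stack.max(axis=0)
            return stack.mean(axis=0)       # compromise between the two behaviors

        # lesions = fuse_fuzzy_maps(mu_t1, mu_t2, mu_pd) > 0.5   # defuzzify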

  11. A weighted variational gradient-based fusion method for high-fidelity thin cloud removal of Landsat images

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Chen, Xiu; Wang, Yueyun

    2018-03-01

    Landsat data are widely used in various earth observations, but clouds interfere with the application of the images. This paper proposes a weighted variational gradient-based fusion method (WVGBF) for high-fidelity thin cloud removal from Landsat images, an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates gradient information from the reference band into the visible bands of the cloudy image to enhance spatial detail and remove thin clouds; however, it applies the same gradient constraint to the entire image, which causes color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to ensure image fidelity. The distribution of the weight coefficient is derived from a cloud thickness map, built using Independent Component Analysis (ICA) on multi-temporal Landsat images. Quantitatively, we use the R value to evaluate fidelity in the cloudless regions and the metric Q to evaluate clarity in the cloud areas. The experimental results indicate that the proposed method better removes thin clouds while achieving high fidelity.

  12. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared with the models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows one to determine when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes distortion or interference; better performance than either single-sensor system alone, but at a sub-optimal level compared with model predictions; optimal performance compared with model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.

  13. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving imaging quality in low-light shooting is one of users' key needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light images. A color-plus-mono dual camera, consisting of two horizontally separated image sensors that simultaneously capture a color and a mono image pair of the same scene, can be useful for improving the quality of low-light images. However, incorrect image fusion between the color and mono image pair can also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer are helpful in improving image quality during low-light shooting.
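
    A minimal sketch of the guided-filter denoising and detail-transfer engine (after He et al.), with the BJND-based reliability analysis abstracted into a precomputed boolean mask; the radii, eps, and fallback rule are my assumptions, not the paper's settings.

        # Hedged sketch: mono-guided denoising plus selective detail transfer.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(guide, src, radius=8, eps=1e-3):
            size = 2 * radius + 1
            mean_g = uniform_filter(guide, size)
            mean_s = uniform_filter(src, size)
            cov = uniform_filter(guide * src, size) - mean_g * mean_s
            var = uniform_filter(guide * guide, size) - mean_g ** 2
            a = cov / (var + eps)                 # local linear model src ~ a*guide + b
            b = mean_s - a * mean_g
            return uniform_filter(a, size) * guide + uniform_filter(b, size)

        def fuse_luma(color_luma, mono, reliable):
            denoised = guided_filter(mono, color_luma)   # mono guides the denoising
            detail = mono - uniform_filter(mono, 17)     # high-frequency mono detail
            out = denoised + detail
            return np.where(reliable, out, color_luma)   # fall back where unreliable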

  14. Distributed ISAR Subimage Fusion of Nonuniform Rotating Target Based on Matching Fourier Transform.

    PubMed

    Li, Yuanyuan; Fu, Yaowen; Zhang, Wenpeng

    2018-06-04

    In real applications, the image quality of conventional monostatic Inverse Synthetic Aperture Radar (ISAR) for a maneuvering target is subject to strong fluctuation of the Radar Cross Section (RCS), as the target aspect varies enormously. Meanwhile, a maneuvering target introduces nonuniform rotation after translational motion compensation, which degrades the cross-range imaging performance of the conventional Fourier Transform (FT)-based method. In this paper, a method combining the distributed ISAR technique with the Matching Fourier Transform (MFT) is proposed to overcome these problems. First, exploiting the characteristics of distributed ISAR, multi-channel echoes of the nonuniformly rotating target are acquired from different observation angles. Then, by applying the MFT to the echo of each channel, the defocusing of nonuniformly rotating targets, which is inevitable with FT-based imaging methods, is avoided. Finally, after preprocessing, scaling, and rotation of all subimages, a noncoherent fusion image containing the RCS information of all channels is obtained; the accumulation coefficients of the subimages are calculated adaptively according to their image qualities. Simulated and experimental data are used to validate the effectiveness of the proposed approach, and a fusion image with improved recognizability is obtained. Therefore, by using the distributed ISAR technique and the MFT, subimages of a highly maneuvering target from different observation angles can be obtained; by employing the adaptive subimage fusion method, the RCS fluctuation can be alleviated and a more recognizable final image obtained.
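
    The core of the MFT is a Fourier kernel matched to the (known or estimated) nonuniform rotation angle rather than to uniform time, as in this sketch; variable names are mine.

        # Hedged sketch of a discrete Matching Fourier Transform.
        import numpy as np

        def matching_fourier_transform(signal, theta, omegas):
            """signal: slow-time samples; theta: rotation angle at each sample (rad);
            omegas: analysis frequencies. Returns the focused cross-range profile."""
            # the standard kernel exp(-1j*w*t) is replaced by exp(-1j*w*theta(t))
            kernel = np.exp(-1j * np.outer(omegas, theta))
            return kernel @ signal

        # with uniform rotation, theta = Omega * t, this reduces to an ordinary DFT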

  15. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easily obtained at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
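
    A sketch of matrix-pencil pole estimation for a single subband; the pencil parameter, model order, and SVD rank reduction are standard choices, while the ICP prediction and the iterative fusion loop of the paper are not shown.

        # Hedged sketch of the matrix pencil method (Hua-Sarkar style).
        import numpy as np
        from scipy.linalg import hankel, pinv

        def matrix_pencil_poles(x, order, L=None):
            """Estimate poles z_k from samples x[n] = sum_k a_k * z_k**n + noise."""
            N = len(x)
            L = L or N // 2                        # pencil parameter, order <= L < N
            Y = hankel(x[:N - L], x[N - L - 1:])   # (N-L) x (L+1) Hankel data matrix
            # rank-reduce via SVD to suppress noise
            U, s, Vh = np.linalg.svd(Y, full_matrices=False)
            V = Vh[:order].conj().T                # dominant right singular vectors
            V1, V2 = V[:-1], V[1:]
            return np.linalg.eigvals(pinv(V1) @ V2)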

  16. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easily obtained at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR. PMID:26781194

  17. F-18 Labeled Diabody-Luciferase Fusion Proteins for Optical-ImmunoPET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Anna M.

    2013-01-18

    The goal of the proposed work is to develop novel dual-labeled molecular imaging probes for multimodality imaging. Based on small, engineered antibodies called diabodies, these probes will be radioactively tagged with Fluorine-18 for PET imaging and fused to luciferases for optical (bioluminescence) detection. Performance will be evaluated and validated using a prototype integrated optical-PET imaging system, OPET. Multimodality probes for optical-PET imaging will be based on diabodies that are dually labeled with 18F for PET detection and fused to luciferases for optical imaging. 1) Two sets of fusion proteins will be built, targeting the cell surface markers CEA or HER2. Coelenterazine-based luciferases and variant forms will be evaluated in combination with the native substrate and analogs, in order to obtain two distinct probes recognizing different targets with different spectral signatures. 2) Diabody-luciferase fusion proteins will be labeled with 18F using amine-reactive [18F]-SFB produced by a novel microwave-assisted, one-pot method. 3) Site-specific, chemoselective radiolabeling methods will be devised to reduce the chance that radiolabeling inactivates either the target-binding properties or the bioluminescence properties of the diabody-luciferase fusion proteins. 4) Combined optical and PET imaging of these dual-modality probes will be evaluated and validated in vitro and in vivo using the prototype OPET system. Each imaging modality has its strengths and weaknesses. Development and use of dual-modality probes allows optical imaging to benefit from the localization and quantitation offered by the PET mode, and enhances PET imaging by enabling simultaneous detection of more than one probe.

  18. PET/CT image registration: preliminary tests for its application to clinical dosimetry in radiotherapy.

    PubMed

    Baños-Capilla, M C; García, M A; Bea, J; Pla, C; Larrea, L; López, E

    2007-06-01

    The quality of dosimetry in radiotherapy treatment requires accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, the use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use in obtaining images to delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers used as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms: first, the fusion method used by the GEMS PET/CT system (hardware fusion), which rests on satisfactory coincidence between the reconstruction centers of the CT and PET systems; and second, fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with verification of the centroid position of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to a center-of-mass, weighted by the CT number and by the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δx| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δx| ± σ = 0.3 mm ± 1.7 mm. We also noted that the differences found for each fusion method were similar for both the axial and helical CT image acquisition protocols.
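
    The centroid comparison reduces to an intensity-weighted center of mass inside each phantom component, as in this sketch (assumes non-negative intensities, e.g., offset-corrected CT numbers); names are illustrative.

        # Hedged sketch of the weighted-centroid check between modalities.
        import numpy as np
        from scipy.ndimage import center_of_mass

        def weighted_centroid(volume, mask):
            """volume: CT numbers or PET uptake (non-negative); mask: component."""
            return np.array(center_of_mass(np.where(mask, volume, 0.0)))

        # deviation (in voxels) between the CT and PET centroids of one component:
        # delta = weighted_centroid(ct, mask_ct) - weighted_centroid(pet, mask_pet)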

  19. The mechanism in junctional failure of thoraco-lumbar fusions. Part II: Analysis of a series of PJK after thoraco-lumbar fusion to determine parameters allowing to predict the risk of junctional breakdown.

    PubMed

    Faundez, Antonio A; Richards, Jonathon; Maxy, Philippe; Price, Rachel; Léglise, Amélie; Le Huec, Jean-Charles

    2018-02-01

    To identify risk factors in 12 patients with junctional breakdown (JBD) after thoraco-sacral fusion, and to test software locating the maximal bending moment on full-spine EOS images. Twelve patients underwent long fusions for lumbar degenerative pathologies. Preoperative EOS images were compared with the first postoperative EOS images showing JBD. Parameters analyzed were: spinopelvic parameters [pelvic incidence (PI), pelvic tilt (PT), sacral slope (SS), sagittal vertical axis (SVA), spinosacral angle (SSA), lordosis, and kyphosis], proximal junctional angle (PJA), odontoid-hip axis angle (ODHA), and CIA. New software estimated the location of the maximum bending moment (M_max) before and after JBD. All patients except one had a JBD located between T10 and L1, diagnosed at an average follow-up of 18.58 months. The JBD was a fracture in six patients and severe adjacent disc degeneration in the remainder. Average PI was 52°. PT increased and SS decreased after JBD versus preop (p > 0.05). Average PJA was 34.5°. Global lordosis (GLL), upper lordosis (ULL), L4-S1 lordosis, and thoracic kyphosis (TK) were increased (p < 0.05). Lower lumbar lordosis (LLL) was not increased post-JBD (p = 0.6). SVA, SSA, ODHA, and C7 slope were not modified (p > 0.05). The average CIA value decreased by 7.5% after JBD. T1-T5 alignment was correlated with C7 slope before (R² = 0.77075) and after JBD (R² = 0.85409). ODHA decreased after JBD (p > 0.05). Most JBD occurred at, or one level away from, the preoperative M_max location. This study confirms the importance of a harmonious distribution of the lumbar (GLL, ULL, and LLL) and thoracic curves (TK, T1-T5 segment) in thoraco-sacral fusions. All patients showed an exaggerated ULL, resulting in a posterior shift and an increased lever arm at the thoraco-lumbar junction, leading to JBD.

  20. Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning.

    PubMed

    Dong, Pei; Guo, Yangrong; Gao, Yue; Liang, Peipeng; Shi, Yonghong; Wang, Qian; Shen, Dinggang; Wu, Guorong

    2016-10-01

    Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in various neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. Hence, the ambiguity of patch-wise similarity makes it difficult for the recently successful multi-atlas patch-based label fusion methods to perform as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Specifically, we achieve this goal in three ways. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches with the benefit of harnessing population priors from the multi-atlas based framework. Second, besides using low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationship. Since the context features are calculated on a tentatively estimated label probability map, we eventually turn our hyper-graph learning based label propagation into a deep and self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on other voxels (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to the nearby difficult-to-label voxels. Such a hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate our proposed label fusion method in segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over state-of-the-art label fusion methods.
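
    For context, the conventional multi-atlas patch-based label fusion that this work builds on can be sketched in a few lines: each target voxel collects votes from registered atlas patches, weighted by patch similarity. The sketch below shows that baseline only (not the authors' hyper-graph model), and all array names and the bandwidth parameter are illustrative.

```python
import numpy as np

def patch(img, c, r):
    """Extract a flattened (2r+1)^3 patch centered at interior voxel c."""
    z, y, x = c
    return img[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1].ravel()

def patch_label_fusion(target, atlases, labels, c, r=2, h=0.1):
    """Weighted voting at voxel c.

    atlases/labels: lists of registered atlas intensity and label volumes.
    The weight of each atlas vote falls off with the squared intensity
    distance between target and atlas patches (bandwidth h).
    """
    tp = patch(target, c, r)
    votes = {}
    for img, lab in zip(atlases, labels):
        ap = patch(img, c, r)
        w = np.exp(-np.sum((tp - ap) ** 2) / (h * tp.size))
        votes[lab[c]] = votes.get(lab[c], 0.0) + w
    return max(votes, key=votes.get)  # label with the largest weighted vote
```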

  1. BIM and IoT: A Synopsis from GIS Perspective

    NASA Astrophysics Data System (ADS)

    Isikdag, U.

    2015-10-01

    Internet-of-Things (IoT) focuses on enabling communication between all devices, things that exist in real life or that are virtual. Building Information Models (BIMs) and Building Information Modelling have been buzzwords of the construction industry for the last 15 years. BIMs emerged as a result of a push by software companies to tackle the problems of inefficient information exchange between different software packages and to enable true interoperability. In the BIM approach, the most up-to-date and accurate models of a building are stored in shared central databases during the design and construction of a project and at post-construction stages. GIS-based city monitoring / city management applications require the fusion of information acquired from multiple resources: BIMs, City Models, and Sensors. This paper focuses on providing a method for facilitating the GIS-based fusion of information residing in digital building "Models" and information acquired from the city objects, i.e., "Things". Once this information fusion is accomplished, many fields ranging from Emergency Response, Urban Surveillance, and Urban Monitoring to Smart Buildings will have potential benefits.

  2. Added Value of Contrast-Enhanced Ultrasound on Biopsies of Focal Hepatic Lesions Invisible on Fusion Imaging Guidance

    PubMed Central

    Kang, Tae Wook; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun

    2017-01-01

    Objective To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and evaluate its impact on clinical decision making. Materials and Methods The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Results Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5–1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). Conclusion The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making. PMID:28096725

  3. Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype

    NASA Astrophysics Data System (ADS)

    Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes

    2010-12-01

    Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological (microPET) images of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the rigid-body Procrustes alignment problem to obtain the rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between the different structures, with a calculated target registration error of 0.32 mm.
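
    The orthogonal Procrustes step mentioned here has a standard closed-form solution (often called the Kabsch algorithm): center both marker sets, take an SVD of their cross-covariance, and read off the rotation. A minimal sketch, with hypothetical marker coordinates:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding fiducial marker centroids.
    Solves the orthogonal Procrustes problem via SVD (Kabsch algorithm).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducial centroids in microPET and microCT coordinates.
pet_pts = np.array([[10.0, 0.0, 5.0], [0.0, 12.0, 3.0],
                    [-8.0, 4.0, 9.0], [2.0, -6.0, 1.0]])
ct_pts = pet_pts @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [1.0, 2.0, 0.5]
R, t = rigid_align(pet_pts, ct_pts)
rmse = np.sqrt(np.mean(np.sum((pet_pts @ R.T + t - ct_pts) ** 2, axis=1)))
print("target registration error (RMSE):", rmse)
```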

  4. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different image modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA), termed EBPGSA, to train the network and update its parameters. Two different patterns of images are used as inputs of the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method through subjective observation and objective evaluation indexes reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. We also trained the network using the EBPA and GSA individually: analysis of the same evaluation indexes reveals that the EBPGSA not only outperformed both EBPA and GSA, but also trained the neural network more accurately. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry.

    PubMed

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-21

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimally visualizing x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
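
    The three-step pipeline (denoise, wavelet-domain fusion, enhancement) maps naturally onto standard library calls. Below is a condensed two-image sketch using SciPy's Wiener filter and PyWavelets' stationary (shift-invariant) wavelet transform; the fusion rules (average the approximation, take maximum-absolute details) are common defaults rather than necessarily the paper's exact rules, and the image arrays are placeholders.

```python
import numpy as np
import pywt
from scipy.signal import wiener

def swt_fuse(a, b, wavelet="db2", level=2):
    """Fuse two same-size images in the stationary wavelet domain.

    Approximation coefficients are averaged; detail coefficients are
    chosen by maximum absolute value. Image sides must be divisible
    by 2**level for the SWT.
    """
    ca = pywt.swt2(a, wavelet, level=level)
    cb = pywt.swt2(b, wavelet, level=level)
    fused = []
    for (aA, aD), (bA, bD) in zip(ca, cb):
        fA = 0.5 * (aA + bA)
        fD = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                   for d1, d2 in zip(aD, bD))
        fused.append((fA, fD))
    return pywt.iswt2(fused, wavelet)

# Placeholder AC and DPC images (256x256), denoised then fused.
ac = wiener(np.random.rand(256, 256), mysize=5)
dpc = wiener(np.random.rand(256, 256), mysize=5)
step1 = swt_fuse(ac, dpc)            # AC + DPC
# A DFC image would be fused in a second pass: swt_fuse(step1, dfc)
```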

  6. Fusion of Geophysical Images in the Study of Archaeological Sites

    NASA Astrophysics Data System (ADS)

    Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.

    2011-12-01

    This paper presents results from different fusion techniques applied to geophysical images from different modalities, combining them into one image with higher information content than either of the two original images independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of buried urban structures. In order to accurately locate and map the latter, geophysical measurements were performed using the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration between the geophysical images in order to fine-register them by correcting the local spatial offsets produced by the use of hand-held devices. After this procedure, we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows the integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three different fusion techniques: fusion with mean values, with wavelets by enhancing selected frequency bands, and with curvelets giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. The comparison of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image. Clear linear and ellipsoidal features corresponding to potential archaeological relics appear in the resultant image.

  7. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions.

    PubMed

    Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R

    2017-06-01

    To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.

  8. Speaker-independent phoneme recognition with a binaural auditory image model

    NASA Astrophysics Data System (ADS)

    Francis, Keith Ivan

    1997-09-01

    This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.

  9. Salient region detection by fusing bottom-up and top-down features extracted from a single image.

    PubMed

    Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng

    2014-10-01

    Recently, some global contrast-based salient region detection models have been proposed based only on the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on bottom-up and top-down mechanisms: color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of the final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree of scattering and the eccentricities of the feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each map. The depth-from-focus of the image, as a significant top-down feature for visual attention, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.

  10. Data Strategies to Support Automated Multi-Sensor Data Fusion in a Service Oriented Architecture

    DTIC Science & Technology

    2008-06-01

    This dissertation provides two software architectural patterns and an auto-fusion process that guide the development of a distributed ... Keywords: Universal Description, Discovery and Integration (UDDI), Simple Object Access Protocol (SOAP), Java, Maritime Domain Awareness (MDA), Business Process Execution Language for Web Services (BPEL4WS).

  11. Combining transrectal ultrasound and CT for image-guided adaptive brachytherapy of cervical cancer: Proof of concept.

    PubMed

    Nesvacil, Nicole; Schmid, Maximilian P; Pötter, Richard; Kronreif, Gernot; Kirisits, Christian

    To investigate the feasibility of a treatment planning workflow for three-dimensional image-guided cervix cancer brachytherapy, combining volumetric transrectal ultrasound (TRUS) for target definition with CT for dose optimization to organs at risk (OARs), for settings with no access to MRI. A workflow for TRUS/CT-based volumetric treatment planning was developed, based on a customized system including ultrasound probe, stepper unit, and software for image volume acquisition. A full TRUS/CT-based workflow was simulated in a clinical case and compared with MR- or CT-only delineation. The high-risk clinical target volume was delineated on TRUS, and OARs were delineated on CT. Manually defined tandem/ring applicator positions on TRUS and CT were used as a reference for rigid registration of the image volumes. Treatment plan optimization for TRUS target and CT organ volumes was performed and compared to MRI and CT target contours. TRUS/CT-based contouring, applicator reconstruction, image fusion, and treatment planning were feasible, and the full workflow could be successfully demonstrated. The TRUS/CT plan fulfilled all clinical planning aims. Dose-volume histogram evaluation of the TRUS/CT-optimized plan (high-risk clinical target volume D90, OAR D2cm³) on different image modalities showed good agreement between dose values reported for TRUS/CT and MRI-only reference contours, and large deviations for CT-only target parameters. A TRUS/CT-based workflow for full three-dimensional image-guided cervix brachytherapy treatment planning seems feasible and may be clinically comparable to MRI-based treatment planning. Further development to solve challenges with applicator definition in the TRUS volume is required before systematic applicability of this workflow. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
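
    The D90 and D2cm³ metrics evaluated above are simple order statistics over a dose grid: D90 is the minimum dose received by the hottest 90% of the target volume, and D2cm³ is the minimum dose in the hottest 2 cm³ of an organ at risk. A minimal sketch, assuming a dose array sampled on a uniform grid with known voxel volume (the input values are synthetic placeholders):

```python
import numpy as np

def dose_at_volume(dose_in_structure, volume_fraction):
    """D_V: minimum dose received by the hottest `volume_fraction` of a structure."""
    d = np.sort(np.asarray(dose_in_structure).ravel())[::-1]  # hottest first
    n = max(1, int(round(volume_fraction * d.size)))
    return d[:n].min()

def dose_at_cc(dose_in_structure, voxel_volume_cc, cc):
    """D_cc: minimum dose in the hottest `cc` cm^3 of a structure."""
    d = np.sort(np.asarray(dose_in_structure).ravel())[::-1]
    n = max(1, int(round(cc / voxel_volume_cc)))
    return d[:n].min()

# Hypothetical values: doses (Gy) inside the CTV and an OAR mask.
ctv_dose = np.random.gamma(9.0, 1.0, size=20000)
oar_dose = np.random.gamma(4.0, 1.0, size=15000)
print("CTV D90:", dose_at_volume(ctv_dose, 0.90))
print("OAR D2cc:", dose_at_cc(oar_dose, voxel_volume_cc=0.001, cc=2.0))
```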

  12. [Research Progress of Multi-Modal Medical Image Fusion at Feature Level].

    PubMed

    Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun

    2016-04-01

    Medical image fusion realizes the integration of the advantages of functional and anatomical images. This article discusses the research progress of multi-modal medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-modal medical image fusion.

  13. Enhanced image capture through fusion

    NASA Technical Reports Server (NTRS)

    Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.

    1993-01-01

    Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
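
    One widely used member of this family of methods is pyramid-based fusion: decompose each source image into a Laplacian pyramid, pick at every level and position the coefficient with the strongest response, and reconstruct. The sketch below (OpenCV + NumPy, square images with power-of-two sides assumed) illustrates that generic select-max scheme rather than the authors' exact selection and consistency rules.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid; the last entry is the coarse residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)
        cur = down
    pyr.append(cur)
    return pyr

def fuse_select_max(a, b, levels=4):
    """Per-coefficient select-max fusion of two grayscale images."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))  # average the coarse residual
    out = fused[-1]
    for lap in reversed(fused[:-1]):       # reconstruct fine levels
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return out

# Placeholder IR and visible frames of identical size.
ir = np.random.rand(256, 256).astype(np.float32)
vis = np.random.rand(256, 256).astype(np.float32)
composite = fuse_select_max(ir, vis)
```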

  14. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, the dynamics of ecosystems, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data with other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse the spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with spatial resolution of 30 m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., only one Landsat-MODIS image pair being available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combining the high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor the land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather of the region where Shenzhen is located makes the capture cycle of high-quality satellite images longer than their normal revisit periods. Spatial-temporal fusion methods are capable of tackling this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection.
Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from hyperspectral image, which is characterized by low spatial resolution but high spectral resolution and abbreviated as LSHS, and the spatial information from multispectral image, which is featured by high spatial resolution but low spectral resolution and abbreviated as HSLS, this method aims to generate the fused data with both high spatial and high spectral resolutions. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories data then formulate a dictionary-pair due to their correspondence in representing each pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of LSHS data and the representation coefficients of HSLS data, we finally derive the fused data characterized by the spectral resolution of LSHS data and the spatial resolution of HSLS data.

  15. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.

    PubMed

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-13

    The main issue of vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noises are reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy in citrus fruit identification under natural lighting conditions.

  16. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents

    PubMed Central

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-01

    The main issue of vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noises are reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy in citrus fruit identification under natural lighting conditions. PMID:28098797

  17. Covariance descriptor fusion for target detection

    NASA Astrophysics Data System (ADS)

    Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih

    2016-05-01

    Target detection is one of the most important topics for military and civilian applications. In order to address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection involves various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor offers many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusion of covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF presents better performance than the conventional covariance descriptor.
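
    A region covariance descriptor is simply the covariance matrix of per-pixel feature vectors collected over a region, which makes it compact and easy to compute. Below is a minimal sketch for a small band stack; the choice of features (coordinates, band values, gradients) follows common practice for covariance descriptors and is not taken from this paper.

```python
import numpy as np

def covariance_descriptor(bands):
    """Covariance descriptor of an image region.

    bands: (H, W, C) array of spectral bands (or band-cluster averages)
    cropped to the region of interest. Per-pixel features are the pixel
    coordinates, band values, and gradient magnitudes of the first band.
    Returns a (d, d) covariance matrix with d = 2 + C + 2.
    """
    h, w, c = bands.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(bands[..., 0])
    feats = np.concatenate(
        [xx[..., None], yy[..., None], bands,
         np.abs(gx)[..., None], np.abs(gy)[..., None]], axis=-1)
    f = feats.reshape(-1, feats.shape[-1])
    return np.cov(f, rowvar=False)

# Hypothetical 16x16 region from a 5-band cluster of a hyperspectral cube.
region = np.random.rand(16, 16, 5)
C = covariance_descriptor(region)
print(C.shape)  # (9, 9)
```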

  18. Data fusion and photometric restoration

    NASA Astrophysics Data System (ADS)

    Pirzkal, Norbert; Hook, Richard N.

    2001-11-01

    The current generation of 8-10 m optical ground-based telescopes have a symbiotic relationship with space telescopes. For direct imaging in the optical, the former can collect photons relatively cheaply, but the latter can still achieve, even in the era of adaptive optics, significantly higher spatial resolution, point-spread function stability and astrometric fidelity over fields of a few arcminutes. The large archives of HST imaging already in place, when combined with the ease of access to ground-based data afforded by the virtual observatory currently under development, will make space-ground data fusion a powerful tool for the future. We describe a photometric image restoration method that we have developed which allows the efficient and accurate use of high-resolution space imaging of crowded fields to extract high-quality photometry from very crowded ground-based images. We illustrate the method using HST and ESO VLT/FORS imaging of a globular cluster and quantitatively demonstrate the photometric measurement quality that can be achieved using the data fusion approach instead of data from a single telescope. This method can handle most of the common difficulties encountered when attempting this problem, such as determining the geometric mapping to the requisite precision, deriving the PSF and the background.

  19. Integrating image quality in 2nu-SVM biometric match score fusion.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2007-10-01

    This paper proposes an intelligent 2nu-support vector machine (2nu-SVM) based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies the redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using the 2nu-SVM to improve verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
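
    The fusion step itself can be prototyped with any SVM library: stack each sample's match score and quality score into a feature vector and train a nu-parameterized SVM on genuine/impostor labels. A rough sketch with scikit-learn's NuSVC, which implements a single-nu variant rather than the paper's exact 2nu formulation; all training data below are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import NuSVC

# Hypothetical training data: one row per comparison,
# columns = [match_score, composite_quality_score].
rng = np.random.default_rng(0)
genuine = np.column_stack([rng.normal(0.8, 0.1, 500), rng.uniform(0.4, 1.0, 500)])
impostor = np.column_stack([rng.normal(0.4, 0.1, 500), rng.uniform(0.4, 1.0, 500)])
X = np.vstack([genuine, impostor])
y = np.array([1] * 500 + [0] * 500)  # 1 = genuine, 0 = impostor

clf = NuSVC(nu=0.1, kernel="rbf", gamma="scale").fit(X, y)

# Fused accept/reject decision for a new (match, quality) pair.
print(clf.predict([[0.7, 0.9]]))
```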

  20. A new method of Quickbird own image fusion

    NASA Astrophysics Data System (ADS)

    Han, Ying; Jiang, Hong; Zhang, Xiuying

    2009-10-01

    With the rapid development of remote sensing technology, the means of access to remote sensing data have become increasingly abundant, so that the same area can be covered by a large number of multi-temporal image sequences of different resolutions. At present, the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, and the wavelet transform. The IHS transform suffers from serious spectral distortion; the Mallat algorithm omits the low-frequency information of high spatial resolution images, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition of different sizes, directions, details, and edges can achieve very good results, but different fusion rules and algorithms achieve different effects. This article takes the fusion of Quickbird's own images as an example, comparing integration based on the wavelet transform with HVS against the wavelet transform with IHS. The results show that the former is better. This paper introduces the correlation coefficient, the relative average spectral error index, and the usual indexes to evaluate image quality.

  1. GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array

    PubMed Central

    Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.

    2014-01-01

    Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080

  2. Test technology on divergence angle of laser range finder based on CCD imaging fusion

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-bing; Chen, Zhen-xing; Lv, Yao

    2016-09-01

    The laser range finder is an important component of fire control systems and has been fitted to all kinds of weapon platforms, such as tanks, ships, and aircraft. The divergence angle is an important performance parameter and an embodiment of the horizontal resolving power of a laser range finder, and it is a required item in appraisal testing. In this paper, to test the divergence angle of a laser range finder with high accuracy, a divergence angle test system is designed based on CCD imaging. The divergence angle is acquired through fusion of images taken at different attenuation levels, which solves the problem that CCD characteristics influence the divergence angle test.

  3. Fundus image fusion in EYEPLAN software: An evaluation of a novel technique for ocular melanoma radiation treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daftari, Inder K.; Mishra, Kavita K.; O'Brien, Joan M.

    Purpose: The purpose of this study is to evaluate a novel approach for treatment planning using digital fundus image fusion in EYEPLAN for proton beam radiation therapy (PBRT) planning for ocular melanoma. The authors used a prototype version of EYEPLAN software, which allows for digital registration of high-resolution fundus photographs. The authors examined the improvement in tumor localization by replanning with the addition of fundus photo superimposition in patients with macular area tumors. Methods: The new version of EYEPLAN (v3.05) software allows for the registration of fundus photographs as a background image. This is then used in conjunction with clinical examination, tantalum marker clips, surgeon's mapping, and ultrasound to draw the tumor contour accurately. In order to determine if the fundus image superimposition helps in tumor delineation and treatment planning, the authors identified 79 patients with choroidal melanoma in the macular location that were treated with PBRT. All patients were treated to a dose of 56 GyE in four fractions. The authors reviewed and replanned all 79 macular melanoma cases with superimposition of pretreatment and post-treatment fundus imaging in the new EYEPLAN software. For patients with no local failure, the authors analyzed whether fundus photograph fusion accurately depicted and confirmed tumor volumes as outlined in the original treatment plan. For patients with local failure, the authors determined whether the addition of the fundus photograph might have benefited in terms of more accurate tumor volume delineation. Results: The mean follow-up of patients was 33.6 ± 23 months. Tumor growth was seen in six eyes of the 79 macular lesions. All six patients were marginal failures or tumor misses in the region of dose fall-off, including one patient with both in-field and marginal recurrence. Among the six recurrences, three were managed by enucleation and one underwent retreatment with proton therapy. Three patients developed distant metastasis and all three patients have since died. The replanning of six patients with their original fundus photograph superimposed showed that in four cases, the treatment field adequately covered the tumor volume. In the other two patients, the overlaid fundus photographs indicated the area of marginal miss. The replanning with the fundus photograph showed improved tumor coverage in these two macular lesions. For the remaining patients without local failure, replanning with fundus photograph superimposition confirmed the tumor volume as drawn in the original treatment plan. Conclusions: Local control was excellent in patients receiving 56 GyE of PBRT for uveal melanomas in the macular region, which traditionally can be more difficult to control. Posterior lesions are better defined with the additional use of the fundus image, since they can be difficult to mark surgically. In one-third of the patients with treatment failure, the superposition of the fundus photograph would have clearly allowed improved localization of the tumor. The current practice standard is to use the superimposition of the fundus photograph in addition to the surgeon's clinical and clip mapping of the tumor and ultrasound measurement to draw the tumor volume.

  4. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missing points, which keeps the final classification accuracy low. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform, and Brovey transform) and select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine, and ISODATA) for a contrast experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and analyze the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
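
    Of the three fusion algorithms compared, the Brovey transform is the simplest to state: each multispectral band is scaled by the ratio of the high-resolution band to the mean of the multispectral bands. A minimal sketch, assuming co-registered, same-size float arrays; the array shapes and names are placeholders.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey transform fusion.

    ms:  (H, W, C) multispectral bands, upsampled to the pan grid.
    pan: (H, W) high-resolution band (panchromatic or, here, SAR).
    Each band is modulated by pan / mean(ms), so spatial detail comes
    from `pan` while the band ratios (spectral shape) are preserved.
    """
    intensity = ms.mean(axis=-1, keepdims=True)
    return ms * (pan[..., None] / (intensity + eps))

# Placeholder data: 4-band optical image and a co-registered SAR band.
ms = np.random.rand(128, 128, 4)
pan = np.random.rand(128, 128)
fused = brovey_fusion(ms, pan)
```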

  5. A novel imaging technique for fusion of high-quality immobilised MR images of the head and neck with CT scans for radiotherapy target delineation.

    PubMed

    Webster, G J; Kilgallon, J E; Ho, K F; Rowbottom, C G; Slevin, N J; Mackay, R I

    2009-06-01

    Uncertainty and inconsistency are observed in target volume delineation in the head and neck for radiotherapy treatment planning based only on CT imaging. Alternative modalities such as MRI have previously been incorporated into the delineation process to provide additional anatomical information. This work aims to improve on previous studies by combining good image quality with precise patient immobilisation in order to maintain the patient position between scans. MR images were acquired using quadrature coils placed over the head and neck while the patient was immobilised in the treatment position using a five-point thermoplastic shell. The MR and CT images were automatically fused in the Pinnacle treatment planning system using Syntegra software. Image quality, distortion and accuracy of the image registration using patient anatomy were evaluated. Image quality was found to be superior to that acquired using the body coil, while distortion was < 1.0 mm to a radius of 8.7 cm from the scan centre. Image registration accuracy was found to be 2.2 mm (± 0.9 mm) and < 3.0 degrees (n = 6). A novel MRI technique that combines good image quality with patient immobilisation has been developed and is now in clinical use. The scan duration of approximately 15 min has been well tolerated by all patients.

  6. Pilot Study of an Open-source Image Analysis Software for Automated Screening of Conventional Cervical Smears.

    PubMed

    Sanyal, Parikshit; Ganguli, Prosenjit; Barui, Sanghita; Deb, Prabal

    2018-01-01

    The Pap-stained cervical smear is a screening tool for cervical cancer. Commercial systems are used for automated screening of liquid-based cervical smears. However, there is no image analysis software used for conventional cervical smears. The aim of this study was to develop and test the diagnostic accuracy of software for the analysis of conventional smears. The software was developed using the Python programming language and open source libraries. It was standardized with images from the Bethesda Interobserver Reproducibility Project. One hundred and thirty images from smears which were reported Negative for Intraepithelial Lesion or Malignancy (NILM), and 45 images where some abnormality had been reported, were collected from the archives of the hospital. The software was then tested on the images. The software was able to segregate images based on overall nuclear:cytoplasmic ratio, coefficient of variation (CV) in nuclear size, nuclear membrane irregularity, and clustering. 68.88% of abnormal images were flagged by the software, as well as 19.23% of NILM images. The major difficulties faced were the segmentation of overlapping cell clusters and the separation of neutrophils. The software shows potential as a screening tool for conventional cervical smears; however, further refinement of the technique is required.
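
    Features like the nuclear:cytoplasmic ratio and the CV of nuclear size are straightforward to compute once nuclei and cytoplasm have been segmented into binary masks. A minimal sketch with scikit-image, assuming the segmentation step has already produced the masks (which, as the authors note, is the hard part); the flagging thresholds are placeholders, not the study's values.

```python
import numpy as np
from skimage.measure import label, regionprops

def smear_features(nuclei_mask, cytoplasm_mask):
    """Morphometric features from binary masks of one smear image.

    Returns the overall nuclear:cytoplasmic area ratio and the
    coefficient of variation (CV) of individual nuclear areas.
    """
    areas = np.array([r.area for r in regionprops(label(nuclei_mask))])
    nc_ratio = nuclei_mask.sum() / max(cytoplasm_mask.sum(), 1)
    cv_nuclear = areas.std() / areas.mean() if areas.size else 0.0
    return nc_ratio, cv_nuclear

# Hypothetical masks and an illustrative flagging rule.
nuclei = np.zeros((200, 200), bool)
nuclei[50:70, 50:70] = True
cyto = np.zeros((200, 200), bool)
cyto[30:170, 30:170] = True
nc, cv = smear_features(nuclei, cyto)
flagged = nc > 0.2 or cv > 0.3   # thresholds are placeholders
print(nc, cv, flagged)
```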

  7. A general framework to learn surrogate relevance criterion for atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-09-01

    Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image based surrogate relevance criteria to better mimic the behaviors of segmentation based oracle geometric relevance. The validity of its general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors and large for irrelevant remotes to the given targets. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion set, compared to benchmark surrogates.

  8. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.

  9. Technique for writing of fiber Bragg gratings over or near preliminary formed macro-structure defects in silica optical fibers

    NASA Astrophysics Data System (ADS)

    Evtushenko, Alexander S.; Faskhutdinov, Lenar M.; Kafarova, Anastasia M.; Kazakov, Vadim S.; Kuznetzov, Artem A.; Minaeva, Alina Yu.; Sevruk, Nikita L.; Nureev, Ilnur I.; Vasilets, Alexander A.; Andreev, Vladimir A.; Morozov, Oleg G.; Burdin, Vladimir A.; Bourdine, Anton V.

    2017-04-01

    This work presents a method for writing precision macro-structure defects ("tapers" and "up-tapers") in conventional silica telecommunication multimode optical fibers using a commercially available field fusion splicer with modified software settings, followed by the writing of fiber Bragg gratings over or near them. We developed a technique for estimating the macro-defect geometry parameters via analysis of the photo-image produced after defect writing and displayed on the fusion splicer screen. Results are presented on the dependence of defect geometry on the fusion current and fusion time values re-set in the splicer program, which provided the ability to choose their best combination. Experimental statistical studies of "taper" and "up-taper" diameter stability, as well as their insertion loss values, during writing under fixed corrected splicer program parameters were also performed. We developed a technique for FBG writing over or near a macro-structure defect. Some results of spectral response measurements for short-length samples of multimode optical fiber with fiber Bragg gratings written over and near macro-defects prepared using the proposed technique are presented.

  10. A fast and automatic fusion algorithm for unregistered multi-exposure image sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Yu, Feihong

    2014-09-01

    The human visual system (HVS) can visualize all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than the dynamic range of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile device applications. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor selected in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
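
    The registration front end described here (ORB keypoints, descriptor matching, RANSAC outlier rejection) maps directly onto OpenCV calls. A minimal sketch that aligns one exposure to a reference frame; the SWT fusion stage is omitted, and the file names are placeholders.

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, mov_gray):
    """Homography-align one exposure to the reference using ORB + RANSAC."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(mov_gray, None)
    # Hamming distance suits ORB's binary descriptors; cross-checking
    # prunes asymmetric matches before RANSAC.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_gray.shape
    return cv2.warpPerspective(mov_gray, H, (w, h))

# Placeholder file names for two exposures of the same scene.
ref = cv2.imread("exposure_mid.png", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("exposure_long.png", cv2.IMREAD_GRAYSCALE)
aligned = align_to_reference(ref, mov)
```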

  11. Comparative assessment of methods for the fusion transcripts detection from RNA-Seq data

    PubMed Central

    Kumar, Shailesh; Vo, Angie Duy; Qin, Fujun; Li, Hui

    2016-01-01

    RNA-Seq made possible the global identification of fusion transcripts, i.e. "chimeric RNAs". Even though various software packages have been developed to serve this purpose, they behave differently on different datasets provided by different developers. It is important for both users and developers to have an unbiased assessment of the performance of existing fusion detection tools. Toward this goal, we compared the performance of 12 well-known fusion detection software packages. We evaluated the sensitivity, false discovery rate, computing time, and memory usage of these tools on four different datasets (positive, negative, mixed, and test). We conclude that some tools are better than others in terms of sensitivity, positive prediction value, time consumption and memory usage. We also observed small overlaps between the fusions detected by different tools in the real dataset (test dataset). This could be due to false discoveries by the various tools, but could also be because none of the tools is inclusive. We have found that the performance of the tools depends on the quality, read length, and number of reads of the RNA-Seq data. We recommend that users choose the proper tools for their purpose based on the properties of their RNA-Seq data. PMID:26862001

  12. [Fusion of MRI, fMRI and intraoperative MRI data. Methods and clinical significance exemplified by neurosurgical interventions].

    PubMed

    Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T

    2001-11-01

    The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and a higher image quality as compared to the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between both navigation scenes. The multimodal image fusion allowed refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.

  13. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the input visual and infrared image gray values and a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image, and has the features of enhancing color contrast and highlighting infrared-bright objects when compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to implement for real-time processing, so it is quite suitable for nighttime imaging apparatus.

  14. [Three-dimensional tooth model reconstruction based on fusion of dental computed tomography images and laser-scanned images].

    PubMed

    Zhang, Dongxia; Gan, Yangzhou; Xiong, Jing; Xia, Zeyang

    2017-02-01

    A complete three-dimensional (3D) tooth model provides essential information to assist orthodontists in diagnosis and treatment planning. Currently, 3D tooth models are mainly obtained by segmentation and reconstruction from dental computed tomography (CT) images. However, the accuracy of 3D tooth models reconstructed from dental CT images is low and not applicable for invisalign design. Another serious problem also occurs: repeated dental CT scans during different intervals of orthodontic treatment often expose patients to radiation. Hence, this paper proposed a method to reconstruct the tooth model based on fusion of dental CT images and laser-scanned images. A complete 3D tooth model was reconstructed through registration and fusion between the root reconstructed from dental CT images and the crown reconstructed from laser-scanned images. The crown of the complete 3D tooth model reconstructed with the proposed method has higher accuracy. Moreover, in order to reconstruct a complete 3D tooth model at each orthodontic treatment interval, only one pre-treatment CT scan is needed, and during the orthodontic treatment process only laser scans are required. Therefore, radiation to the patients can be reduced significantly.

  15. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    PubMed

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while keeping fusion artifacts negligibly visible. The method starts by identifying the best-focused image of the sequence; then it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal step. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the values of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
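
    A minimal sketch of the region-oriented pipeline, with two stand-ins for steps the paper specifies differently: SLIC superpixels replace the mean-shift segmentation, the mean squared Laplacian response serves as the focus measure, and the artifact removal step is omitted.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage.segmentation import slic

def fuse_multifocus(stack):
    """Fuse a list of grayscale frames, each focused at a different depth."""
    focus = [laplace(img.astype(float)) ** 2 for img in stack]  # focus energy maps
    best = int(np.argmax([f.mean() for f in focus]))  # best-focused frame overall
    labels = slic(stack[best], n_segments=200, channel_axis=None)  # segment it
    fused = np.zeros_like(stack[best], dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        # Copy each region from the frame that is sharpest inside that region.
        k = int(np.argmax([f[mask].mean() for f in focus]))
        fused[mask] = stack[k][mask]
    return fused

stack = [np.random.rand(64, 64) for _ in range(3)]  # stand-in focus sequence
print(fuse_multifocus(stack).shape)  # (64, 64)
```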

  16. Usability evaluation of cloud-based mapping tools for the display of very large datasets

    NASA Astrophysics Data System (ADS)

    Stotz, Nicole Marie

    The elasticity and on-demand nature of cloud services have made it easier to create web maps. Users need only a web browser and an Internet connection to use cloud-based web maps, eliminating the need for specialized software. To encourage a wide variety of users, a map must be well designed; usability is a central concept in web map design. Fusion Tables, a product from Google, is one example of the newer cloud-based distributed GIS services. It allows easy spatial data manipulation and visualization within the Google Maps framework. ESRI has also introduced a cloud-based version of its software, called ArcGIS Online, built on Amazon's EC2 cloud. Using a user-centered design framework, two prototype maps were created with data from the San Diego East County Economic Development Council: one built on Fusion Tables and the other on ESRI's ArcGIS Online. A usability analysis was conducted to compare the two prototypes in terms of design and functionality. Load tests were also run, and performance metrics were gathered for both prototypes. The usability analysis was completed by 25 geography students and consisted of timed tasks and questions on map design and functionality. Survey participants completed the timed tasks more quickly with the Fusion Tables prototype than with the ArcGIS Online prototype. While responses were generally positive toward the design and functionality of both prototypes, the Fusion Tables prototype was preferred overall. For the load tests, the dataset was broken into 22 groups for a total of 44 tests. While the Fusion Tables prototype performed more efficiently than the ArcGIS Online prototype, the differences were almost unnoticeable. A SWOT analysis was conducted for each prototype. The results of this research point to the Fusion Tables prototype as the stronger option. A redesign of this prototype would incorporate design suggestions from the usability survey, although some functionality would need to be dropped. Fusion Tables is a free product and would therefore be the best option if cost is an issue, but the map may not be supported in the future.

  17. MR and CT image fusion for postimplant analysis in permanent prostate seed implants.

    PubMed

    Polo, Alfredo; Cattani, Federica; Vavassori, Andrea; Origgi, Daniela; Villa, Gaetano; Marsiglia, Hugo; Bellomi, Massimo; Tosi, Giampiero; De Cobelli, Ottavio; Orecchia, Roberto

    2004-12-01

    To compare the outcomes of two different image-based postimplant dosimetry methods in permanent seed implantation. Between October 1999 and October 2002, 150 patients with low-risk prostate carcinoma were treated with (125)I and (103)Pd in our institution. A CT-MRI image fusion protocol was used in 21 consecutive patients treated with exclusive brachytherapy. The accuracy and reproducibility of the method were calculated, and the CT-based dosimetry was then compared with the CT-MRI-based dosimetry using the dose-volume histogram (DVH)-related parameters recommended by the American Brachytherapy Society and the American Association of Physicists in Medicine. Our method for CT-MRI image fusion was accurate and reproducible (median shift <1 mm). Differences in prostate volume were found depending on the image modality used. Quality assurance DVH-related parameters strongly depended on the image modality (CT vs. CT-MRI): V(100) = 82% vs. 88%, p < 0.05; D(90) = 96% vs. 115%, p < 0.05. These results depend on the institutional implant technique and reflect the importance of lowering inter- and intraobserver discrepancies when outlining the prostate and organs at risk for postimplant dosimetry. CT-MRI fused images allow accurate determination of prostate size, significantly improving the dosimetric evaluation based on DVH analysis. This provides a consistent method to judge the quality of a prostate seed implant.

  18. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  19. Lesion classification using clinical and visual data fusion by multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf

    2014-03-01

    To overcome operator dependency and to increase diagnostic accuracy in breast ultrasound (US), much effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited, since they rely on correct automatic lesion detection and localization and on the robustness of features computed from the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images, and construct a textual descriptor of each patient by extracting relevant keywords from the patient's clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion, namely early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, a textual description of complaints and symptoms filled in by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data are very sparse and noisy, their MKL-based fusion with visual features yields a significant improvement in classification accuracy compared with a classifier based on image features alone.
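
    A minimal sketch of intermediate (kernel-level) fusion with scikit-learn: a fixed convex combination of an image-feature kernel and a text-feature kernel stands in for the learned MKL weights, and all features and labels are synthetic.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_img = rng.normal(size=(100, 20))   # stand-in visual features
X_txt = rng.normal(size=(100, 50))   # stand-in sparse textual features
y = rng.integers(0, 2, size=100)     # stand-in benign/malignant labels

w = 0.6  # fixed weight standing in for the learned MKL kernel weights
K = w * rbf_kernel(X_img) + (1 - w) * rbf_kernel(X_txt)  # fused kernel
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))  # training-set score, for illustration only
```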

  20. The use of 3D image fusion for percutaneous transluminal angioplasty and stenting of iliac artery obstructions: validation of the technique and systematic review of literature.

    PubMed

    Goudeketting, Seline R; Heinen, Stefan G; van den Heuvel, Daniel A; van Strijen, Marco J; de Haan, Michiel W; Slump, Cornelis H; de Vries, Jean-Paul P

    2018-02-01

    The effect of the insertion of guidewires and catheters on the fusion accuracy of the three-dimensional (3D) image fusion technique during iliac percutaneous transluminal angioplasty (PTA) procedures has not yet been investigated. Technical validation of the 3D fusion technique was performed in 11 patients with common and/or external iliac artery lesions. A preprocedural contrast-enhanced magnetic resonance angiogram (CE-MRA) was segmented and manually registered to a cone-beam computed tomography image created at the beginning of the procedure for each patient. The treating physician visually scored the fusion accuracy (i.e., accurate [<2 mm], mismatch [2-5 mm], or inaccurate [>5 mm]) of the entire vasculature of the overlay with respect to the digital subtraction angiography (DSA), immediately after the first DSA was obtained. Contours of the vasculature on the fusion images and DSAs were drawn after the procedure, and the cranial-caudal, lateral-medial, and absolute displacements between the vessel centerlines were calculated. To determine the influence of the catheters, the displacements of the catheterized iliac trajectories were compared with those of the noncatheterized trajectories. Electronic databases were systematically searched for literature published between January 2010 and August 2017. The mean registration error for all iliac trajectories (N.=20) was small (4.0±2.5 mm). No significant difference in fusion displacement was observed between catheterized (N.=11) and noncatheterized (N.=9) iliac arteries. The systematic literature search yielded 2 manuscripts with a total of 22 patients; their methodological quality was poor (≤11 MINORS score), mainly due to the lack of a control group. Accurate image fusion based on preprocedural CE-MRA is possible and could potentially be of help in iliac PTA procedures. The flexible guidewires and angiographic catheters routinely used during endovascular procedures of the iliac arteries did not cause displacement large enough to influence the image fusion. The current literature on 3D image fusion in iliac PTA procedures is of limited methodological quality.

  1. Formulation of image fusion as a constrained least squares optimization problem

    PubMed Central

    Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge

    2017-01-01

    Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available, robust, and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
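
    The abstract states the formulation but not the objective function. As an illustration of why a pixel-level convex formulation is embarrassingly parallel, here is a toy least-squares fusion with a closed-form per-pixel minimizer; it is not the paper's actual objective.

```python
import numpy as np

def fuse_ls(mono, color, lam=0.25):
    """Toy pixel-level least-squares fusion (illustrative objective).

    For each pixel, minimize (x - mono)^2 + lam * (x - lum)^2 over the fused
    luminance x, where lum is the luminance of the (already upsampled) color
    image; then reattach the low-resolution chroma. The minimizer is a
    weighted average, so every pixel is solved independently in closed form.
    """
    lum = color.mean(axis=2)                    # crude luminance
    fused_lum = (mono + lam * lum) / (1 + lam)  # per-pixel convex LS solution
    chroma = color - lum[..., None]             # chroma residual
    return np.clip(fused_lum[..., None] + chroma, 0.0, 1.0)

mono = np.random.rand(64, 64)      # stand-in high-resolution monochrome
color = np.random.rand(64, 64, 3)  # stand-in color image on the same grid
print(fuse_ls(mono, color).shape)  # (64, 64, 3)
```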

  2. Spatial resolution enhancement of satellite image data using fusion approach

    NASA Astrophysics Data System (ADS)

    Lestiana, H.; Sukristiyanti

    2018-02-01

    Object identification using remote sensing data is problematic when the spatial resolution does not match the object. The fusion approach is one method to solve this problem: it improves object recognition and increases object information by combining data from multiple sensors. Fused images can be used to estimate environmental components that need to be monitored from multiple views, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. With fusion, visible objects on land are easily recognized, and the variety of object information on land has increased the range of environmental components that can be estimated. The difficulty of recognizing invisible objects such as Submarine Groundwater Discharge (SGD), especially in tropical areas, might be reduced by the fusion method. The small variation of sea surface temperature around such objects remains a challenge to be solved.

  3. [Magnetic resonance imaging in facial injuries and digital fusion CT/MRI].

    PubMed

    Kozakiewicz, Marcin; Olszycki, Marek; Arkuszewski, Piotr; Stefańczyk, Ludomir

    2006-01-01

    Magnetic resonance images (MRI) and their digital fusion with computed tomography (CT) data in patients with facial injuries are presented in this study. MR imaging of 12 posttraumatic patients was performed in the same planes as their previous CT scans. The evaluation focused on the quality of facial soft tissue depiction, which was unsatisfactory in CT. Using our in-house "Dental Studio" program, a digital fusion of the two modalities was performed. Pathologic dislocations and injuries of the facial soft tissues are visualized better in MRI than in CT examinations. In particular, MRI properly reveals disturbances in intraorbital soft structures. MRI-based assessment is valuable in patients with facial soft tissue injuries, especially in cases of orbital/sinus herniation. Fused CT/MRI scans allow simultaneous evaluation of bone structures and soft tissues in the same region.

  4. Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection

    PubMed Central

    Wei, Pan; Anderson, Derek T.

    2018-01-01

    A significant challenge in object detection is the accurate identification of an object’s position in image space. One algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameter settings can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online (rather than training-only) image augmentation strategy. Experiments comparing the results with and without fusion are presented. We demonstrate that the augmented and fused combination yields the best results, with respect to higher accuracy rates and reduced outlier influence. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications. PMID:29562609

  5. Magnetic Resonance and Ultrasound Image Fusion Supported Transperineal Prostate Biopsy Using the Ginsburg Protocol: Technique, Learning Points, and Biopsy Results.

    PubMed

    Hansen, Nienke; Patruno, Giulio; Wadhwa, Karan; Gaziev, Gabriele; Miano, Roberto; Barrett, Tristan; Gnanapragasam, Vincent; Doble, Andrew; Warren, Anne; Bratt, Ola; Kastner, Christof

    2016-08-01

    Prostate biopsy supported by transperineal image fusion has recently been developed as a new method to improve the accuracy of prostate cancer detection. To describe the Ginsburg protocol for transperineal prostate biopsy supported by multiparametric magnetic resonance imaging (mpMRI) and transrectal ultrasound (TRUS) image fusion, provide learning points for its application, and report biopsy results. The article is supplemented by a Surgery in Motion video. This single-centre retrospective outcome study included 534 patients from March 2012 to October 2015. A total of 107 had no previous prostate biopsy, 295 had benign TRUS-guided biopsies, and 159 were on active surveillance for low-risk cancer. Suspicion of cancer on mpMRI was reported on a Likert scale from 1 (no suspicion) to 5 (cancer highly likely). Transperineal biopsies were obtained under general anaesthesia using BiopSee fusion software (Medcom, Darmstadt, Germany). All patients had systematic biopsies, two cores from each of 12 anatomic sectors. Likert 3-5 lesions were targeted with a further two cores per lesion. Any cancer and Gleason score 7-10 cancer on biopsy were noted. Descriptive statistics and positive predictive values (PPVs) and negative predictive values (NPVs) were calculated. The detection rate of Gleason score 7-10 cancer was similar across clinical groups. Likert scale 3-5 MRI lesions were reported in 378 (71%) of the patients. Cancer was detected in 249 (66%) and Gleason score 7-10 cancer in 157 (42%) of these patients. The PPV for detecting Gleason score 7-10 cancer was 0.15 for Likert score 3, 0.43 for score 4, and 0.63 for score 5. The NPV of Likert 1-2 findings was 0.87 for Gleason score 7-10 and 0.97 for Gleason score ≥4+3=7 cancer. Limitations include the lack of data on complications. Transperineal prostate biopsy supported by MRI/TRUS image fusion using the Ginsburg protocol yielded high detection rates of Gleason score 7-10 cancer. Because the NPV for excluding Gleason score 7-10 cancer was very high, prostate biopsies may not be needed for all men with elevated prostate-specific antigen values and nonsuspicious mpMRI. We present our technique to sample (biopsy) the prostate via the transperineal route (through the skin between the scrotum and the anus) to detect prostate cancer, using a fusion of magnetic resonance and ultrasound images to guide the sampling. Copyright © 2016 European Association of Urology. All rights reserved.

  6. Cognitive versus Software-Assisted Registration: Development of a New Nomogram Predicting Prostate Cancer at MRI-Targeted Biopsies.

    PubMed

    Kaufmann, Sascha; Russo, Giorgio I; Thaiss, Wolfgang; Notohamiprodjo, Mike; Bamberg, Fabian; Bedke, Jens; Morgia, Giuseppe; Nikolaou, Konstantin; Stenzl, Arnulf; Kruck, Stephan

    2018-04-03

    Multiparametric magnetic resonance imaging (mpMRI) is gaining acceptance to guide targeted biopsy (TB) in prostate cancer (PC) diagnosis. We aimed to compare the detection rates of software-assisted fusion TB (SA-TB) and cognitive fusion TB (COG-TB) for PC and to evaluate potential clinical features for detecting PC and clinically significant PC (csPC) at TB. This was a retrospective cohort study of patients with rising and/or persistently elevated prostate-specific antigen (PSA) undergoing mpMRI followed by either transperineal SA-TB or transrectal COG-TB. A matched-pair analysis compared SA-TB and COG-TB, with no differences in clinical or radiological characteristics; differences in the detection of PC/csPC between the groups were analyzed. A multivariable logistic regression model predicting PC at TB was fitted and evaluated using the receiver operating characteristic-derived area under the curve, a goodness-of-fit test, and decision-curve analyses. One hundred ninety-one and 87 patients underwent SA-TB or COG-TB, respectively. The multivariate logistic analysis showed that SA-TB was associated with overall PC (odds ratio [OR], 5.70; P < .01) and PC at TB (OR, 3.00; P < .01) but not with overall csPC (P = .40) or csPC at TB (P = .40). A nomogram predicting PC at TB was constructed using the Prostate Imaging Reporting and Data System version 2.0, age, PSA density, and biopsy technique, showing improved clinical risk prediction against a threshold probability of 10%, with a c-index of 0.83. In patients with suspected PC, software-assisted biopsy detects most cancers and outperforms the cognitive approach in targeting magnetic resonance imaging-visible lesions. Furthermore, we introduce a prebiopsy nomogram for the probability of PC at TB. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Multi-focus image fusion algorithm using NSCT and MPCNN

    NASA Astrophysics Data System (ADS)

    Liu, Kang; Wang, Lianli

    2018-04-01

    Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse-coupled neural network (MPCNN), this paper proposes an effective image fusion method. First, the source images are decomposed into low-frequency and high-frequency components using NSCT, and the low-frequency components are processed with regional statistical fusion rules. For the high-frequency components, the spatial frequency (SF) is calculated and fed into the MPCNN model, and the relevant coefficients are selected according to the fire-mapping image of the MPCNN. Finally, the fused image is reconstructed by the inverse transformation of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the proposed method achieves an improvement in both human visual perception and objective evaluation, showing that it is effective, practical, and performs well.

  8. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that uses an HR color image as a guide and an LR depth image as input. We use a fusion of a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images, both numerically and visually.
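
    The abstract does not give the fusion filter itself, so as a sketch of the guidance idea, here is a minimal grayscale guided filter (after He et al.) applied to a naively upsampled depth map; the edge-based joint bilateral component of the paper's filter is omitted, and all inputs are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(guide, src, radius=8, eps=1e-4):
    """Minimal grayscale guided filter: a locally linear model of src in guide."""
    mean = lambda x: uniform_filter(x, 2 * radius + 1)
    m_i, m_p = mean(guide), mean(src)
    a = (mean(guide * src) - m_i * m_p) / (mean(guide * guide) - m_i * m_i + eps)
    b = m_p - a * m_i
    return mean(a) * guide + mean(b)   # smoothed coefficients, edges from guide

guide = np.random.rand(128, 128)       # stand-in HR gray version of the color guide
depth_lr = np.random.rand(32, 32)      # stand-in LR depth map
depth_up = zoom(depth_lr, 4, order=1)  # naive bilinear upsampling to the HR grid
depth_hr = guided_filter(guide, depth_up)  # depth edges re-aligned to the guide
print(depth_hr.shape)  # (128, 128)
```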

  9. Face-iris multimodal biometric scheme based on feature level fusion

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei

    2015-11-01

    Unlike score-level fusion, feature-level fusion demands that all the features extracted from unimodal traits have high distinguishability as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score-level fusion, whereas few studies investigate feature-level fusion. We propose a face-iris recognition method based on feature-level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensionality and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machines (FRSPS), feature-level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.

  10. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.

    PubMed

    Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi

    2017-08-01

    Medical image fusion combines two or more medical images, such as a magnetic resonance image (MRI) and a positron emission tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating disease in the least possible time. We used MRI and PET images as inputs and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the intensity-hue-saturation (IHS) method. To evaluate performance we applied three common metrics: the average gradient (AGk), which assesses spatial features; the discrepancy (Dk), which assesses spectral features; and the overall performance (O.P), which verifies the adequacy of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and the spectral features of the input images, the numerical results of these metrics, together with the simulation results, indicate that the proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
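
    A minimal sketch of the IHS substitution step only, using HSV as a practical stand-in for the IHS transform; the paper's 2-D Hilbert-transform component is omitted, and the inputs are synthetic.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_fuse(pet_rgb, mri_gray):
    """Keep the PET hue/saturation, substitute the MRI gray as intensity."""
    hsv = rgb2hsv(pet_rgb)      # HSV stands in for the IHS transform here
    hsv[..., 2] = mri_gray      # replace the intensity channel with MRI detail
    return hsv2rgb(hsv)

pet = np.random.rand(64, 64, 3)  # stand-in pseudo-color PET
mri = np.random.rand(64, 64)     # stand-in MRI gray on the same grid, in [0, 1]
print(ihs_fuse(pet, mri).shape)  # (64, 64, 3)
```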

  11. Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.

    PubMed

    Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan

    2018-06-15

    Currently, visual sensors are becoming increasingly affordable and widespread, accelerating the growth of image data. Image retrieval has attracted increasing interest due to space exploration, industrial, and biomedical applications. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor jointly comprises two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, acquired by calculating the probability of pairs of motif patterns. To evaluate its performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor achieves comparable performance without requiring any training.

  12. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    Image fusion consolidates data and information from multiple images of the same scene into a single image. Each source image may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the discrete cosine transform (DCT) to join the source images into a single compact image containing a more accurate depiction of the scene than any individual source image. In addition, the fused image retains the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB color input image is split into its R, G, and B channels for each source image; (2) the DCT is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared by variance value, and the block with the maximum variance is selected as the block in the new image, repeating this process for all channels; (5) the inverse DCT is applied to each fused channel to convert coefficients back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts that reduce the quality of the fused image. The approach is evaluated using three measurement units: the average Q(abf), the standard deviation, and the peak signal-to-noise ratio. Experimental results show that the proposed technique compares favorably with older techniques. © 2016 Wiley Periodicals, Inc.
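
    The five steps above translate almost directly into code. A minimal sketch with scipy, assuming the image sides are multiples of 8 and both sources are float RGB arrays on the same grid:

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_channel(a, b, bs=8):
    """Steps 2-5 for one channel: per 8x8 block, keep the source block whose
    DCT coefficients have the larger variance, then invert the DCT."""
    out = np.empty_like(a, dtype=float)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            A = dctn(a[i:i + bs, j:j + bs], norm="ortho")
            B = dctn(b[i:i + bs, j:j + bs], norm="ortho")
            out[i:i + bs, j:j + bs] = idctn(A if A.var() > B.var() else B, norm="ortho")
    return out

def fuse_rgb(img1, img2):
    # Step 1: split into R, G, B channels; fuse each; recombine (step 5).
    return np.dstack([fuse_channel(img1[..., c], img2[..., c]) for c in range(3)])

a, b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)  # stand-in sources
print(fuse_rgb(a, b).shape)  # (64, 64, 3)
```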

  13. Research on multi-source image fusion technology in haze environment

    NASA Astrophysics Data System (ADS)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, a visible image collected by a single sensor can express the shape, color, and texture details of a target very well, but because of the haze its sharpness is low and parts of the target are lost. An infrared image collected by a single sensor, owing to its sensitivity to thermal radiation and strong penetration ability, can clearly express the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved dark channel prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight occluded infrared targets for target recognition.
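
    The abstract names an "improved" dark channel prior algorithm without detailing the improvement, so the following is a sketch of the baseline dark channel prior dehazing step (after He et al.), with illustrative default parameters and synthetic input.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Baseline dark channel prior dehazing for a float RGB image in [0, 1]."""
    dark = minimum_filter(img.min(axis=2), patch)  # dark channel
    # Atmospheric light: mean color over the brightest 0.1% dark-channel pixels.
    flat = dark.ravel().argsort()[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[flat].mean(axis=0)
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), patch)  # transmission
    return np.clip((img - A) / np.maximum(t, t0)[..., None] + A, 0.0, 1.0)

hazy = np.random.rand(64, 64, 3)  # stand-in hazy visible frame
print(dehaze(hazy).shape)         # (64, 64, 3)
```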

  14. Present status and trends of image fusion

    NASA Astrophysics Data System (ADS)

    Xiang, Dachao; Fu, Sheng; Cai, Yiheng

    2009-10-01

    Image fusion extracts information from multiple images that is more accurate and reliable than information from a single image: since the various images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of data fusion technology and is widely used in computer vision, remote sensing, robot vision, medical image processing, and military applications. This paper presents the contents and research methods of image fusion, surveys its current status domestically and abroad, and analyzes its development trends.

  15. [Possibilities of sonographic image fusion: Current developments].

    PubMed

    Jung, E M; Clevert, D-A

    2015-11-01

    For diagnostic and interventional procedures, ultrasound (US) image fusion can be used as a complementary imaging technique. Image fusion has the advantage of real-time imaging and can be combined with other cross-sectional imaging techniques. With the introduction of US contrast agents, sonography and image fusion have gained more importance in the detection and characterization of liver lesions. Fusion of US images with computed tomography (CT) or magnetic resonance imaging (MRI) facilitates diagnostics and postinterventional therapy control. In addition to the primary application of image fusion in the diagnosis and treatment of liver lesions, there are further useful indications for contrast-enhanced US (CEUS) in routine clinical diagnostic procedures, such as intraoperative US (IOUS), vascular imaging, and the diagnostics of other organs, such as the kidneys and prostate gland.

  16. Ultrahigh field magnetic resonance and colour Doppler real-time fusion imaging of the orbit--a hybrid tool for assessment of choroidal melanoma.

    PubMed

    Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver

    2014-05-01

    A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.

  17. The applicability of holography in forensic identification: a fusion of the traditional optical technique and digital technique.

    PubMed

    Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro

    2005-03-01

    In this study, the applicability of holography to the 3-dimensional recording of forensic objects such as skulls and mandibles, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is observed visually on the other side of the holographic plate and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, where cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using image-processing software, the average deviations of the holographic images from the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method for forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on the LCD monitor. In the video system, by contrast, the holographic image captured by a CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images was smaller with the video system than with the 2D-3D system. Holography performed comparably to the computer graphics system; a fusion with digital techniques would expand the utility of holography in superimposition.

  18. Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    NASA Astrophysics Data System (ADS)

    David, S.; Visvikis, D.; Roux, C.; Hatt, M.

    2011-09-01

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion; on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.

  19. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation results to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for definition of an interesting penalized maximum probabilistic rand estimator with which the fusion of simple, quickly estimated, segmentation results appears as an interesting alternative to complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

  20. Thermal Physical Property-Based Fusion of Geostationary Meteorological Satellite Visible and Infrared Channel Images

    PubMed Central

    Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei

    2014-01-01

    Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance, and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using regular multi-resolution fusion approach, such as the multiwavelet analysis. This step significantly increases the visual details in the IR image, but fake thermal information may be included. Next, the Stefan-Boltzmann Law is applied to correct the distortion, to retain or recover the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties. PMID:24919017
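
    A toy sketch of the two-step idea, assuming the IR image is already calibrated to brightness temperature: inject VIS structure in the radiance domain, then invert the Stefan-Boltzmann relation so the result stays interpretable as temperature. The additive detail injection stands in for the paper's multiwavelet fusion step, and alpha is an illustrative strength parameter.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiance_preserving_fuse(ir_temp_k, vis_detail, alpha=0.1):
    """ir_temp_k: brightness temperature (K); vis_detail: zero-mean high-pass
    VIS structure on the same grid; alpha: illustrative injection strength."""
    radiance = SIGMA * ir_temp_k ** 4              # step 1 input: T -> radiance
    fused = radiance * (1.0 + alpha * vis_detail)  # inject high-res VIS structure
    return (fused / SIGMA) ** 0.25                 # step 2: back to temperature

temps = 280.0 + 20.0 * np.random.rand(64, 64)  # stand-in IR temperatures
detail = np.random.randn(64, 64) * 0.05        # stand-in VIS high-pass detail
print(radiance_preserving_fuse(temps, detail).shape)  # (64, 64)
```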

  2. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means-based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using as input features the local histograms of the class labels previously estimated and associated with each site, for all the initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared with the state-of-the-art segmentation methods recently proposed in the literature.
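
    A compact sketch of the pipeline with scikit-learn and skimage: K-means in three color spaces, local label histograms as per-pixel features, and a final K-means over those histograms. The window size and cluster count are illustrative defaults, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2hsv, rgb2lab
from sklearn.cluster import KMeans

def fuse_segmentations(img, k=5, win=11):
    """img: float RGB in [0, 1]. Returns a fused label map."""
    feats = []
    for space in (img, rgb2hsv(img), rgb2lab(img)):  # same clustering, 3 spaces
        labels = KMeans(n_clusters=k, n_init=3).fit_predict(space.reshape(-1, 3))
        labels = labels.reshape(img.shape[:2])
        # Local label histogram: per-class occupancy in a win x win window.
        feats += [uniform_filter((labels == c).astype(float), win) for c in range(k)]
    feats = np.dstack(feats).reshape(-1, 3 * k)
    return KMeans(n_clusters=k, n_init=3).fit_predict(feats).reshape(img.shape[:2])

img = np.random.rand(48, 48, 3)       # stand-in input image
print(fuse_segmentations(img).shape)  # (48, 48)
```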

  3. Evaluation of bile reflux in HIDA images based on fluid mechanics.

    PubMed

    Lo, Rong-Chin; Huang, Wen-Lin; Fan, Yu-Ming

    2015-05-01

    We propose a new method to help physicians assess, using hepatobiliary iminodiacetic acid (HIDA) scan images, whether there is bile reflux into the stomach. The degree of bile reflux is an important index for the clinical diagnosis of stomach diseases. The proposed method applies image-processing technology combined with a hydrodynamic model to determine the extent of bile reflux, or whether the duodenum is folded above the stomach; the latter condition can make 2D dynamic images suggest bile reflux into the stomach even when endoscopy shows none. In this study, we used optical flow to analyze images from Tc99m-diisopropyl iminodiacetic acid cholescintigraphy (Tc99m-DISIDA) to ascertain the direction and velocity of bile passing through the pylorus. In clinical diagnosis, single photon emission computed tomography (SPECT) is the main tool for functional imaging of hepatobiliary metabolism, while computed tomography (CT) shows anatomical images of the external contours of the stomach, liver, and biliary tree. By fusing the two kinds of medical image functionally, physicians can obtain a more accurate diagnosis. We accordingly reconstructed 3D images from SPECT and CT to help physicians choose which cross sections to fuse in software and to diagnose the extent and quantity of bile reflux more accurately. Copyright © 2015 Elsevier Ltd. All rights reserved.
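
    A minimal sketch of the optical-flow measurement using OpenCV's dense Farneback flow, averaged over a pyloric region of interest. The ROI mask, frame pairing, and flow parameters are assumptions; the paper does not specify its flow implementation beyond "optical flow".

```python
import cv2
import numpy as np

def pyloric_flow(prev_frame, next_frame, roi):
    """prev_frame/next_frame: 8-bit grayscale scintigraphy frames;
    roi: boolean mask over the pylorus (its definition is assumed)."""
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vx, vy = flow[..., 0][roi].mean(), flow[..., 1][roi].mean()
    speed = float(np.hypot(vx, vy))                    # pixels per frame
    direction = float(np.degrees(np.arctan2(vy, vx)))  # degrees from +x axis
    return speed, direction

a = np.random.randint(0, 255, (64, 64), np.uint8)  # stand-in frame t
b = np.random.randint(0, 255, (64, 64), np.uint8)  # stand-in frame t+1
mask = np.zeros((64, 64), bool); mask[28:36, 28:36] = True  # stand-in pyloric ROI
print(pyloric_flow(a, b, mask))
```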

  4. Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.

    PubMed

    Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan

    2015-01-01

    Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.

  5. Object-oriented design of medical imaging software.

    PubMed

    Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R

    1994-01-01

    A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva, as part of a hospital wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of noncomputer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: the Unix X-11/OSF-Motif based workstations, and the Macintosh family.

  6. Image-fusion of MR spectroscopic images for treatment planning of gliomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Jenghwa; Thakur, Sunitha; Perera, Gerard

    2006-01-15

    ¹H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8×8×10 mm³ to 12×12×10 mm³. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the poorer-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that on the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs. The image-fusion protocol was tested on 12 (six high-grade and six low-grade) glioma patients. The average agreement between the MRSI volume position on the screen-dumped MRSI images and that on the merged MRSI images was 0.29 mm with a standard deviation of 0.07 mm. Of all the voxels with Cho/Cr grade one or above, the distribution of Cho/Cr grade was found to correlate with the glioma grade from pathologic findings and is consistent with literature results indicating Cho/Cr elevation as a marker for malignancy. In conclusion, an image-fusion protocol was developed that successfully incorporates MRSI information into the IMRT treatment plan for glioma.
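
    A toy sketch of the merged-MRSI construction: each coarse MRSI voxel's Cho/Cr ratio is mapped to a discrete grade intensity, and the coarse grade map is resampled into the finer FLAIR grid. The grade thresholds and the resampling factor are illustrative assumptions, not the paper's grading system.

```python
import numpy as np
from scipy.ndimage import zoom

def merge_mrsi(cho_cr, thresholds=(1.4, 1.8, 2.2), factor=(8, 8)):
    """cho_cr: 2-D array of per-voxel Cho/Cr ratios for one MRSI slice."""
    grade_map = np.digitize(cho_cr, thresholds).astype(float)  # grade 0..3 per voxel
    # Nearest-neighbour resampling keeps the grades discrete in FLAIR space.
    return zoom(grade_map, factor, order=0)

cho_cr = 1.0 + 2.0 * np.random.rand(12, 12)  # stand-in MRSI slice
print(merge_mrsi(cho_cr).shape)              # (96, 96)
```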

  7. Multi-exposure high dynamic range image synthesis with camera shake correction

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, captured images tend to be over-exposed or under-exposed. As a result, when processing the image for tasks such as crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is otherwise limited. Inevitably, camera shake results in ghosting, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene, and these assumptions limit their application. At present, the widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without ghosting, we introduce an efficient low dynamic range (LDR) image capturing approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce ghost-free HDR images by registering and fusing four multi-exposure images.
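
    A condensed sketch of the pipeline with OpenCV: histogram-equalized ORB matching, RANSAC homography, and Mertens exposure fusion as a stand-in for the paper's own fusion step. All inputs are assumed to be same-size BGR frames with enough texture for at least four good matches.

```python
import cv2
import numpy as np

def register_and_fuse(ref, moving_list):
    """Align each moving exposure to ref via ORB + homography, then fuse."""
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Equalization makes ORB features comparable across different exposures.
    ref_eq = cv2.equalizeHist(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY))
    kp_r, des_r = orb.detectAndCompute(ref_eq, None)
    aligned = [ref]
    for img in moving_list:
        gray = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
        kp, des = orb.detectAndCompute(gray, None)
        matches = sorted(matcher.match(des, des_r), key=lambda m: m.distance)[:200]
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        aligned.append(cv2.warpPerspective(img, H, ref.shape[1::-1]))
    return cv2.createMergeMertens().process(aligned)  # float32 fused result
```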

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohli, K; Liu, F; Krishnan, K

    Purpose: Multi-frequency EIT has been reported to be a potential tool in distinguishing a tissue anomaly from background. In this study, we investigate the feasibility of acquiring functional information by comparing multi-frequency EIT images in reference to the structural information from the CT image through fusion. Methods: EIT data was acquired from a slice of winter melon using sixteen electrodes around the phantom, injecting a current of 0.4mA at 100, 66, 24.8 and 9.9 kHz. Differential EIT images were generated by considering different combinations of pair frequencies, one serving as reference data and the other as test data. The experimentmore » was repeated after creating an anomaly in the form of an off-centered cavity of diameter 4.5 cm inside the melon. All EIT images were reconstructed using Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS) package in 2-D differential imaging mode using one-step Gaussian Newton minimization solver. CT image of the melon was obtained using a Phillips CT Scanner. A segmented binary mask image was generated based on the reference electrode position and the CT image to define the regions of interest. The region selected by the user was fused with the CT image through logical indexing. Results: Differential images based on the reference and test signal frequencies were reconstructed from EIT data. Result illustrated distinct structural inhomogeneity in seeded region compared to fruit flesh. The seeded region was seen as a higherimpedance region if the test frequency was lower than the base frequency in the differential EIT reconstruction. When the test frequency was higher than the base frequency, the signal experienced less electrical impedance in the seeded region during the EIT data acquisition. Conclusion: Frequency-based differential EIT imaging can be explored to provide additional functional information along with structural information from CT for identifying different tissues.« less

  9. SAFE Software and FED Database to Uncover Protein-Protein Interactions using Gene Fusion Analysis.

    PubMed

    Tsagrasoulis, Dimosthenis; Danos, Vasilis; Kissa, Maria; Trimpalis, Philip; Koumandou, V Lila; Karagouni, Amalia D; Tsakalidis, Athanasios; Kossida, Sophia

    2012-01-01

    Domain Fusion Analysis takes advantage of the fact that certain proteins in a given proteome A, are found to have statistically significant similarity with two separate proteins in another proteome B. In other words, the result of a fusion event between two separate proteins in proteome B is a specific full-length protein in proteome A. In such a case, it can be safely concluded that the protein pair has a common biological function or even interacts physically. In this paper, we present the Fusion Events Database (FED), a database for the maintenance and retrieval of fusion data both in prokaryotic and eukaryotic organisms and the Software for the Analysis of Fusion Events (SAFE), a computational platform implemented for the automated detection, filtering and visualization of fusion events (both available at: http://www.bioacademy.gr/bioinformatics/projects/ProteinFusion/index.htm). Finally, we analyze the proteomes of three microorganisms using these tools in order to demonstrate their functionality.

  10. SAFE Software and FED Database to Uncover Protein-Protein Interactions using Gene Fusion Analysis

    PubMed Central

    Tsagrasoulis, Dimosthenis; Danos, Vasilis; Kissa, Maria; Trimpalis, Philip; Koumandou, V. Lila; Karagouni, Amalia D.; Tsakalidis, Athanasios; Kossida, Sophia

    2012-01-01

    Domain Fusion Analysis takes advantage of the fact that certain proteins in a given proteome A are found to have statistically significant similarity with two separate proteins in another proteome B. In other words, the result of a fusion event between two separate proteins in proteome B is a specific full-length protein in proteome A. In such a case, it can be safely concluded that the protein pair has a common biological function or even interacts physically. In this paper, we present the Fusion Events Database (FED), a database for the maintenance and retrieval of fusion data in both prokaryotic and eukaryotic organisms, and the Software for the Analysis of Fusion Events (SAFE), a computational platform implemented for the automated detection, filtering and visualization of fusion events (both available at: http://www.bioacademy.gr/bioinformatics/projects/ProteinFusion/index.htm). Finally, we analyze the proteomes of three microorganisms using these tools in order to demonstrate their functionality. PMID:22267904

  11. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.

  12. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  13. Minimally Invasive Transforaminal Lumbar Interbody Fusion: Meta-analysis of the Fusion Rates. What is the Optimal Graft Material?

    PubMed

    Parajón, Avelino; Alimi, Marjan; Navarro-Ramirez, Rodrigo; Christos, Paul; Torres-Campa, Jose M; Moriguchi, Yu; Lang, Gernot; Härtl, Roger

    2017-12-01

    Minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) is an increasingly popular procedure with several potential advantages over traditional open TLIF. The current study aimed to compare fusion rates of different graft materials used in MIS-TLIF, via meta-analysis of the published literature. A Medline search was performed and a database was created including each patient's type of graft, clinical outcome, fusion rate, fusion assessment modality, and duration of follow-up. Meta-analysis of the fusion rate was performed using StatsDirect software (StatsDirect Ltd, Cheshire, United Kingdom). A total of 1533 patients from 40 series were included. Fusion rates were high, ranging from 91.8% to 99%. The imaging modalities used to assess fusion were computed tomography scans (30%) and X-rays (70%). Comparison of all recombinant human bone morphogenetic protein (rhBMP) series with all non-rhBMP series showed fusion rates of 96.6% and 92.5%, respectively. The lowest fusion rate was seen with isolated use of autologous local bone (91.8%). The highest fusion rate was observed with the combination of autologous local bone with bone extender and rhBMP (99.1%). The highest fusion rate without the use of rhBMP was seen with autologous local bone plus bone extender (93.1%). The reported complication rate ranged from 0% to 35.71%. Clinical improvement was observed in all studies. Fusion rates are generally high with MIS-TLIF regardless of the graft material used. Given the potential complications of iliac bone harvesting and rhBMP, use of other bone graft options for MIS-TLIF is reasonable.

  14. Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Fan, Lei

    Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities within hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely relying on the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. The fusion strategies have wide applications including HSI data processing. With the fast development of multiple remote sensing modalities, e.g., Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
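
    As a concrete illustration of the graph-based framework described above, the sketch below builds a symmetric k-nearest-neighbor affinity graph over feature vectors (e.g., spectra concatenated with spatial or LiDAR features) and forms the graph Laplacian used by many clustering and feature-extraction methods. The Gaussian affinity and parameter choices are illustrative assumptions, not the thesis's specific construction.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph_laplacian(X, k=10, sigma=1.0):
    """Build a symmetric k-NN affinity graph over data points (rows of X)
    and return the affinity matrix W and combinatorial Laplacian L = D - W."""
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros_like(D2)
    for i in range(X.shape[0]):
        nn = np.argsort(D2[i])[1:k + 1]          # skip self at index 0
        W[i, nn] = np.exp(-D2[i, nn] / (2 * sigma**2))
    W = np.maximum(W, W.T)                       # symmetrize connectivity
    L = np.diag(W.sum(axis=1)) - W
    return W, L

# Toy usage: 100 "pixels" with 5 fused features each
X = np.random.default_rng(1).normal(size=(100, 5))
W, L = knn_graph_laplacian(X, k=8)
```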

  15. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept related to combination of sensor and synthetic images in order to enhance situation awareness of a pilot during an aircraft landing. A real-time algorithm for a fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and the synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of edge pixels on different directions of the intensity gradient. The performed experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.

  16. Navigation and Electro-Optic Sensor Integration Technology for Fusion of Imagery and Digital Mapping Products

    DTIC Science & Technology

    1999-08-01

    Electro-Optic Sensor Integration Technology (NEOSIT) software application. The design is highly modular and based on COTS tools to facilitate integration with sensors, navigation and digital data sources already installed on different host

  17. Data acquisition and processing system for the HT-6M tokamak fusion experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Y.T.; Liu, G.C.; Pang, J.Q.

    1987-08-01

    This paper describes a high-speed data acquisition and processing system which has been successfully operated on the HT-6M tokamak fusion experimental device. The system collects, archives and analyzes up to 512 kilobytes of data from each shot of the experiment. A shot lasts 50-150 milliseconds and occurs every 5-10 minutes. The system consists of two PDP-11/24 computer systems. One PDP-11/24 is used for real-time data taking and on-line data analysis. It is based upon five CAMAC crates organized into a parallel branch. Another PDP-11/24 is used for off-line data processing. Both the data acquisition software RSX-DAS and the data processing software RSX-DAP have modular, multi-tasking and concurrent processing features.

  18. Time-resolved wide-field optically sectioned fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dupuis, Guillaume; Benabdallah, Nadia; Chopinaud, Aurélien; Mayet, Céline; Lévêque-Fort, Sandrine

    2013-02-01

    We present the implementation of a fast wide-field optical sectioning technique called HiLo microscopy on a fluorescence lifetime imaging microscope. HiLo microscopy is based on the fusion of two images, one with structured illumination and another with uniform illumination. Optically sectioned images are then digitally generated by a fusion algorithm. HiLo images are comparable in quality with confocal images, but they can be acquired faster and over larger fields of view. We obtain 4D imaging by combining HiLo optical sectioning, time-gated detection, and z-displacement. We characterize the performance of this setup in terms of 3D spatial resolution and time-resolved capabilities in both fixed- and live-cell imaging modes.
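
    The HiLo fusion rule is simple in outline: low spatial frequencies of the sectioned image are recovered from the local contrast of the structured-illumination frame, and high spatial frequencies are taken directly from the uniform-illumination frame. The sketch below is a heavily simplified rendition of that recipe, not the authors' exact algorithm; the contrast estimator and the `sigma` and `eta` parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo_fuse(uniform, structured, sigma=4.0, eta=1.0):
    """Simplified HiLo-style fusion: in-focus low frequencies are weighted
    by the local contrast of the structured-illumination image; high
    frequencies come directly from the uniform-illumination image."""
    u = uniform.astype(float)
    s = structured.astype(float)
    # local contrast of the structured image marks in-focus (sectioned) regions
    ratio = s / (gaussian_filter(s, sigma) + 1e-9)
    contrast = gaussian_filter(np.abs(ratio - 1.0), sigma)
    lo = gaussian_filter(contrast * u, sigma)   # sectioned low-pass component
    hi = u - gaussian_filter(u, sigma)          # high-pass from uniform image
    return hi + eta * lo                        # eta balances the two bands
```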

  19. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    Considering the requirements for realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are briefly introduced, and the main features of the Vega development environment are analyzed. Methods for infrared modeling of targets and backgrounds are presented; the design flow chart of the development process of the real-time IR image generation software is given; and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained, together with the measures taken to ensure the software's real-time performance.

  20. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  1. Improving lung cancer prognosis assessment by incorporating synthetic minority oversampling technique and score fusion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Shiju; Qian, Wei; Guan, Yubao

    2016-06-15

    Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLS patients by integrating oversampling, feature selection, and score fusion techniques and develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which includes CT images, nine clinical and biological (CB) markers, and outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained DFS and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation method, the computed areas under a receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061, when using the QI and CB based classifiers, respectively. By fusion of the scores generated by the two classifiers, AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled RBFN based classifier to yield improved prediction accuracy.
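
    The two ingredients combined here, minority oversampling before classifier training and score-level fusion of two trained classifiers, can be sketched as below. This is a hedged illustration: logistic regression stands in for the paper's RBFN classifiers, the features are random placeholders, and a proper evaluation would apply SMOTE only inside each cross-validation training fold.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Random placeholders for the 35 QI features and 9 CB markers of 94 patients
X_qi, X_cb = rng.normal(size=(94, 35)), rng.normal(size=(94, 9))
y = np.array([0] * 74 + [1] * 20)       # 74 disease-free, 20 recurrences

def fit_on_balanced(X, y):
    """Oversample the minority (recurrence) class with SMOTE, then train a
    classifier (logistic regression stands in for the paper's RBFN)."""
    Xb, yb = SMOTE(random_state=0).fit_resample(X, y)
    return LogisticRegression(max_iter=1000).fit(Xb, yb)

clf_qi, clf_cb = fit_on_balanced(X_qi, y), fit_on_balanced(X_cb, y)

# Score-level fusion: weighted average of the two recurrence probabilities;
# in practice the fusion weight would be tuned inside cross-validation.
w = 0.5
fused_score = (w * clf_qi.predict_proba(X_qi)[:, 1]
               + (1 - w) * clf_cb.predict_proba(X_cb)[:, 1])
```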

  2. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in the initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight map refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
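
    The three-part pipeline above maps directly to a short script: compute per-image weights from the two features, refine them with a guided filter using the source as guidance, and take the normalized weighted sum. The sketch below follows that outline under stated assumptions; the feature measures and filter parameters are illustrative, not the paper's exact choices, and `cv2.ximgproc` requires the opencv-contrib-python package.

```python
import cv2
import numpy as np

def fuse_exposures(images, radius=8, eps=1e-3):
    """Weighted-sum multi-exposure fusion sketch: per-image weights from
    gradient magnitude and well-exposedness, refined with a guided filter."""
    imgs = [im.astype(np.float32) / 255.0 for im in images]  # BGR uint8 in
    weights = []
    for im in imgs:
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        grad = np.abs(cv2.Laplacian(gray, cv2.CV_32F))        # detail measure
        wexp = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness
        w = grad * wexp + 1e-12
        # the source image acts as the guidance image, so the refined
        # weights stay aligned with image structure
        w = cv2.ximgproc.guidedFilter(gray, w, radius, eps)
        weights.append(np.clip(w, 0, None))
    wsum = np.sum(weights, axis=0) + 1e-12
    fused = sum(w[..., None] * im for w, im in zip(weights, imgs))
    return np.clip(fused / wsum[..., None] * 255, 0, 255).astype(np.uint8)
```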

  3. Comparative evaluation of three-dimensional Gd-EOB-DTPA-enhanced MR fusion imaging with CT fusion imaging in the assessment of treatment effect of radiofrequency ablation of hepatocellular carcinoma.

    PubMed

    Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Hori, Masatoshi; Fukuda, Kazuto; Sawai, Yoshiyuki; Kogita, Sachiyo; Fujita, Norihiko; Takehara, Tetsuo; Murakami, Takamichi

    2015-01-01

    To assess the feasibility of fusion of pre- and post-ablation gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-MRI) to evaluate the effects of radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC), compared with similarly fused CT images. This retrospective study included 67 patients with 92 HCCs treated with RFA. Fusion images of pre- and post-RFA dynamic CT, and pre- and post-RFA Gd-EOB-DTPA-MRI were created, using a rigid registration method. The minimal ablative margin measured on fusion imaging was categorized into three groups: (1) tumor protruding outside the ablation zone boundary, (2) ablative margin 0-<5.0 mm beyond the tumor boundary, and (3) ablative margin ≥5.0 mm beyond the tumor boundary. The categorization of minimal ablative margins was compared between CT and MR fusion images. In 57 (62.0%) HCCs, treatment evaluation was possible both on CT and MR fusion images, and the overall agreement between them for the categorization of minimal ablative margin was good (κ coefficient = 0.676, P < 0.01). MR fusion imaging enabled treatment evaluation in a significantly larger number of HCCs than CT fusion imaging (86/92 [93.5%] vs. 62/92 [67.4%], P < 0.05). Fusion of pre- and post-ablation Gd-EOB-DTPA-MRI is feasible for treatment evaluation after RFA. It may enable accurate treatment evaluation in cases where CT fusion imaging is not helpful.

  4. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAV) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  5. Computed tomography angiography-fluoroscopy image fusion allows visceral vessel cannulation without angiography during fenestrated endovascular aneurysm repair.

    PubMed

    Schwein, Adeline; Chinnadurai, Ponraj; Behler, Greg; Lumsden, Alan B; Bismuth, Jean; Bechara, Carlos F

    2018-07-01

    Fenestrated endovascular aneurysm repair (FEVAR) is an evolving technique to treat juxtarenal abdominal aortic aneurysms (AAAs). Catheterization of visceral and renal vessels after the deployment of the fenestrated main body device is often challenging, usually requiring additional fluoroscopy and multiple digital subtraction angiograms. The aim of this study was to assess the clinical utility and accuracy of a computed tomography angiography (CTA)-fluoroscopy image fusion technique in guiding visceral vessel cannulation during FEVAR. Between August 2014 and September 2016, all consecutive patients who underwent FEVAR at our institution using image fusion guidance were included. Preoperative CTA images were fused with intraoperative fluoroscopy after coregistering with non-contrast-enhanced cone beam computed tomography (syngo 3D3D image fusion; Siemens Healthcare, Forchheim, Germany). The ostia of the visceral vessels were electronically marked on CTA images (syngo iGuide Toolbox) and overlaid on live fluoroscopy to guide vessel cannulation after fenestrated device deployment. Clinical utility of image fusion was evaluated by assessing the number of dedicated angiograms required for each visceral or renal vessel cannulation and the use of optimized C-arm angulation. Accuracy of image fusion was evaluated from video recordings by three raters using a binary qualitative assessment scale. A total of 26 patients (17 men; mean age, 73.8 years) underwent FEVAR during the study period for juxtarenal AAA (17), pararenal AAA (6), and thoracoabdominal aortic aneurysm (3). Video recordings of fluoroscopy from 19 cases were available for review and assessment. A total of 46 vessels were cannulated; 38 of 46 (83%) of these vessels were cannulated without angiography but based only on image fusion guidance: 9 of 11 superior mesenteric artery cannulations and 29 of 35 renal artery cannulations. Binary qualitative assessment showed that 90% (36/40) of the virtual ostia overlaid on live fluoroscopy were accurate. Optimized C-arm angulations were achieved in 35% of vessel cannulations (0/9 for superior mesenteric artery cannulation, 12/25 for renal arteries). Preoperative CTA-fluoroscopy image fusion guidance during FEVAR is a valuable and accurate tool that allows visceral and renal vessel cannulation without the need for dedicated angiograms, thus avoiding additional injection of contrast material and radiation exposure. Further refinements, such as accounting for device-induced aortic deformation and automating the image fusion workflow, will bolster this technology toward optimal routine clinical use.

  6. Analysis of C-shaped root canal configuration in maxillary molars in a Korean population using cone-beam computed tomography

    PubMed Central

    Jo, Hyoung-Hoon; Min, Jeong-Bum

    2016-01-01

    Objectives: The purpose of this study was to investigate the incidence of root fusion and C-shaped root canals in maxillary molars, and to classify the types of C-shaped canal by analyzing cone-beam computed tomography (CBCT) in a Korean population. Materials and Methods: Digitized CBCT images from 911 subjects were obtained at Chosun University Dental Hospital between February 2010 and July 2012 for orthodontic treatment. Among them, a total of 3,553 maxillary molars were analyzed retrospectively. Tomography sections in the axial, coronal, and sagittal planes were displayed by PiViewstar and Rapidia MPR software (Infinitt Co.). The incidence and types of root fusion and C-shaped root canals were evaluated, and the incidence between the first and the second molars was compared using the Chi-square test. Results: Root fusion was present in 3.2% of the first molars and 19.5% of the second molars, and fusion of the mesiobuccal and palatal roots was dominant. C-shaped root canals were present in 0.8% of the first molars and 2.7% of the second molars. The frequency of root fusion and C-shaped canals was significantly higher in the second molars than the first molars (p < 0.001). Conclusions: In a Korean population, maxillary molars showed root fusion in 11.3% and C-shaped root canals in 1.8% of cases overall. Furthermore, root fusion and C-shaped root canals were seen more frequently in the maxillary second molars. PMID:26877991

  7. [Research progress of multi-model medical image fusion and recognition].

    PubMed

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.

  8. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721
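
    A minimal version of the weighted voting that the self-assessments enable is shown below: each rater's slice labels are tallied with that rater's confidence as the vote weight, which is the simplest way self-assessment can enter a fusion rule (the paper goes further and folds it into a statistical fusion framework). Array shapes and the toy data are illustrative assumptions.

```python
import numpy as np

def weighted_vote(labels, confidences):
    """Fuse per-rater labels by voting weighted with each rater's
    self-assessed confidence. labels: (n_raters, n_voxels) integer array;
    confidences: (n_raters,) self-assessed weights in [0, 1]."""
    n_classes = int(labels.max()) + 1
    tally = np.zeros((n_classes, labels.shape[1]))
    for rater_labels, c in zip(labels, confidences):
        for k in range(n_classes):
            tally[k] += c * (rater_labels == k)
    return tally.argmax(axis=0)

# Toy usage: three raters; the low-confidence rater's dissent is outvoted
labels = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 1, 1]])
conf = np.array([0.9, 0.8, 0.2])
print(weighted_vote(labels, conf))   # -> [1 0 0 0]
```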

  9. A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.

    PubMed

    Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu

    2015-01-01

    X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduce an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. The control of 21 motorized stages, a piezoelectric stage, and an X-ray tube is achieved with this software; it also covers image acquisition with a flat-panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features is in principle available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup.

  10. Evaluation of Effective Parameters on Quality of Magnetic Resonance Imaging-computed Tomography Image Fusion in Head and Neck Tumors for Application in Treatment Planning.

    PubMed

    Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza

    2017-01-01

    In radiation therapy, computed tomography (CT) simulation is used for treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work tried to identify the practical issues for the combination of CT and MRI images in real clinical cases. The effect of various factors on image fusion quality is evaluated. In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated. These parameters include angles of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. According to the results, the dominant factor affecting image fusion quality was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005), and the second was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient position during MRI imaging should be chosen to be consistent with CT images of the patient in terms of location and angle.

  11. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
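
    Kalman-based fusion of multiple camera measurements reduces, at its core, to repeated measurement updates in which each camera's reading is weighted by the inverse of its noise covariance. Below is a generic, minimal sketch of that update, not the paper's specific filter (which handles full pose and robustness to occlusion); the 3-DOF toy setup and noise values are assumptions.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse state estimate (x, P)
    with a new camera measurement z having model z = H x + noise(R)."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy pose fusion: two cameras measure the same 3-DOF position with
# different noise; sequential updates weight the more precise camera more.
x, P = np.zeros(3), np.eye(3) * 10.0
H = np.eye(3)
x, P = kalman_update(x, P, np.array([1.0, 2.0, 0.5]), H, np.eye(3) * 0.2)
x, P = kalman_update(x, P, np.array([1.1, 1.9, 0.6]), H, np.eye(3) * 1.0)
```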

  12. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyer, W.B.

    1979-09-01

    This report describes both the hardware and software components of an automatic calibration and signal system (Autocal) for the data acquisition system for the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.

  13. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to those of a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed a 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.
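
    The underlying idea, high-resolution structure from the monochrome lens-free image plus color from a calibrated low-magnification image, can be illustrated with a simple luminance-replacement fusion in YCrCb space. This is a deliberately crude stand-in for the paper's wavelet-based DCFM framework; the function name, interpolation choice, and registration assumption are all illustrative.

```python
import cv2
import numpy as np

def color_fuse(gray_highres, color_lowres):
    """Fuse a high-resolution grayscale (lens-free) image with a
    low-resolution color-calibrated (mobile-phone) image by replacing the
    luminance channel; both inputs are assumed uint8 and pre-registered."""
    h, w = gray_highres.shape
    color_up = cv2.resize(color_lowres, (w, h), interpolation=cv2.INTER_CUBIC)
    ycrcb = cv2.cvtColor(color_up, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = gray_highres          # high-res detail goes into luminance
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```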

  14. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background: A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose: To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods: Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results: Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion: Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
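
    The paired, non-normally distributed per-patient comparisons reported here (times, errors, point locks under both methods) are exactly what the Wilcoxon signed rank test handles. A minimal usage sketch with made-up, illustrative per-patient fusion times in seconds:

```python
from scipy.stats import wilcoxon

# Hypothetical paired per-patient fusion times (s) for the two methods
positioning = [11, 9, 13, 8, 12, 16, 10, 7, 14, 11]
sweeping    = [32, 28, 35, 25, 30, 38, 27, 22, 33, 29]
stat, p = wilcoxon(positioning, sweeping)   # paired nonparametric test
print(f"Wilcoxon statistic={stat}, p={p:.4f}")
```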

  15. Bio-inspired nano-sensor-enhanced CNN visual computer.

    PubMed

    Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I

    2004-05-01

    Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.

  16. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  17. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
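
    A label fusion rule that eliminates both overlaps and gaps by construction, in the spirit of the modification described above (though not necessarily the authors' exact algorithm), assigns each voxel to the single organ, or background, with the highest fused support:

```python
import numpy as np

def fuse_organ_labels(prob_maps):
    """Fuse per-organ support maps from registered atlases into one label
    volume with no overlaps or gaps: every voxel gets exactly one label.

    prob_maps: (n_organs, *vol_shape) array of per-organ atlas agreement
    in [0, 1] (e.g. the fraction of atlases voting for that organ)."""
    background = 1.0 - prob_maps.max(axis=0, keepdims=True)
    stacked = np.concatenate([background, prob_maps], axis=0)
    return stacked.argmax(axis=0)   # 0 = background, 1..n = organ index

# Toy usage: two "organs" along a 1-D line of 6 voxels, ambiguous border
probs = np.array([[0.9, 0.8, 0.55, 0.1, 0.0, 0.0],
                  [0.0, 0.1, 0.50, 0.7, 0.9, 0.2]])
print(fuse_organ_labels(probs))    # -> [1 1 1 2 2 0]
```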

  18. The development of an automatically produced cholangiography procedure using the reconstruction of portal-phase multidetector-row computed tomography images: preliminary experience.

    PubMed

    Hirose, Tomoaki; Igami, Tsuyoshi; Koga, Kusuto; Hayashi, Yuichiro; Ebata, Tomoki; Yokoyama, Yukihiro; Sugawara, Gen; Mizuno, Takashi; Yamaguchi, Junpei; Mori, Kensaku; Nagino, Masato

    2017-03-01

    Fusion angiography using reconstructed multidetector-row computed tomography (MDCT) images, and cholangiography using reconstructed images from MDCT with a cholangiographic agent, include an anatomical gap due to the different periods of MDCT scanning. To overcome such gaps, we attempted to develop a cholangiography procedure that automatically reconstructs a cholangiogram from portal-phase MDCT images. The automatically produced cholangiography procedure utilized an original software program that was developed by the Graduate School of Information Science, Nagoya University. This program constructed 5 candidate biliary tracts and automatically selected one as the candidate for cholangiography. The clinical value of the automatically produced cholangiography procedure was estimated based on a comparison with manually produced cholangiography. Automatically produced cholangiograms were reconstructed for 20 patients who underwent MDCT scanning before biliary drainage for distal biliary obstruction. The procedure was able to extract the 5 main biliary branches and the 21 subsegmental biliary branches in 55% and 25% of the cases, respectively. The extent of aberrant connections and aberrant extractions outside the biliary tract was acceptable. Among all of the cholangiograms, 5 were clinically applied with no correction, 8 were applied with modest improvements, and 3 produced a correct cholangiography before automatic selection. Although our procedure requires further improvement based on the analysis of additional patient data, it may represent an alternative to direct cholangiography in the future.

  19. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
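
    The optimal linear fusion argument can be made concrete with inverse-variance weighting: for unbiased, independent scan estimates with variances σᵢ², the minimum-variance linear combination weights each scan by 1/σᵢ², so the scan whose voxel anisotropy aligns with the lesion's elongation (lower measurement variance) receives the larger weight. A small numerical sketch with assumed variances:

```python
import numpy as np

def optimal_linear_fusion(estimates, variances):
    """Inverse-variance weighting of independent scan measurements.
    The fused variance 1/sum(1/var_i) is never larger than the variance
    of naive averaging, matching the optimal-fusion argument above."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    w = inv / inv.sum()
    fused = np.dot(w, estimates)
    fused_var = 1.0 / inv.sum()
    return fused, fused_var, w

# Two scans of the same lesion volume: the scan whose anisotropy aligns
# with the lesion's elongation is more precise and gets the larger weight.
vol, var, w = optimal_linear_fusion([10.2, 11.0], [0.5, 2.0])
naive_var = (0.5 + 2.0) / 4                  # variance of the plain mean
print(vol, var, naive_var, w)                # fused var 0.4 < naive 0.625
```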

  20. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize... Keywords: Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor. ... In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers

  1. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two-screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact software design and software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  2. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two-screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact software design and software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  3. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is best. This work describes research covering the field of common measures, asking whether some of them can be used as quality measures for evaluating fusion results.

  4. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords of "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate location, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems could provide more imaging information of tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  5. Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies

    DTIC Science & Technology

    2013-10-01

    Award Number: W81XWH-12-1-0597. TITLE: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate

  6. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    PubMed

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency-driven multiscale nonlinear diffusion filtering of images. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
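
    The scale space referred to above is typically generated with an edge-preserving nonlinear diffusion of the Perona-Malik type, where the diffusivity shrinks wherever the local gradient is strong. A minimal sketch of that base filter, without the paper's saliency-driven modification; `kappa`, the step size, and the periodic boundary handling via np.roll are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Edge-preserving nonlinear (Perona-Malik) diffusion: smoothing is
    suppressed where the local gradient (edge evidence) is large."""
    u = img.astype(float).copy()

    def g(d):                   # diffusivity: ~1 in flat areas, ~0 at edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u    # differences to the 4 neighbors
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```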

  7. Multi-Focus Image Fusion Based on NSCT and NSST

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen

    2015-12-01

    In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe, and Qw) to evaluate the quality of fused images. The experimental results demonstrate that the proposed method outperforms other methods. It retains highly detailed edges and contours.
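
    The fusion rule itself, averaging the low-frequency band and keeping the higher-magnitude (more salient) coefficient in each high-frequency subband, is easy to sketch. Below, a standard discrete wavelet transform (PyWavelets) stands in for the NSCT/NSST pair, and plain coefficient magnitude stands in for the combined salience measure the paper proposes.

```python
import numpy as np
import pywt

def wavelet_multifocus_fuse(img_a, img_b, wavelet="db2", level=3):
    """Multi-focus fusion sketch: average the low-frequency band and keep
    the larger-magnitude (more salient) coefficient in each high-frequency
    subband. Inputs are same-shape grayscale arrays."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                 # low frequency: average
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```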

  8. Active-passive data fusion algorithms for seafloor imaging and classification from CZMIL data

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Ramnath, Vinod; Feygels, Viktor; Kim, Minsu; Mathur, Abhinav; Aitken, Jennifer; Tuell, Grady

    2010-04-01

    CZMIL will simultaneously acquire lidar and passive spectral data. These data will be fused to produce enhanced seafloor reflectance images from each sensor, and combined at a higher level to achieve seafloor classification. In the DPS software, the lidar data will first be processed to solve for depth, attenuation, and reflectance. The depth measurements will then be used to constrain the spectral optimization of the passive spectral data, and the resulting water column estimates will be used recursively to improve the estimates of seafloor reflectance from the lidar. Finally, the resulting seafloor reflectance cube will be combined with texture metrics estimated from the seafloor topography to produce classifications of the seafloor.

  9. Evaluation of Effective Parameters on Quality of Magnetic Resonance Imaging-computed Tomography Image Fusion in Head and Neck Tumors for Application in Treatment Planning

    PubMed Central

    Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza

    2017-01-01

    Background: In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work identifies the practical issues in combining CT and MRI images in real clinical cases and evaluates the effect of various factors on image fusion quality. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the feasibility and quality of image fusion was evaluated. These parameters include the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the dominant factor affecting the quality of image fusion was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005), and the second factor was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient position during MRI imaging should be chosen to be consistent with the CT images of the patient in terms of location and angle. PMID:29387672

  10. Robust multi-atlas label propagation by deep sparse representation

    PubMed Central

    Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong

    2016-01-01

    Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over minority patterns. Violation of these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups may lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other label fusion methods. PMID:27942077
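
    For orientation, the sketch below shows the conventional "flat" patch-based label fusion baseline that this paper improves on: weight each atlas patch by its intensity similarity to the target patch, then vote. The deep sparse-representation machinery (label-specific and residual dictionaries) is not reproduced; the arrays and the bandwidth h are hypothetical.

```python
import numpy as np

def patch_weighted_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Baseline nonlocal-means style label fusion: weight each atlas
    patch by its intensity similarity to the target patch, then take
    the label with the largest accumulated weight.

    The deep sparse representation of the paper replaces these raw
    similarity weights with representation coefficients computed over
    label-specific and residual dictionaries.
    """
    weights = np.array([
        np.exp(-np.sum((target_patch - p) ** 2) / (h ** 2))
        for p in atlas_patches])
    votes = {}
    for w, lab in zip(weights, atlas_labels):
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)

# Hypothetical 5x5 patches from 4 registered atlases with binary labels.
rng = np.random.default_rng(0)
target = rng.random((5, 5))
atlases = rng.random((4, 5, 5))
labels = [0, 1, 1, 0]
print(patch_weighted_label_fusion(target, atlases, labels))
```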

  11. Robust multi-atlas label propagation by deep sparse representation.

    PubMed

    Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong

    2017-03-01

    Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over minority patterns. Violation of these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups may lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other label fusion methods.

  12. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  13. Fusion solution for soldier wearable gunfire detection systems

    NASA Astrophysics Data System (ADS)

    Cakiades, George; Desai, Sachi; Deligeorges, Socrates; Buckland, Bruce E.; George, Jemin

    2012-06-01

    Currently existing acoustic-based Gunfire Detection Systems (GDS), such as soldier wearable, vehicle mounted, and fixed site devices, provide enemy detection and localization capabilities to the user. However, a solution to the portability-versus-performance tradeoff remains elusive. The Data Fusion Module (DFM), described herein, is a sensor/platform-agnostic software supplemental tool that addresses this tradeoff by leveraging existing soldier networks to enhance GDS performance across a Tactical Combat Unit (TCU). The DFM software enhances performance by synergistically leveraging all available acoustic GDS information across the TCU to calculate highly accurate solutions more consistently than any individual GDS in the TCU. The networked sensor architecture provides additional capabilities addressing multiple-shooter/fire-fight problems in addition to sniper detection/localization. The addition of the fusion solution to the overall Size, Weight and Power & Cost (SWaP&C) is zero to negligible. At the end of the first-year effort, the DFM-integrated sensor network showed performance improvements upwards of 50% in comparison to a single-sensor solution. Further improvements are expected when the networked sensor architecture created in this effort is fully exploited.

  14. SU-G-JeP2-07: Fusion Optimization of Multi-Contrast MRI Scans for MR-Based Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Yin, F; Liang, X

    Purpose: To develop an image fusion method using multiple contrast MRI scans for MR-based treatment planning. Methods: T1 weighted (T1-w), T2 weighted (T2-w) and diffusion weighted images (DWI) were acquired from a liver cancer patient with breath-holding. Image fade correction and deformable image registration were performed using VelocityAI (Varian Medical Systems, CA). Registered images were normalized to the mean voxel intensity of each image dataset. The contrast to noise ratio (CNR) between tumor and liver was quantified. The tumor area was defined as the GTV contoured by physicians. A normal liver area of equivalent dimension was used as background. Noise was defined as the standard deviation of voxel intensities in the same liver area. Linear weightings were applied to the T1-w, T2-w and DWI images to generate composite images, and the CNR was calculated for each composite image. An optimization process was performed to achieve different clinical goals. Results: With the goal of maximizing tumor contrast, the composite image achieved a 7–12 fold increase in tumor CNR (142.8 vs. −2.3, 11.4 and 20.6 for T1-w, T2-w and DWI only, respectively), while anatomical details were largely invisible. With a weighting combination of 100%, −10% and −10%, respectively, tumor contrast was enhanced from −2.3 to −5.4, while the anatomical details remained clear. With a weighting combination of 25%, 20% and 55%, balanced tumor contrast and anatomy were achieved. Conclusion: We have investigated the feasibility of performing image fusion optimization on multiple contrast MRI images. This mechanism could help utilize multiple contrast MRI scans to facilitate future MR-based treatment planning.
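
    A minimal sketch of the CNR objective as defined in the abstract (contrast = mean tumor minus mean liver background, noise = standard deviation in the liver background), assuming registered, mean-normalized volumes. The masks and data below are hypothetical; the weights follow the 25%/20%/55% example.

```python
import numpy as np

def composite_cnr(images, weights, tumor_mask, liver_mask):
    """CNR of a linearly weighted composite, per the abstract's definition:
    contrast = mean(tumor) - mean(liver background), noise = std of
    voxel intensities in the liver background area."""
    imgs = [im / im.mean() for im in images]      # normalize to mean intensity
    comp = sum(w * im for w, im in zip(weights, imgs))
    contrast = comp[tumor_mask].mean() - comp[liver_mask].mean()
    noise = comp[liver_mask].std()
    return contrast / noise

# Hypothetical registered T1-w, T2-w and DWI volumes plus masks.
rng = np.random.default_rng(1)
t1, t2, dwi = (rng.random((32, 32, 8)) + 0.1 for _ in range(3))
tumor = np.zeros((32, 32, 8), bool); tumor[10:14, 10:14, 2:5] = True
liver = np.zeros((32, 32, 8), bool); liver[20:24, 20:24, 2:5] = True
print(composite_cnr([t1, t2, dwi], [0.25, 0.20, 0.55], tumor, liver))
```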

  15. Discriminative confidence estimation for probabilistic multi-atlas label fusion.

    PubMed

    Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard

    2017-12-01

    Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion rely on local patch similarity, probabilistic statistical frameworks, or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearance and the segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features, based on atlas labelmaps, that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches for the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Bila, Z.; Reznicek, J.; Pavelka, K.

    2013-07-01

    This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides more reliable information about the investigated object for conservators and historians than using both datasets separately. A simple example of the fusion of a range and a panoramic image, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion process itself. The fusion can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a suitable detector, which results in a transformation matrix that maps the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion is validated by comparing similar features extracted from both datasets.
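
    The transformation step lends itself to a short sketch. The paper does not name a specific detector, so ORB features with a RANSAC-estimated homography are an assumed stand-in; note a homography is only strictly valid for a mainly planar scene or carefully co-located viewpoints. Inputs are assumed to be 8-bit grayscale arrays.

```python
import cv2
import numpy as np

def register_range_to_panorama(range_img, pano_img):
    """Transformation step of the fusion: match features between the
    (intensity-coded) range image and the panoramic image, estimate a
    homography, and remap the range data into panoramic image space,
    where it can be stored as an extra 'range' channel."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(range_img, None)
    k2, d2 = orb.detectAndCompute(pano_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # remapping: warp the range image into panorama space
    h, w = pano_img.shape[:2]
    return cv2.warpPerspective(range_img, H, (w, h))
```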

  17. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  18. PScan 1.0: flexible software framework for polygon based multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Lee, Woei Ming

    2016-12-01

    Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy offers significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, several flexible software platforms have catered to custom-built microscopy systems (e.g., ScanImage, HelioScan, MicroManager), performing at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB Image Processing Toolbox. It communicates directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and is flexible enough for users to adapt it to their high speed imaging systems.

  19. The New Feedback Control System of RFX-mod Based on the MARTe Real-Time Framework

    NASA Astrophysics Data System (ADS)

    Manduchi, G.; Luchetta, A.; Soppelsa, A.; Taliercio, C.

    2014-06-01

    A real-time system has been successfully used since 2004 in the RFX-mod nuclear fusion experiment to control the position of the plasma and its Magneto Hydrodynamic (MHD) modes. However, its latency and the limited computation power of its processors prevented the use of more aggressive control algorithms. Therefore a new hardware and software architecture has been designed to overcome these limitations and to provide a shorter latency and much greater computation power. The new system is based on a Linux multi-core server and uses MARTe, a framework for real-time control which is gaining interest in the fusion community.

  20. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is applied to the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied respectively in different frequency parts to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion is used to combine all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
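
    As a sketch of the final step only, min-max normalization followed by a weighted sum rule is one common way to realize score-level fusion; the paper's exact rule may differ, and the scores below are hypothetical.

```python
import numpy as np

def score_level_fusion(scores_ir, scores_vis, w_ir=0.5):
    """Sum-rule score-level fusion after min-max normalization: one common
    way to combine per-class matching scores from the infrared and visible
    LGBP/LBP pipelines (the paper's exact rule may differ)."""
    def norm(s):
        s = np.asarray(s, float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    fused = w_ir * norm(scores_ir) + (1 - w_ir) * norm(scores_vis)
    return int(np.argmax(fused))   # identity with the best fused score

# Hypothetical similarity scores against a 5-subject gallery.
print(score_level_fusion([0.2, 0.9, 0.4, 0.1, 0.3], [0.3, 0.7, 0.5, 0.2, 0.1]))
```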

  1. Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Xia-zhu; Xu, Ya-wei

    2017-11-01

    On the basis of the dual-tree complex wavelet transform (DT-CWT), an approach based on multi-objective particle swarm optimization (MOPSO) was proposed to objectively choose the fusion weights of the low frequency sub-bands. High and low frequency sub-bands were produced by the DT-CWT. The absolute value of the coefficients was adopted as the fusion rule for the high frequency sub-bands. The fusion weights of the low frequency sub-bands were used as particles in MOPSO, with spatial frequency and average gradient adopted as the two fitness functions. The experimental results show that the proposed approach performs better than average fusion and fusion methods based on local variance and local energy in brightness, clarity, and quantitative evaluation (entropy, spatial frequency, average gradient, and QAB/F).
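
    The two fitness functions are standard image metrics and can be stated compactly. The sketch below uses their usual definitions (RMS of first differences for spatial frequency, mean gradient magnitude for average gradient); the MOPSO loop itself is not reproduced.

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), with row/column frequencies taken as the
    RMS of horizontal/vertical first differences (standard definition)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.hypot(rf, cf)

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient."""
    img = img.astype(np.float64)
    gx, gy = np.gradient(img)
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

# Evaluate a candidate fused image under both fitness functions.
fused = np.random.rand(256, 256)
print(spatial_frequency(fused), average_gradient(fused))
```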

  2. Kinect Fusion improvement using depth camera calibration

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, Microsoft created the Kinect device. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion Libraries are corrected and new reconstruction software is created to produce more accurate models.

  3. Semi Automated Land Cover Layer Updating Process Utilizing Spectral Analysis and GIS Data Fusion

    NASA Astrophysics Data System (ADS)

    Cohen, L.; Keinan, E.; Yaniv, M.; Tal, Y.; Felus, A.; Regev, R.

    2018-04-01

    Technological improvements in mass data gathering and analysis made in recent years have influenced the traditional methods of updating and forming the national topographic database, bringing a significant increase in the number of use cases and in demand for detailed geo-information. Processes intended to replace traditional data collection methods have been developed in many National Mapping and Cadaster Agencies, and there has been significant progress in semi-automated methodologies aiming to facilitate the updating of a national topographic geodatabase. Their implementation is expected to allow a considerable reduction in updating costs and operation times. Our previous activity focused on automatic building extraction (Keinan, Zilberstein et al., 2015). Before semi-automatic updating methods, it was common for interpreter identification to be as detailed as possible in order to maintain the most reliable database. When using semi-automatic updating methodologies, the ability to incorporate human insight is limited. Our motivation was therefore to reduce this gap by allowing end-users to add their data inputs to the basic geometric database. In this article, we present a simple land cover database updating method which combines insights extracted from the analyzed image with given spatial data from vector layers. The main stages of the process are multispectral image segmentation and supervised classification, together with geometric fusion of the given vector data, while keeping the required manual shape editing low. All coding was done utilizing open source software components.

  4. Virtual pathology of cervical radiculopathy based on 3D MR/CT fusion images: impingement, flattening or twisted condition of the compressed nerve root in three cases.

    PubMed

    Kamogawa, Junji; Kato, Osamu; Morizane, Tatsunori; Hato, Taizo

    2015-01-01

    There have been several imaging studies of cervical radiculopathy, but no three-dimensional (3D) images have shown the path, position, and pathological changes of the cervical nerve roots and spinal root ganglion relative to the cervical bony structure. The objective of this study was to introduce a technique that enables the virtual pathology of the nerve root to be assessed using 3D magnetic resonance (MR)/computed tomography (CT) fusion images that show the compression of the proximal portion of the cervical nerve root by both the herniated disc and the preforaminal or foraminal bony spur in patients with cervical radiculopathy. MR and CT images were obtained from three patients with cervical radiculopathy. 3D MR images were placed onto 3D CT images using a computer workstation. The entire nerve root could be visualized in 3D with or without the vertebrae. The most important characteristic evident on the images was flattening of the nerve root by a bony spur. The affected root was constricted at a pre-ganglion site. In cases of severe deformity, the flattened portion of the root seemed to change the angle of its path, resulting in a twisted condition. The 3D MR/CT fusion imaging technique enhances visualization of the pathoanatomy in the hidden cervical area composed of the root and intervertebral foramen. This technique provides two distinct advantages for the diagnosis of cervical radiculopathy. First, the isolation of individual vertebrae clarifies the deformities of the whole root groove, including both the uncinate process and the superior articular process in the cervical spine. Second, the tortuous or twisted condition of a compressed root can be visualized. The surgeon can identify the narrowest face of the root by viewing the MR/CT fusion image from the posterolateral-inferior direction. Surgeons can use MR/CT fusion images as a pre-operative map and for intraoperative navigation. The MR/CT fusion images can also be used as educational materials for all hospital staff and for patients and patients' families who provide informed consent for treatments.

  5. Fusion of multi-tracer PET images for dose painting.

    PubMed

    Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle

    2014-10-01

    PET imaging with the FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed. They provide complementary information for the definition of BTVs. Our aim is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to noise and the partial volume effect, which lead, respectively, to uncertainty and imprecision in PET images, the segmentation and fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTVs from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels to deal with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are, respectively, reduced using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. Performance is evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl

    Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, a comparison was made between manual and automatic selection of reference points. In each experiment, the accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and a deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.

  7. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros

    Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve the visibility and targeting of liver metastases deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. Conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases initially deemed not visualizable on conventional US imaging.

  8. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images, captured via various sensors of the same scene, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion based on a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.

  9. Sharpening Ejecta Patterns: Investigating Spectral Fidelity After Controlled Intensity-Hue-Saturation Image Fusion of LROC Images of Fresh Craters

    NASA Astrophysics Data System (ADS)

    Awumah, A.; Mahanti, P.; Robinson, M. S.

    2017-12-01

    Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process: the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied; the amount of spatial detail used from the Pan is determined by a control parameter whose value may vary from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red = 415 nm, green = 321/415 nm, blue = 321/360 nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (from 1 to 10, after which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter increased spatial sharpness and spectral distortion in all regions, but to varying degrees: crater interiors suffered the most color distortion, while ejecta experienced less. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact on color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
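
    A minimal sketch of parameter-controlled IHS-style fusion follows. The blending weight alpha = 1 - 1/t is one plausible reading of the abstract's 1-to-infinity control parameter (not necessarily the mapping of reference (2)), the intensity component is taken as a simple band mean, and the arrays are stand-ins rather than LROC products.

```python
import numpy as np

def controlled_ihs(ms, pan, t=4.0):
    """Generalized IHS fusion with a control parameter.

    ms  : (H, W, B) multispectral image, already upsampled to pan size
    pan : (H, W) panchromatic image
    t   : control parameter; t = 1 injects no pan detail, t -> infinity
          injects the full (Pan - I) detail of classical IHS fusion.
          (alpha = 1 - 1/t is an assumed reading of the abstract's
          1..infinity parameterization, not a confirmed formula.)
    """
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=2)            # simple I component
    alpha = 1.0 - 1.0 / float(t)
    detail = alpha * (pan.astype(np.float64) - intensity)
    return ms + detail[..., None]          # inject the same detail into each band

wac = np.random.rand(64, 64, 3)    # stand-in for the upsampled WAC bands
nac = np.random.rand(64, 64)       # stand-in for the NAC pan image
hrms = controlled_ihs(wac, nac, t=2.0)
```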

  10. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena Ruiz, Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.

  11. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display, enabling surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fused images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, surgical education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Surface imaging, laser positioning or volumetric imaging for breast cancer with nodal involvement treated by helical TomoTherapy.

    PubMed

    Crop, Frederik; Pasquier, David; Baczkiewic, Amandine; Doré, Julie; Bequet, Lena; Steux, Emeline; Gadroy, Anne; Bouillon, Jacqueline; Florence, Clement; Muszynski, Laurence; Lacour, Mathilde; Lartigau, Eric

    2016-09-08

    A surface imaging system, Catalyst (C-Rad), was compared with laser-based positioning and daily megavoltage computed tomography (MVCT) setup for breast patients with nodal involvement treated by helical TomoTherapy. Catalyst-based positioning performed better than laser-based positioning: the respective modalities resulted in standard deviations (SD, 68% confidence interval) of positioning in left-right, craniocaudal, anterior-posterior, and roll of 2.4 mm, 2.7 mm, 2.4 mm, 0.9° for Catalyst positioning, and 6.1 mm, 3.8 mm, 4.9 mm, 1.1° for laser-based positioning. MVCT-based precision is a combination of the interoperator variability of the MVCT fusion and the patient movement during the time needed for MVCT and fusion. The MVCT fusion interoperator variability for breast patients was evaluated at one SD (left-right, craniocaudal, anterior-posterior, roll) as 1.4 mm, 1.8 mm, 1.3 mm, 1.0°. There was no statistically significant difference between the automatic MVCT registration result and the manual adjustment; the automatic fusion results were within the 95% CI of the mean result of 10 users, except for one specific case in which the patient was positioned with a large yaw. We found that users add variability to the roll correction, as the automatic registration was more consistent. The patient position uncertainty confidence interval was evaluated as 1.9 mm, 2.2 mm, 1.6 mm, 0.9° after 4 min, and 2.3 mm, 2.8 mm, 2.2 mm, 1° after 10 min. Combining this patient movement with the MVCT fusion interoperator variability gives total standard deviations of patient position, when treatment starts 4 or 10 min after initial positioning, of 2.3 mm, 2.8 mm, 2.0 mm, 1.3° and 2.7 mm, 3.3 mm, 2.6 mm, 1.4°, respectively. Surface-based positioning achieves the same precision when taking into account the time required for MVCT imaging and fusion. These results can be used on a patient-per-patient basis to decide which positioning system performs best after the first 5 fractions and when daily MVCT can be omitted. Ideally, real-time monitoring is required to reduce important intrafraction movement. © 2016 The Authors.

  13. [Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions].

    PubMed

    Jung, E M; Clevert, D A

    2018-06-01

    Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted investigation of modified tumor treatments. Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as with CEUS. With the implementation of ultrasound contrast agents and image fusion, ultrasound has improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques. In addition, this method can also be used for interventional procedures. The success rate of fusion-guided biopsies and CEUS-guided tumor ablation lies between 80 and 100% in the literature. Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions. Beyond the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work, including intraoperative and vascular applications as well as applications in other organ systems.

  14. Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.

    PubMed

    Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei

    2006-02-01

    Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. Therefore, it is important to determine whether the HA of a new influenza virus, which can potentially cause a pandemic, is functional against human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. The imaging is designed to detect fusion through the change it produces in the spectrum of the fluorescence-labeled virus. Using this imaging, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the technique allows highly sensitive detection of viral fusion.

  15. Automated designation of tie-points for image-to-image coregistration.

    Treesearch

    R.E. Kennedy; W.B. Cohen

    2003-01-01

    Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...

  16. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images were taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and its inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in the corrected RGB and TIR images.

  17. SU-G-JeP4-12: Real-Time Organ Motion Monitoring Using Ultrasound and KV Fluoroscopy During Lung SBRT Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omari, E; Tai, A; Li, X

    Purpose: Real-time ultrasound monitoring during SBRT is advantageous for understanding and identifying motion irregularities which may cause geometric misses. In this work, we propose to utilize real-time ultrasound to track the diaphragm in conjunction with periodic kV fluoroscopy to monitor the motion of the tumor or landmarks during SBRT delivery. Methods: Transabdominal ultrasound (TAUS) B-mode images were collected from 10 healthy volunteers using the Clarity Autoscan System (Elekta). The autoscan transducer, which has a center frequency of 5 MHz, was utilized for the scans. The acquired images were contoured using the Clarity Automatic Fusion and Contouring workstation software. Monitoring sessions of 5-minute length were observed and recorded. The position correlation between tumor and diaphragm could be established with kV fluoroscopy periodically acquired during treatment with Elekta XVI. We acquired data using a tissue-mimicking ultrasound phantom with embedded spheres placed on a motion stand, using ultrasound and kV fluoroscopy. MIM software was utilized for image fusion. The correlation of diaphragm and target motion was also validated using 4D-MRI and 4D-CBCT. Results: The diaphragm was visualized as a hyperechoic region on the TAUS B-mode images. Volunteer set-up can be adjusted such that the TAUS probe does not interfere with treatment beams. A segment of the diaphragm was contoured and selected as the tracking structure. Successful monitoring sessions of the diaphragm were recorded. For some volunteers, diaphragm motion more than 2 times larger than the initial motion was observed during tracking. For the phantom study, we were able to register the 2D kV fluoroscopy with the US images for position comparison. Conclusion: We demonstrated the feasibility of tracking the diaphragm using real-time ultrasound. Real-time tracking can help identify irregularities in the respiratory motion, which is correlated with tumor motion. We also showed the feasibility of acquiring 2D kV fluoroscopy and registering the images with ultrasound.

  18. Design and Applications of Rapid Image Tile Producing Software Based on Mosaic Dataset

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Huang, W.; Wang, C.; Tang, D.; Zhu, L.

    2018-04-01

    Map tile technology is widely used in web geographic information services, and efficient production of map tiles is a key technology for rapid image services on the web. In this paper, rapid image tile producing software based on a mosaic dataset is designed, and the tile production workflow is given. Key technologies such as cluster processing, map representation, tile checking, tile conversion, and in-memory compression are discussed. Implemented in software and tested on actual image data, the results show that this software has a high degree of automation, effectively reduces the number of I/O operations, and improves tile production efficiency. Moreover, manual operations are reduced significantly.
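
    Tile addressing is the part of such a pipeline that can be stated compactly. The sketch below computes standard XYZ (slippy-map) tile indices in Web Mercator; the paper's own tiling scheme is not specified, so this is illustrative only.

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Standard XYZ (slippy-map) tile index in Web Mercator: the tile
    grid doubles in each dimension per zoom level, so a tile service
    can address any view by (zoom, x, y)."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return zoom, x, y

print(lonlat_to_tile(114.3, 30.6, 12))   # e.g. a tile over central China
```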

  19. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.

  20. Clinical implementation of intraoperative cone-beam CT in head and neck surgery

    NASA Astrophysics Data System (ADS)

    Daly, M. J.; Chan, H.; Nithiananthan, S.; Qiu, J.; Barker, E.; Bachar, G.; Dixon, B. J.; Irish, J. C.; Siewerdsen, J. H.

    2011-03-01

    A prototype mobile C-arm for cone-beam CT (CBCT) has been translated to a prospective clinical trial in head and neck surgery. The flat-panel CBCT C-arm was developed in collaboration with Siemens Healthcare, and demonstrates both sub-mm spatial resolution and soft-tissue visibility at low radiation dose (e.g., <1/5th of a typical diagnostic head CT). CBCT images are available ~15 seconds after scan completion (~1 min acquisition) and reviewed at bedside using custom 3D visualization software based on the open-source Image-Guided Surgery Toolkit (IGSTK). The CBCT C-arm has been successfully deployed in 15 head and neck cases and streamlined into the surgical environment using human factors engineering methods and expert feedback from surgeons, nurses, and anesthetists. Intraoperative imaging is implemented in a manner that maintains operating field sterility, reduces image artifacts (e.g., carbon fiber OR table) and minimizes radiation exposure. Image reviews conducted with surgical staff indicate bony detail and soft-tissue visualization sufficient for intraoperative guidance, with additional artifact management (e.g., metal, scatter) promising further improvements. Clinical trial deployment suggests a role for intraoperative CBCT in guiding complex head and neck surgical tasks, including planning mandible and maxilla resection margins, guiding subcranial and endonasal approaches to skull base tumours, and verifying maxillofacial reconstruction alignment. Ongoing translational research into complementary image-guidance subsystems includes novel methods for real-time tool tracking, fusion of endoscopic video and CBCT, and deformable registration of preoperative volumes and planning contours with intraoperative CBCT.

  1. Linear high-boost fusion of Stokes vector imagery for effective discrimination and recognition of real targets in the presence of multiple identical decoys

    NASA Astrophysics Data System (ADS)

    El-Saba, Aed; Sakla, Wesam A.

    2010-04-01

    The use of imaging polarimetry has recently received considerable attention in automatic target recognition (ATR) applications. In military remote sensing applications, there is great demand for sensors that are capable of discriminating between real targets and decoys. Accurate discrimination of decoys from real targets is a challenging task and often requires the fusion of various sensor modalities that operate simultaneously. In this paper, we use a simple linear fusion technique known as high-boost fusion (HBF) for effective discrimination of real targets in the presence of multiple decoys. The HBF assigns more weight to the polarization-based imagery in forming the final fused image that is used for detection. We captured both intensity and polarization-based imagery from an experimental laboratory arrangement containing a mixture of sand/dirt, rocks, vegetation, and other objects for the purpose of simulating scenery that would be acquired in a remote sensing military application. A target object and three decoys that are identical in physical appearance (shape, surface structure, and color) but different in material composition were also placed in the scene. We use the wavelet-filter joint transform correlation (WFJTC) technique to perform detection between the input scenery and the target object. Our results show that use of the HBF method increases the correlation performance metrics associated with WFJTC-based detection when compared to using either the traditional intensity or the polarization-based images alone.

  2. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data.

    PubMed

    Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo

    2018-06-15

    This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by physical size (e.g., a stereo camera is constrained by its baseline), and it overcomes the limited depth range associated with SLAM for RGBD cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail, based on local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich (TUM) RGBD dataset and self-collected data. We compare the scale estimation and drift correction of the proposed method with monocular SLAM and RGBD-camera SLAM.
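
    The core scale estimate admits a one-line least-squares solution. The sketch below assumes laser ranges and corresponding up-to-scale SLAM depths for the same scene points have already been associated (the data association and drift model of the paper are not reproduced); the data are synthetic.

```python
import numpy as np

def estimate_scale(laser_ranges, slam_depths):
    """Least-squares estimate of the metric scale s minimizing
    || laser - s * slam ||^2, i.e. s = <laser, slam> / <slam, slam>.

    laser_ranges: metric readings from the 1D laser range finder
    slam_depths : up-to-scale depths of the same points from monocular SLAM
    """
    laser = np.asarray(laser_ranges, float)
    slam = np.asarray(slam_depths, float)
    return float(laser @ slam / (slam @ slam))

# Synthetic data: true scale 0.04 with measurement noise.
rng = np.random.default_rng(2)
d = rng.uniform(10, 50, 20)                # SLAM (unitless) depths
r = 0.04 * d + rng.normal(0, 0.01, 20)     # metric laser ranges
s = estimate_scale(r, d)                   # apply s to poses/map to recover metric scale
print(s)
```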

  3. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But owing to limitations on radiated power, there will always be a trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A fusion algorithm integrating pixel-based false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the distinctions between different materials.
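
    The wavelet half of such a scheme is straightforward to sketch: keep the approximation coefficients of the (upsampled) spectral band and inject the detail coefficients of the panchromatic image. The snippet below uses PyWavelets and omits the false color mapping stage; the wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_fuse(ms_band, pan, wavelet="db2", level=2):
    """Fuse one upsampled multispectral band with a panchromatic image:
    approximation coefficients come from the spectral band (colour
    information), detail coefficients from the panchromatic image
    (spatial information). Inputs must share the same shape."""
    c_ms = pywt.wavedec2(ms_band, wavelet, level=level)
    c_pan = pywt.wavedec2(pan, wavelet, level=level)
    fused = [c_ms[0]] + c_pan[1:]
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(1)
ms = rng.random((128, 128))        # hypothetical upsampled MS band
pan_img = rng.random((128, 128))   # hypothetical panchromatic image
out = wavelet_fuse(ms, pan_img)
```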

  4. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline

    PubMed Central

    Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin

    2014-01-01

    Automated segmentation and labeling of individual brain anatomical regions in MRI is challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study, owing to inherent natural variability as well as disease-related changes in MR appearance. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, skull-stripping based on automated brain tissue classification, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans, achieving a mean Dice coefficient of 81.73% for the subcortical structures. PMID:24567717
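
    The final fusion step, weighted majority voting over the selected atlases, reduces to accumulating weighted votes per label and taking the argmax, roughly as in the sketch below. The weights here are illustrative placeholders, not AutoSeg's graph-derived values.

```python
import numpy as np

def weighted_majority_vote(label_maps, weights, n_labels):
    """Fuse propagated atlas labels by weighted majority voting.

    label_maps: integer label volumes already warped into subject space
    (as the registration step would produce); weights: one scalar per
    atlas, e.g. from atlas-to-subject similarity."""
    votes = np.zeros(label_maps[0].shape + (n_labels,))
    for lab, w in zip(label_maps, weights):
        for k in range(n_labels):
            votes[..., k] += w * (lab == k)
    return np.argmax(votes, axis=-1)

# Three toy 2x2 "atlas" segmentations with labels {0, 1}.
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [0, 1]])
fused = weighted_majority_vote([a1, a2, a3], [0.5, 0.3, 0.2], n_labels=2)
```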

  5. HelioScan: a software framework for controlling in vivo microscopy setups with high hardware flexibility, functional diversity and extendibility.

    PubMed

    Langer, Dominik; van 't Hoff, Marcel; Keller, Andreas J; Nagaraja, Chetan; Pfäffli, Oliver A; Göldi, Maurice; Kasper, Hansjörg; Helmchen, Fritjof

    2013-04-30

    Intravital microscopy such as in vivo imaging of brain dynamics is often performed with custom-built microscope setups controlled by custom-written software to meet specific requirements. Continuous technological advancement in the field has created a need for new control software that is flexible enough to support the biological researcher with innovative imaging techniques and provide the developer with a solid platform for quickly and easily implementing new extensions. Here, we introduce HelioScan, a software package written in LabVIEW, as a platform serving this dual role. HelioScan is designed as a collection of components that can be flexibly assembled into microscope control software tailored to the particular hardware and functionality requirements. Moreover, HelioScan provides a software framework, within which new functionality can be implemented in a quick and structured manner. A specific HelioScan application assembles at run-time from individual software components, based on user-definable configuration files. Due to its component-based architecture, HelioScan can exploit synergies of multiple developers working in parallel on different components in a community effort. We exemplify the capabilities and versatility of HelioScan by demonstrating several in vivo brain imaging modes, including camera-based intrinsic optical signal imaging for functional mapping of cortical areas, standard two-photon laser-scanning microscopy using galvanometric mirrors, and high-speed in vivo two-photon calcium imaging using either acousto-optic deflectors or a resonant scanner. We recommend HelioScan as a convenient software framework for the in vivo imaging community. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as it would be seen from the cockpit of a crewed spacecraft or aircraft, or from the remote-control station of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barty, Christopher P.J.

    Lasers and laser-based sources are now routinely used to control and manipulate nuclear processes, e.g. fusion, fission and resonant nuclear excitation. Two such “nuclear photonics” activities with the potential for profound societal impact will be reviewed in this presentation: the pursuit of laser-driven inertial confinement fusion at the National Ignition Facility and the development of laser-based, mono-energetic gamma-rays for isotope-specific detection, assay and imaging of materials.

  8. Retinal vessel enhancement based on the Gaussian function and image fusion

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian Dragoş

    2017-01-01

    The Gaussian function is essential in the construction of the Frangi and COSFIRE (combination of shifted filter responses) filters. The connection of broken vessels and an accurate extraction of the vascular structure are the main goals of this study. Thus, the outcomes of the Frangi and COSFIRE edge detection algorithms are fused using the Dempster-Shafer algorithm with the aim of improving detection and enhancing the retinal vascular structure. For objective results, the average diameters of the retinal vessels provided by the Frangi, COSFIRE and Dempster-Shafer fusion algorithms are measured. These experimental values are compared to ground truth values provided by manually segmented retinal images. We prove the superiority of the fusion algorithm in terms of image quality by using the figure-of-merit objective metric, which correlates the effects of all post-processing techniques.
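
    A compact way to see the fusion stage: convert each filter's normalized vesselness response into Dempster-Shafer masses over {vessel, background, either} and combine them with Dempster's rule. In the sketch below, scikit-image's sato filter stands in for COSFIRE (which has no scikit-image implementation), and the constant uncertainty mass is an assumption.

```python
import numpy as np
from skimage.filters import frangi, sato

def to_masses(response, uncertainty=0.2):
    """Normalize a vesselness response into DS masses
    m(vessel), m(background), m(either)."""
    r = (response - response.min()) / max(float(np.ptp(response)), 1e-12)
    return (1 - uncertainty) * r, (1 - uncertainty) * (1 - r), \
        np.full_like(r, uncertainty)

def dempster(m1, m2):
    """Dempster's rule of combination for the two-hypothesis frame."""
    v1, b1, u1 = m1
    v2, b2, u2 = m2
    norm = np.maximum(1.0 - (v1 * b2 + b1 * v2), 1e-12)  # 1 - conflict
    m_v = (v1 * v2 + v1 * u2 + u1 * v2) / norm
    m_b = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    return m_v, m_b

rng = np.random.default_rng(2)
img = rng.random((64, 64))                      # stand-in retinal image
mv, mb = dempster(to_masses(frangi(img)), to_masses(sato(img)))
vessel_mask = mv > mb
```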

  9. Application and evaluation of ISVR method in QuickBird image fusion

    NASA Astrophysics Data System (ADS)

    Cheng, Bo; Song, Xiaolu

    2014-05-01

    QuickBird satellite images are widely used in many fields, and applications have placed high requirements on the integration of the spatial and spectral information of the imagery. A fusion method for high resolution remote sensing images based on ISVR is identified in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of differing gains and errors among satellite sensors. After transformation from DN to radiance, the multi-spectral image's energy is used to simulate the panchromatic band. Linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method could significantly improve the quality of the fused image, especially in preserving spectral information: it maximizes the spectral information of the original multispectral images while maintaining abundant spatial information.
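
    The simulation step amounts to an ordinary least-squares fit of the panchromatic radiance as a linear combination of the multispectral radiance bands. A minimal sketch, assuming radiance-calibrated inputs and synthetic data:

```python
import numpy as np

def simulate_pan(ms_radiance, pan_radiance):
    """Fit a synthetic panchromatic band as a linear combination of
    multispectral radiance bands via least squares.
    ms_radiance: (bands, H, W); pan_radiance: (H, W)."""
    bands, h, w = ms_radiance.shape
    X = ms_radiance.reshape(bands, -1).T           # pixels x bands
    X = np.column_stack([X, np.ones(h * w)])       # intercept term
    coef, *_ = np.linalg.lstsq(X, pan_radiance.ravel(), rcond=None)
    return (X @ coef).reshape(h, w), coef

rng = np.random.default_rng(3)
ms = rng.random((4, 32, 32))                       # hypothetical MS radiance
pan = 0.3 * ms[0] + 0.4 * ms[1] + 0.2 * ms[2] + 0.1 * ms[3]
synthetic, weights = simulate_pan(ms, pan)         # synthetic ~ pan
```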

  10. Multicolor probe-based confocal laser endomicroscopy: a new world for in vivo and real-time cellular imaging

    NASA Astrophysics Data System (ADS)

    Vercauteren, Tom; Doussoux, François; Cazaux, Matthieu; Schmid, Guillaume; Linard, Nicolas; Durin, Marie-Amélie; Gharbi, Hédi; Lacombe, François

    2013-03-01

    Since its inception in the field of in vivo imaging, endomicroscopy through optical fiber bundles, or probe-based Confocal Laser Endomicroscopy (pCLE), has extensively proven the benefit of in situ and real-time examination of living tissues at the microscopic scale. By continuously increasing image quality, reducing invasiveness and improving system ergonomics, Mauna Kea Technologies has turned pCLE not only into an irreplaceable research instrument for small animal imaging, but also into an accurate clinical decision making tool with applications as diverse as gastrointestinal endoscopy, pulmonology and urology. The current implementation of pCLE relies on a single fluorescence spectral band, making different sources of in vivo information challenging to distinguish. Extending the pCLE approach to multi-color endomicroscopy therefore appears a natural next step. Coupling simultaneous multi-laser excitation with minimally invasive, microscopic resolution, thin and flexible optics allows the fusion of complementary and valuable biological information, thus paving the way to a combination of morphological and functional imaging. This paper will detail the architecture of a new system, Cellvizio Dual Band, capable of video rate in vivo and in situ multi-spectral fluorescence imaging with a microscopic resolution. In its standard configuration, the system simultaneously operates at 488 and 660 nm, where it automatically performs the necessary spectral, photometric and geometric calibrations to provide unambiguously co-registered images in real-time. The main hardware and software features, including calibration procedures and sub-micron registration algorithms, will be presented as well as a panorama of its current applications, illustrated with recent results in the field of pre-clinical imaging.

  11. A fast fusion scheme for infrared and visible light images in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhao, Chunhui; Guo, Yunting; Wang, Yulei

    2015-09-01

    Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden-target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the fusion weights by evaluating the information of each pixel and applies well to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform (NSCT) is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves a more effective result with much less time consumption and performs well in both subjective evaluation and objective indicators.
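
    To illustrate the weighting idea without the NSCT machinery, the toy sketch below uses local variance as a stand-in measure of per-pixel information and fuses the two images with the resulting weights; the paper applies such weights to NSCT subbands and defines pixel information differently.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_information(img, size=7):
    """Local variance as a simple proxy for per-pixel information."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)

def info_weighted_fusion(visible, infrared, size=7):
    """Weight each pixel by its estimated information content; here
    applied to the raw images rather than transform subbands."""
    wv = local_information(visible, size)
    wi = local_information(infrared, size)
    total = np.maximum(wv + wi, 1e-12)
    return (wv * visible + wi * infrared) / total

rng = np.random.default_rng(4)
vis = rng.random((64, 64))    # stand-in visible light image
ir = rng.random((64, 64))     # stand-in infrared image
fused = info_weighted_fusion(vis, ir)
```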

  12. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label fusion. We have compared the Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.

  13. Tools and Methods for the Registration and Fusion of Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline

    2010-01-01

    Tools and methods for image registration are reviewed, including methods used for the registration of remotely sensed data at NASA. Image fusion techniques and the challenges in registering remotely sensed data are discussed, and examples of image registration and image fusion are given.

  14. Fusion Imaging for Procedural Guidance.

    PubMed

    Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J

    2018-05-01

    The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally-invasive procedures. Fusion imaging is an exciting new technology that combines the strength of 2 imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  15. A joint FED watermarking system using spatial fusion for verifying the security issues of teleradiology.

    PubMed

    Viswanathan, P; Krishna, P Venkata

    2014-05-01

    Teleradiology allows transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED (fingerprint/encryption/dual) watermarking system is proposed to address these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to provide access to the outcomes of medical images with confidentiality, availability, integrity, and provenance. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage, such that the extraction, decryption, and verification can be applied independently. The dual watermarking system, introducing two different embedding schemes, one used for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the region of embedding using a threshold derived from the image in which to embed the encrypted patient data, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, combined with a fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.
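
    As a generic illustration of region-based spatial embedding with stream-cipher encryption (not the paper's actual FED construction), one might XOR the payload with a key-derived keystream and write the cipher bits into the least significant bits of a threshold-selected region:

```python
import numpy as np

def embed_patient_data(image, payload_bits, key, threshold=128):
    """Embed stream-cipher-encrypted bits into the LSBs of pixels in a
    threshold-selected region. A generic sketch only; the FED scheme
    (dual watermarks, fingerprint invariants) is richer."""
    rng = np.random.default_rng(key)           # keystream from symmetric key
    keystream = rng.integers(0, 2, size=len(payload_bits), dtype=np.uint8)
    cipher = np.bitwise_xor(np.asarray(payload_bits, np.uint8), keystream)

    marked = image.copy()
    region = np.argwhere(image >= threshold)   # embedding region by threshold
    if len(region) < len(cipher):
        raise ValueError("embedding region too small for payload")
    for bit, (r, c) in zip(cipher, region):
        marked[r, c] = (int(marked[r, c]) & 0xFE) | int(bit)
    return marked

img = np.full((16, 16), 200, dtype=np.uint8)   # toy image
wm = embed_patient_data(img, [1, 0, 1, 1, 0, 0, 1, 0], key=42)
```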

  16. A local approach for focussed Bayesian fusion

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael; Goussev, Igor; Beyerer, Jürgen

    2009-04-01

    Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion, which is separated from fixed modeling assumptions. Using the small world formalism, we argue why this procedure conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focussing the fusion process solely on local regions that are task relevant with a high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task-specific knowledge and prior information which we perform to build local models. In this paper, we prove the validity of this procedure using information theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner, in which several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. For the practical realization of distributed local Bayesian fusion, software agents are well suited. There is a natural analogy between the resulting agent-based architecture and criminal investigations in real life. We show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion. Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of reconnaissance, which highlights its high potential.

  17. Enhanced Deforestation Mapping in North Korea using Spatial-temporal Image Fusion Method and Phenology-based Index

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Lee, D.

    2017-12-01

    North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types in the forest are hillside farm, unstocked forest, natural forest, and plateau vegetation. Better classification of deforested-area types at high spatial resolution could provide essential information for decisions about forest management priorities and restoration of deforested areas. For mapping heterogeneous vegetation covers, phenology-based indices help overcome the reflectance confusion that occurs when using single-season images. Coarse spatial resolution images may be acquired with a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only the temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution. We classified three types of pixels between the base image and the target image: in the first type, only the reflectance changes, caused by phenology, and these pixels supply reflectance, shape and texture information; in the second type, both the reflectance and the spectral shape change in some bands, caused by phenology (e.g., rice paddy or farmland), and these pixels supply only shape and texture information; in the third type, the reflectance and spectrum change because the land cover type changes, and these pixels provide no information, because the land cover change in the target image cannot be known. A different prediction method is applied to each type of pixel. Results show that both STARFM and FSDAF predicted the second type of pixels and small patches with low accuracy. Classification using the fused images produced by the proposed method showed an overall classification accuracy of 89.38%, with a corresponding kappa coefficient of 0.87.
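
    One crude way to operationalize the three pixel types is to compare the spectral-shape change (e.g., spectral angle) between the base and target coarse images; the thresholds below are illustrative assumptions, not the paper's rules.

```python
import numpy as np

def classify_pixel_types(base, target, angle_tol=0.1):
    """Crudely split coarse-resolution pixels into three change types
    using the spectral angle between base and target spectra.
    base/target: (bands, H, W) reflectance; angle_tol (radians) and the
    factor of 3 below are illustrative assumptions."""
    dot = (base * target).sum(axis=0)
    norms = np.linalg.norm(base, axis=0) * np.linalg.norm(target, axis=0)
    angle = np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))

    types = np.ones(angle.shape, dtype=int)  # type 1: reflectance-only change
    types[angle > angle_tol] = 2             # type 2: spectral shape changed
    types[angle > 3 * angle_tol] = 3         # type 3: likely land-cover change
    return types

rng = np.random.default_rng(9)
base = rng.random((6, 32, 32))               # hypothetical coarse image pair
target = base * rng.uniform(0.8, 1.2, (6, 32, 32))
t = classify_pixel_types(base, target)
```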

  18. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades the number and quality of available remote sensing satellite sensors for Earth observation has grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, to say nothing of the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from in order to reach the target. FAST will ask for the available images, application parameters and desired information, and will process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques and provide an overview of the possible results, from which the user can choose the best. FAST will enable even inexperienced users to apply advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  19. Enhanced chemical weapon warning via sensor fusion

    NASA Astrophysics Data System (ADS)

    Flaherty, Michael; Pritchett, Daniel; Cothren, Brian; Schwaiger, James

    2011-05-01

    Torch Technologies Inc. is actively involved in chemical sensor networking and data fusion via multi-year efforts with Dugway Proving Ground (DPG) and the Defense Threat Reduction Agency (DTRA). The objective of these efforts is to develop innovative concepts and advanced algorithms that enhance our national Chemical Warfare (CW) test and warning capabilities via the fusion of traditional and non-traditional CW sensor data. Under Phase I, II, and III Small Business Innovative Research (SBIR) contracts with DPG, Torch developed the Advanced Chemical Release Evaluation System (ACRES) software to support non-real-time CW sensor data fusion. Under Phase I and II SBIRs with DTRA, in conjunction with the Edgewood Chemical Biological Center (ECBC), Torch is using the DPG ACRES CW sensor data fuser as a framework from which to develop the Cloud state Estimation in a Networked Sensor Environment (CENSE) data fusion system. Torch is currently developing CENSE to implement and test innovative real-time sensor-network-based data fusion concepts using CW and non-CW ancillary sensor data to improve CW warning and detection in tactical scenarios.

  20. A Software Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.

    2007-01-01

    Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and to improve the resolution of, and highlight, defects in the image. Since all waveform-based NDE methods share some similarity, a common software platform containing multiple signal and image processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach to developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at the Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given, along with a case study on terahertz data. The authors also invite scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.

  1. Co-fuse: a new class discovery analysis tool to identify and prioritize recurrent fusion genes from RNA-sequencing data.

    PubMed

    Paisitkriangkrai, Sakrapee; Quek, Kelly; Nievergall, Eva; Jabbour, Anissa; Zannettino, Andrew; Kok, Chung Hoow

    2018-06-07

    Recurrent oncogenic fusion genes play a critical role in the development of various cancers and diseases and provide, in some cases, excellent therapeutic targets. To date, analysis tools that can identify and compare recurrent fusion genes across multiple samples have not been available to researchers. To address this deficiency, we developed Co-occurrence Fusion (Co-fuse), a new, easy-to-use software tool that enables biologists to merge RNA-seq information, allowing them to identify recurrent fusion genes without the need for exhaustive data processing. Notably, Co-fuse is based on pattern mining and statistical analysis, which enables the identification of hidden patterns of recurrent fusion genes. In this report, we show that Co-fuse can be used to identify 2 distinct groups within a set of 49 leukemic cell lines based on their recurrent fusion genes: a multiple myeloma (MM) samples-enriched cluster and an acute myeloid leukemia (AML) samples-enriched cluster. Our experimental results further demonstrate that Co-fuse can identify known driver fusion genes (e.g., IGH-MYC, IGH-WHSC1) in MM, when compared to AML samples, indicating the potential of Co-fuse to aid the discovery of yet unknown driver fusion genes through cohort comparisons. Additionally, using a 272 primary glioma sample RNA-seq dataset, Co-fuse was able to validate recurrent fusion genes, further demonstrating the power of this analysis tool to identify recurrent fusion genes. Taken together, Co-fuse is a powerful new analysis tool that can be readily applied to large RNA-seq datasets and may lead to the discovery of new disease subgroups and potentially new driver genes for which targeted therapies could be developed. The Co-fuse R source code is publicly available at https://github.com/sakrapee/co-fuse .
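
    Co-fuse itself is an R tool, but the basic recurrence analysis it builds on can be sketched in a few lines: count how many samples each fusion gene appears in and keep those above a threshold. The sample names and call sets below are hypothetical; Co-fuse adds pattern mining and statistics on top of this.

```python
from collections import Counter
from itertools import chain

def recurrent_fusions(sample_fusions, min_samples=2):
    """Identify fusion genes that recur across samples.

    sample_fusions maps a sample name to the set of fusion genes
    detected in its RNA-seq data by an upstream fusion caller."""
    counts = Counter(chain.from_iterable(sample_fusions.values()))
    return {fusion: n for fusion, n in counts.items() if n >= min_samples}

samples = {                        # hypothetical call sets
    "MM_1": {"IGH-MYC", "IGH-WHSC1"},
    "MM_2": {"IGH-WHSC1"},
    "AML_1": {"RUNX1-RUNX1T1"},
}
print(recurrent_fusions(samples))  # {'IGH-WHSC1': 2}
```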

  2. Multi-atlas label fusion using hybrid of discriminative and generative classifiers for segmentation of cardiac MR images.

    PubMed

    Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang

    2015-08-01

    Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to the label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
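
    The hybrid rule combines a discriminative prior with a generative likelihood under Bayes' rule: posterior proportional to the RF class probability times the GMM patch likelihood. A minimal scikit-learn sketch on synthetic feature vectors (stand-ins for voxel patch features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

# Synthetic two-class "patch features": label 1 has shifted mean.
rng = np.random.default_rng(5)
X_train = np.vstack([rng.normal(0, 1, (200, 9)), rng.normal(2, 1, (200, 9))])
y_train = np.array([0] * 200 + [1] * 200)

# Discriminative prior: random forest class probabilities.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
# Generative likelihood: one GMM per label over patch features.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X_train[y_train == k])
        for k in (0, 1)]

X_test = rng.normal(1, 1, (10, 9))
prior = rf.predict_proba(X_test)                    # (10, 2)
loglik = np.column_stack([g.score_samples(X_test) for g in gmms])
posterior = prior * np.exp(loglik)                  # Bayes rule (unnormalized)
posterior /= posterior.sum(axis=1, keepdims=True)
labels = posterior.argmax(axis=1)
```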

  3. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. The computational PDI technique, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ∥ and ⊥ axes, is an improvement on conventional PDI. Originally, however, it obtains the output image by manually setting the weight coefficient to an identical constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, built from a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to capture target images at different angles, is used to verify the effect of the proposed algorithm. The results show that the output processed by our algorithm reveals more details of the flat target and has higher contrast compared to the original computational PDI.

  4. [A study on medical image fusion].

    PubMed

    Zhang, Er-hu; Bian, Zheng-zhong

    2002-09-01

    Five algorithms for medical image fusion are analyzed, along with their advantages and disadvantages. Four kinds of quantitative criteria for evaluating the quality of image fusion algorithms are proposed; these provide guidance for future research.

  5. MO-C-17A-10: Comparison of Dose Deformable Accumulation by Using Parallel and Serial Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Z; Li, M; Wong, J

    Purpose: The uncertainty of dose accumulation over multiple CT datasets with deformable fusion may have a significant impact on clinical decisions. In this study, we investigate the difference between two dose summation approaches involving deformable fusion. Methods: Five patients, four external beam and one brachytherapy (BT), were chosen for the study. The BT patient was treated with CT-based HDR. The CT image sets acquired in the image-guidance process (8-11 CTs/patient) were used to determine the dose delivered to the four external beam patients (prostate, pelvis, lung, and head and neck). For the HDR patient (cervix), five CT image sets and the corresponding BT plans were used. In total, 44 CT datasets and RT dose/plans were imported into the image fusion software MiM (6.0.4) for analysis. For each of the five clinical cases, the dose from each fraction was accumulated onto the primary CT dataset using both Parallel and Serial approaches. Dose-volume histograms (DVH) for the CTV and selected organs-at-risk (OAR) were generated. The D95(CTV), OAR(mean) and OAR(max) for the four external beam cases, and the D90(CTV) and the maximum doses to bladder and rectum for the BT case, were compared. Results: For the four external beam patients, the difference in D95(CTV) was <1.2% PD between the parallel and serial approaches. The differences in OAR(mean) and OAR(max) ranged from 0 to 3.7% and <1% PD, respectively. For the HDR patient, the dose difference for D90 was 11% PD, while the maximum doses to bladder and rectum differed by 11.5% and 23.3%, respectively. Conclusion: For external beam treatments, the parallel and serial approaches differ by <5%, probably because the tumor volume and OAR change little from fraction to fraction. For the brachytherapy case, a >10% dose difference between the two approaches was observed, as significant volume changes of the tumor and OAR occurred among treatment fractions.

  6. Value of C-Arm Cone Beam Computed Tomography Image Fusion in Maximizing the Versatility of Endovascular Robotics.

    PubMed

    Chinnadurai, Ponraj; Duran, Cassidy; Al-Jabbari, Odeaa; Abu Saleh, Walid K; Lumsden, Alan; Bismuth, Jean

    2016-01-01

    To report our initial experience and highlight the value of using intraoperative C-arm cone beam computed tomography (CT; DynaCT(®)) image fusion guidance along with steerable robotic endovascular catheter navigation to optimize vessel cannulation. Between May 2013 and January 2015, all patients who underwent endovascular procedures using the DynaCT image fusion technique along with the Hansen Magellan vascular robotic catheter were included in this study. As part of preoperative planning, relevant vessel landmarks were electronically marked in contrast-enhanced multi-slice computed tomography images and stored. At the beginning of the procedure, an intraoperative noncontrast C-arm cone beam CT (syngo DynaCT(®), Siemens Medical Solutions USA Inc.) was acquired in the hybrid suite. Preoperative images were then coregistered to intraoperative DynaCT images using aortic wall calcifications and bone landmarks. Stored landmarks were then overlaid on 2-dimensional (2D) live fluoroscopic images as virtual markers that are updated in real time with C-arm and table movements and image zoom. Vascular access and the robotic catheter (Magellan(®), Hansen Medical) were set up per standard. Vessel cannulation was performed with the robotic catheter based on the electronic virtual markers on live fluoroscopy. The impact of 3-dimensional (3D) image fusion guidance on robotic vessel cannulation was evaluated retrospectively by assessing quantitative parameters, such as the number of angiograms acquired before vessel cannulation, and qualitative parameters, such as the accuracy of vessel ostium and centerline markers. All 17 vessels attempted in 14 patients were cannulated successfully using the robotic catheter and image fusion guidance. Median vessel diameter at origin was 5.4 mm (range, 2.3-13 mm), and 12 of 17 (70.6%) vessels had a calcified and/or stenosed origin from the parent vessel. Nine of 17 vessels (52.9%) were cannulated without any contrast injection. The median number of angiograms required before cannulation was 0 (range, 0-2). On qualitative assessment, 14 of 15 vessels (93.3%) had grade = 1 accuracy (guidewire inside the virtual ostial marker), and 14 of 14 vessels had grade = 1 accuracy (virtual centerlines that matched the actual vessel trajectory during cannulation). In this small series, the experience of using DynaCT image fusion guidance together with a steerable endovascular robotic catheter indicates that such image fusion strategies can enhance intraoperative 2D fluoroscopy by bringing in preoperative 3D information about vascular stenosis and/or calcification, angulation, and takeoff from the main vessel, thereby facilitating vessel cannulation. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Precision analysis of a quantitative CT liver surface nodularity score.

    PubMed

    Smith, Andrew; Varney, Elliot; Zand, Kevin; Lewis, Tara; Sirous, Reza; York, James; Florez, Edward; Abou Elkassem, Asser; Howard-Claudio, Candace M; Roda, Manohar; Parker, Ellen; Scortegagna, Eduardo; Joyner, David; Sandlin, David; Newsome, Ashley; Brewster, Parker; Lirette, Seth T; Griswold, Michael

    2018-04-26

    To evaluate the precision of a software-based liver surface nodularity (LSN) score derived from CT images. An anthropomorphic CT phantom was constructed with a simulated liver containing smooth and nodular segments at the surface and simulated visceral and subcutaneous fat components. The phantom was scanned multiple times on a single CT scanner with adjustment of image acquisition and reconstruction parameters (N = 34) and on 22 different CT scanners from 4 manufacturers at 12 imaging centers. LSN scores were obtained using a software-based method. Repeatability and reproducibility were evaluated by intraclass correlation (ICC) and coefficient of variation. Using abdominal CT images from 68 patients with various stages of chronic liver disease, inter-observer agreement and test-retest repeatability among 12 readers assessing LSN by software- vs. visual-based scoring methods were evaluated by ICC. There was excellent repeatability of LSN scores (ICC: 0.79-0.99) using the CT phantom and routine image acquisition and reconstruction parameters (kVp 100-140, mA 200-400 and auto-mA, section thickness 1.25-5.0 mm, field of view 35-50 cm, and smooth or standard kernels). There was excellent reproducibility (smooth ICC: 0.97; 95% CI 0.95, 0.99; CV: 7%; nodular ICC: 0.94; 95% CI 0.89, 0.97; CV: 8%) for LSN scores derived from CT images from 22 different scanners. Inter-observer agreement for the software-based LSN scoring method was excellent (ICC: 0.84; 95% CI 0.79, 0.88; CV: 28%) vs. good for the visual-based method (ICC: 0.61; 95% CI 0.51, 0.69; CV: 43%). Test-retest repeatability for the software-based LSN scoring method was excellent (ICC: 0.82; 95% CI 0.79, 0.84; CV: 12%). The software-based LSN score is a quantitative CT imaging biomarker with excellent repeatability, reproducibility, inter-observer agreement, and test-retest repeatability.

  8. A novel imaging method for photonic crystal fiber fusion splicer

    NASA Astrophysics Data System (ADS)

    Bi, Weihong; Fu, Guangwei; Guo, Xuan

    2007-01-01

    Because the structure of Photonic Crystal Fiber (PCF) is very complex, it is very difficult for a traditional fiber fusion splicer to obtain optical axial information of the PCF. Therefore, a brand-new optical imaging method is needed to obtain cross-section information of the PCF. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD) image sensor as the imaging element; the thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, and this high-performance device can provide high contrast and high resolution in low-light-level surveillance imaging. In order to realize precise focusing of the image, an ultra-high-precision stepping motor is used to adjust the position of the imaging lens. In this way, clear cross-section information of the PCF can be obtained, and further analysis of this information can be realized with digital image processing technology. This cross-section information may be used to distinguish different sorts of PCF, to compute parameters such as the size of the PCF air holes and the cladding structure of the PCF, and to provide the necessary analysis data for the PCF fixation, adjustment, regulation, fusion and cutting systems.

  9. ERS-2 SAR and IRS-1C LISS III data fusion: A PCA approach to improve remote sensing based geological interpretation

    NASA Astrophysics Data System (ADS)

    Pal, S. K.; Majumdar, T. J.; Bhattacharya, Amit K.

    Fusion of optical and synthetic aperture radar data has been attempted in the present study for mapping of various lithologic units over a part of the Singhbhum Shear Zone (SSZ) and its surroundings. ERS-2 SAR data over the study area were enhanced using a Fast Fourier Transform (FFT) based filtering approach, and also using the Frost filtering technique. Both enhanced SAR images were then separately fused with the histogram-equalized IRS-1C LISS III image using the Principal Component Analysis (PCA) technique. The Feature-oriented Principal Components Selection (FPCS) technique was then applied to generate False Color Composite (FCC) images, from which corresponding geological maps were prepared. Finally, GIS techniques were successfully used for change detection analysis of the lithological interpretation between the published geological map and the fusion-based geological maps. In general, there is good agreement between these maps over a large portion of the study area. Based on the change detection studies, a few areas could be identified which need attention for further detailed ground-based geological studies.
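
    The PCA step of such a fusion can be sketched directly: project the multispectral bands onto their principal components, substitute the first component with the enhanced SAR image (matched here in mean and standard deviation rather than by full histogram matching, for brevity), and invert the transform. The data below are random stand-ins.

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA-based fusion: replace PC1 of the multispectral stack with a
    statistics-matched high-resolution image, then invert.
    ms: (bands, H, W); pan: (H, W), co-registered on the same grid."""
    bands, h, w = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / (h * w - 1)
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues ascending
    vecs = vecs[:, ::-1]                   # reorder so PC1 comes first
    pcs = vecs.T @ Xc
    p = pan.ravel().astype(float)
    p = (p - p.mean()) / max(p.std(), 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p                             # substitute PC1
    return (vecs @ pcs + mean).reshape(bands, h, w)

rng = np.random.default_rng(6)
liss = rng.random((3, 64, 64))             # stand-in for LISS III bands
sar = rng.random((64, 64))                 # stand-in for enhanced SAR
fused = pca_fusion(liss, sar)
```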

  10. Registration and Fusion of Multiple Source Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline

    2004-01-01

    Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.

  11. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user-interface is designed such that users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  12. Scientific Use Cases for the Virtual Atomic and Molecular Data Center

    NASA Astrophysics Data System (ADS)

    Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.

    2014-12-01

    The VAMDC Consortium is a worldwide consortium which federates interoperable Atomic and Molecular databases through an e-science infrastructure. The contained data are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases relating to the VAMDC e-infrastructure. These cover very different applications such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA) or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) the access of VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and database from the APIS service (Auroral Planetary Imaging and Spectroscopy); (v) combination of heterogeneous data for the application to the interstellar medium from the SPECTCOL tool.

  13. An improved artifact removal in exposure fusion with local linear constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Yu, Mali

    2018-04-01

    In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment during artifact removal. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) in each window is extracted, based on the intensities of the intermediate and reference images and on the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly fused into a single HDR-like image. Experimental results demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
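
    The local linear adjustment can be sketched with box-filter statistics: in each window, fit t ≈ a·r + b from the reference image r to the intermediate image t, much like a guided filter. The estimator below is a stand-in under that assumption, not the paper's exact IMF extraction.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_linear_imf(intermediate, reference, size=9, eps=1e-4):
    """Per-window linear intensity mapping t = a*r + b, estimated from
    box-filtered means and covariances; returns the remapped reference,
    i.e. one slice of the reconstructed target stack."""
    mean_r = uniform_filter(reference, size)
    mean_t = uniform_filter(intermediate, size)
    corr = uniform_filter(reference * intermediate, size)
    var_r = uniform_filter(reference * reference, size) - mean_r ** 2
    a = (corr - mean_r * mean_t) / (var_r + eps)
    b = mean_t - a * mean_r
    return a * reference + b

rng = np.random.default_rng(7)
ref = rng.random((64, 64))               # reference exposure
mid = np.clip(0.7 * ref + 0.1, 0, 1)     # hypothetical deghosted exposure
target_slice = local_linear_imf(mid, ref)
```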

  14. Geometric modeling of hepatic arteries in 3D ultrasound with unsupervised MRA fusion during liver interventions.

    PubMed

    Gérard, Maxime; Michaud, François; Bigot, Alexandre; Tang, An; Soulez, Gilles; Kadoury, Samuel

    2017-06-01

    Modulating the chemotherapy injection rate with regard to blood flow velocities in the tumor-feeding arteries during intra-arterial therapies may help improve liver tumor targeting while decreasing systemic exposure. These velocities can be obtained noninvasively using Doppler ultrasound (US). However, small vessels situated in the liver are difficult to identify and follow in US. We propose a multimodal fusion approach that non-rigidly registers a 3D geometric mesh model of the hepatic arteries obtained from preoperative MR angiography (MRA) acquisitions with intra-operative 3D US imaging. The proposed fusion tool integrates 3 imaging modalities: an arterial MRA, a portal phase MRA and an intra-operative 3D US. Preoperatively, the arterial phase MRA is used to generate a 3D model of the hepatic arteries, which is then non-rigidly co-registered with the portal phase MRA. Once the intra-operative 3D US is acquired, we register it with the portal MRA using a vessel-based rigid initialization followed by a non-rigid registration using an image-based metric based on linear correlation of linear combination. Using the combined non-rigid transformation matrices, the 3D mesh model is fused with the 3D US. 3D US and multi-phase MRA images acquired from 10 porcine models were used to test the performance of the proposed fusion tool. Unimodal registration of the MRA phases yielded a target registration error (TRE) of [Formula: see text] mm. Initial rigid alignment of the portal MRA and 3D US yielded a mean TRE of [Formula: see text] mm, which was significantly reduced to [Formula: see text] mm ([Formula: see text]) after affine image-based registration. The following deformable registration step allowed for further decrease of the mean TRE to [Formula: see text] mm. The proposed tool could facilitate visualization and localization of these vessels when using 3D US intra-operatively for either intravascular or percutaneous interventions to avoid vessel perforation.

  15. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, the Principal Component Analysis (PCA) method is adopted to reconstruct the occluded region of the image. After that, a replacement strategy is applied, reconstructing the image by replacing the occluded region with the corresponding region of the best-matched image in the training set; the Pyramid Weber Local Descriptor (PWLD) feature is then extracted. At last, the outputs of the SVM are fitted to the probabilities of the target class by using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, and each block is weighted adaptively by information entropy; Chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion class. Finally, fusion at the decision level is employed for the data fusion of the global and local features, based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.

  16. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable for both types of datasets is also provided. The work points toward the development of a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Neuronavigation using three-dimensional proton magnetic resonance spectroscopy data.

    PubMed

    Kanberoglu, Berkay; Moore, Nina Z; Frakes, David; Karam, Lina J; Debbins, Josef P; Preul, Mark C

    2014-01-01

    Applications in clinical medicine can benefit from fusion of spectroscopy data with anatomical imagery. For example, new 3-dimensional (3D) spectroscopy techniques allow for improved correlation of metabolite profiles with specific regions of interest in anatomical tumor images, which can be useful in characterizing and treating heterogeneous tumors that appear structurally homogeneous. We sought to develop a clinical workflow and uniquely capable custom software tool to integrate advanced 3-tesla 3D proton magnetic resonance spectroscopic imaging ((1)H-MRSI) into industry standard image-guided neuronavigation systems, especially for use in brain tumor surgery. (1)H-MRSI spectra from preoperative scanning on 15 patients with recurrent or newly diagnosed meningiomas were processed and analyzed, and specific voxels were selected based on their chemical contents. 3D neuronavigation overlays were then generated and applied to anatomical image data in the operating room. The proposed 3D methods fully account for scanner calibration and comprise tools that we have now made publicly available. The new methods were quantitatively validated through a phantom study and applied successfully to mitigate biopsy uncertainty in a clinical study of meningiomas. The proposed methods improve upon the current state of the art in neuronavigation through the use of detailed 3D (1)H-MRSI data. Specifically, 3D MRSI-based overlays provide comprehensive, quantitative visual cues and location information during neurosurgery, enabling a progressive new form of online spectroscopy-guided neuronavigation. © 2014 S. Karger AG, Basel.

  18. Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses

    NASA Astrophysics Data System (ADS)

    Shahbazian, Elisa

    1995-09-01

    Loral Canada completed (May 1995) a Department of National Defense (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute-measurement-oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture would lead to a higher-performance fusion system. However, there are a number of system and fusion-architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g., variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft, in the context of the mission requirements and environmental conditions.
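
    Issues (3) and (4) above can be made concrete with a small sketch of central-level fusion using variance- and priority-weighted estimates: each sensor's report is weighted by the inverse of its stated variance, scaled by a source priority factor that down-weights, for example, data-link tracks. All numbers here are illustrative assumptions, not values from the study.

        import numpy as np

        def fuse_estimates(estimates, variances, priority):
            """Weighted fusion of scalar estimates of one target parameter."""
            w = np.asarray(priority) / np.asarray(variances)  # quality-aware weights
            return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

        # Local radar (low variance, full priority) vs. a remote data-link track:
        fused = fuse_estimates([10.2, 11.0], [0.5, 2.0], [1.0, 0.6])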

  19. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    PubMed

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

    Diffusion and perfusion magnetic resonance (MR) images can provide functional information about a tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
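
    The histogram-based fuzzy fusion idea can be sketched as follows: each modality's intensities are mapped to a [0, 1] tumour-membership value through a sigmoid centred at an intensity percentile (a stand-in assumption for the paper's histogram-derived fuzzy models), the three membership maps are averaged, and the result is thresholded. The Dice coefficient used in the evaluation above is included for completeness.

        import numpy as np

        def fuzzy_membership(img, pct=90):
            c = np.percentile(img, pct)                 # sigmoid centre from the histogram
            w = 0.1 * (img.max() - img.min()) + 1e-6    # assumed transition width
            return 1.0 / (1.0 + np.exp(-(img - c) / w)) # membership in [0, 1]

        def fuse_and_segment(adc, fa, rcbv, thresh=0.5):
            fused = (fuzzy_membership(adc) + fuzzy_membership(fa)
                     + fuzzy_membership(rcbv)) / 3.0    # fuse the three fuzzy spaces
            return fused > thresh                       # candidate tumour region

        def dice(a, b):
            """Dice's similarity coefficient between two binary masks."""
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())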

  20. Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.

    PubMed

    Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J

    2018-05-21

    We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
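
    An illustrative sketch (not the authors' algorithm) of the auto-segmentation idea: the bright copper-sulfate wells are isolated by thresholding and connected-component labelling, and each marker's centre of mass and the pairwise inter-marker spacings are reported, mirroring the repeatability metrics above. The intensity-threshold fraction is an assumption.

        import numpy as np
        from scipy import ndimage

        def find_fiducials(volume, frac=0.8):
            mask = volume > frac * volume.max()   # bright fluid-filled wells
            labels, n = ndimage.label(mask)       # one connected component per well
            return np.array(ndimage.center_of_mass(volume, labels, range(1, n + 1)))

        def inter_marker_spacings(centers):
            """Pairwise distances between marker centres (mm for mm-spaced voxels)."""
            d = centers[:, None, :] - centers[None, :, :]
            return np.sqrt((d ** 2).sum(axis=-1))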
