Sample records for image fusion algorithm

  1. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

  2. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved and a new algorithm for medical image fusion is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm effectively retains the detail information of the original images and enhances their edge and texture features, and that it outperforms the conventional fusion algorithm based on the wavelet transform.
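
    The general wavelet fusion scheme this record improves on can be illustrated with a minimal single-level Haar sketch in Python/NumPy. A max-abs detail rule stands in for the paper's regional-edge-intensity rule, and the function names are illustrative, not the authors' code:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: low-pass band plus three detail bands."""
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    ll = (p00 + p10 + p01 + p11) / 4.0   # low-frequency approximation
    lh = (p00 - p10 + p01 - p11) / 4.0   # horizontal detail
    hl = (p00 + p10 - p01 - p11) / 4.0   # vertical detail
    hh = (p00 - p10 - p01 + p11) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_wavelet(img_a, img_b):
    """Average the low-frequency bands; keep the larger-magnitude
    high-frequency coefficient from either image (max-abs rule)."""
    ca, cb = haar_dwt2(img_a), haar_dwt2(img_b)
    ll = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return haar_idwt2(ll, *details)
```

    The improved algorithm in the record replaces the max-abs detail rule with an adaptive choice driven by regional edge intensity, and steers the low-frequency choice by image edges.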

  3. Benchmarking of data fusion algorithms in support of earth observation based Antarctic wildlife monitoring

    NASA Astrophysics Data System (ADS)

    Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.

    2016-03-01

    Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms: Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion, to resolution-enhance a series of single-date QuickBird-2 and WorldView-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performance. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.

  4. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    To address the fact that available fusion methods cannot self-adaptively adjust their fusion rules to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. It then designs an objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows:
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
    • This text proposes the model operator and the observed operator as the fusion scheme for RS images based on GSDA.
    The proposed algorithm opens a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  5. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performance was verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images.
    [1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854.
    [2] Zhang, Yun. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661.
    [3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
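
    In its "fast" additive form, IHS pansharpening of the kind referenced above reduces to replacing the intensity component of the upsampled MS image with the Pan band. A minimal NumPy sketch, assuming a 3-band MS image already co-registered and resampled to the Pan grid (band-mean intensity is a common simplification):

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Fast IHS pansharpening: inject Pan spatial detail into the MS image
    by swapping its intensity component for the Pan band.
    `ms` is (H, W, 3), `pan` is (H, W), both co-registered."""
    intensity = ms.mean(axis=2)       # simplified intensity component
    detail = pan - intensity          # spatial detail missing from MS
    return ms + detail[:, :, None]    # add the same detail to every band
```

    Because every band receives the same additive detail, the method is spatially sharp but can distort spectra — consistent with the record's finding that IHS wins on spatial quality while wavelet fusion wins on spectral preservation.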

  6. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. An Altera Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. The VHDL language and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
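
    The three pixel-level rules compared on the platform are simple enough to state directly. A NumPy sketch of their software equivalents (the FPGA implementation itself is in VHDL and is not given in the record):

```python
import numpy as np

def fuse_weighted(vis, ir, w=0.5):
    """Gray-scale weighted averaging of co-registered visible/IR frames."""
    return w * vis + (1.0 - w) * ir

def fuse_max(vis, ir):
    """Maximum-selection rule: keep the brighter pixel of the two."""
    return np.maximum(vis, ir)

def fuse_min(vis, ir):
    """Minimum-selection rule: keep the darker pixel of the two."""
    return np.minimum(vis, ir)
```

    Weighted averaging smooths noise but lowers contrast, while the selection rules preserve whichever sensor responds most strongly (or weakly) at each pixel — which is why the applicable range of each rule depends on the scene.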

  7. Advances in multi-sensor data fusion: algorithms and applications.

    PubMed

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering target tracking, are described. Both the advantages and the limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.

  8. Research on HDR image fusion algorithm based on Laplace pyramid weight transform with extreme low-light CMOS

    NASA Astrophysics Data System (ADS)

    Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan

    2015-10-01

    Extreme-low-light CMOS sensors have been widely applied in the field of night vision as a new type of solid-state image sensor. But if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the highlight and the low-light regions. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid is investigated. The overall gray value and the contrast of a low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on regional average gradient. The remaining layers, which represent the edge feature information of the target, are fused with a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results obtained with four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies. Three objective evaluation parameters, information entropy, average gradient, and standard deviation, are used for further analysis of the fusion results. Experiments in different low-illumination environments show that the proposed algorithm can rapidly achieve a wide dynamic range while keeping high entropy, suggesting further application prospects for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
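
    The pyramid-with-regional-energy scheme described above can be sketched compactly. This is a simplified stand-in: a same-resolution difference-of-blur stack replaces a true downsampled Laplacian pyramid, a box blur replaces the Gaussian kernel, and the detail levels use the regional-energy rule (the paper's separate average-gradient rule for the top layer is omitted):

```python
import numpy as np

def blur(img):
    """3x3 box blur with edge padding (stand-in for a Gaussian kernel)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def laplacian_stack(img, levels=3):
    """Each level stores img - blur(img); the blurred base goes last.
    Summing the stack reconstructs the input exactly."""
    stack = []
    for _ in range(levels):
        low = blur(img)
        stack.append(img - low)
        img = low
    stack.append(img)
    return stack

def regional_energy(band):
    """Local energy: blurred squared coefficients."""
    return blur(band * band)

def fuse_stacks(img_a, img_b, levels=3):
    sa, sb = laplacian_stack(img_a, levels), laplacian_stack(img_b, levels)
    fused = [np.where(regional_energy(a) >= regional_energy(b), a, b)
             for a, b in zip(sa[:-1], sb[:-1])]          # detail: max energy
    fused.append((sa[-1] + sb[-1]) / 2.0)                # base: average
    return sum(fused)
```

    For a long/short exposure pair, the per-level energy test keeps whichever exposure carries stronger local structure, which is the mechanism behind the wide dynamic range the record reports.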

  9. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    …target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image…fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The…moving target detection and classification. Subject terms: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera

  10. A Fusion Algorithm for GFP Image and Phase Contrast Image of Arabidopsis Cell Based on SFL-Contourlet Transform

    PubMed Central

    Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling

    2013-01-01

    A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization contourlet transform (SFL-CT), this algorithm uses different fusion strategies for different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively balance color background against gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716

  11. [A study on medical image fusion].

    PubMed

    Zhang, Er-hu; Bian, Zheng-zhong

    2002-09-01

    Five algorithms for medical image fusion are analyzed, along with their advantages and disadvantages. Four kinds of quantitative evaluation criteria for the quality of image fusion algorithms are proposed, which will give some guidance for future research.

  12. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular disease and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel measure of the dispersion of the injected contrast material through the vessels. The proposed fusion scheme contains different fusion methods for high- and low-frequency content, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined from the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, a maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.

  13. Multimodal medical image fusion by combining gradient minimization smoothing filter and non-subsampled directional filter bank

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang

    2018-04-01

    A new algorithm is proposed for medical image fusion in this paper, which combines a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are fused by pulse-coupled neural networks (PCNN) respectively. The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.

  14. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into blocks of the original size using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

  15. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently, and each has advantages and disadvantages relative to the others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data fusion performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.

  16. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
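
    The regional variance estimation applied to the high-frequency components can be sketched as a windowed second-moment computation followed by a per-pixel selection. This is a simplified stand-in for one step of the method only (the matrix completion and robust PCA parts are omitted), and the function names are illustrative:

```python
import numpy as np

def local_variance(img, r=1):
    """Variance in a (2r+1)^2 window via box filtering of the moments."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    win = lambda a: sum(a[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(k) for j in range(k)) / k**2
    m = win(p)                 # windowed mean
    return win(p * p) - m * m  # E[x^2] - E[x]^2

def fuse_by_variance(a, b):
    """Keep, at each pixel, the coefficient from the image whose
    neighbourhood has the higher variance (sharper local structure)."""
    return np.where(local_variance(a) >= local_variance(b), a, b)
```

    Higher local variance is a crude proxy for being in focus, which is why this rule works for the multifocus case the record describes.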

  17. Nighttime images fusion based on Laplacian pyramid

    NASA Astrophysics Data System (ADS)

    Wu, Cong; Zhan, Jinhao; Jin, Jicheng

    2018-02-01

    This paper describes the methods of average weighted fusion, image pyramid fusion, and wavelet transform fusion, and applies them to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, the effect of the different fusion methods can be evaluated. Experiments showed that the Laplacian pyramid image fusion algorithm is well suited to nighttime image fusion: it reduces halos while preserving image details.
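
    The two evaluation measures used above, information entropy and cross entropy, can be computed from gray-level histograms. A sketch for 8-bit-style gray images (the `eps` guard against empty bins is an implementation convenience, not part of the definition):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level distribution."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cross_entropy(src, fused, bins=256, eps=1e-12):
    """Cross entropy between source and fused gray-level distributions."""
    ps, _ = np.histogram(src, bins=bins, range=(0, 256))
    pf, _ = np.histogram(fused, bins=bins, range=(0, 256))
    ps = ps / ps.sum()
    pf = pf / pf.sum()
    return float(-(ps * np.log2(pf + eps)).sum())
```

    Higher entropy in the fused image indicates more information content; lower cross entropy against each source indicates the fused distribution stays close to the sources.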

  18. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    PubMed Central

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229

  19. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.

    PubMed

    Bai, Xiangzhi

    2015-07-15

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.

  20. Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng

    2017-07-01

    A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shapes of SPOT-6 are more nearly rectangular than those of GF-1 in the blue, green, red, and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse-fused image has an entropy value extremely close to that of the original image (1.849 versus 1.852) and better color quality. In addition, an object-oriented classification toolset (ENVI EX) was used to identify green land in order to compare the self-fused SPOT-6 image with the SPOT-6/GF-1 inter-fused image, both based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.

  1. A fast and automatic fusion algorithm for unregistered multi-exposure image sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Yu, Feihong

    2014-09-01

    The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile applications. In this paper, we select the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
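
    The match-pruning step (RANSAC applied after ORB descriptor matching) can be illustrated for the simplest motion model, a pure translation, where the minimal sample is a single correspondence. This is a generic sketch, not the authors' improved variant; `ransac_translation` is a hypothetical name:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched keypoints with RANSAC,
    rejecting incorrect matches.  `src` and `dst` are (N, 2) arrays of
    putative correspondences (e.g. from ORB descriptor matching)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]                   # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol                   # matches consistent with t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set for a less noisy estimate
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

    For exposure stacks from a hand-held camera, a similarity or homography model would normally replace the translation; the sample-score-refit structure stays the same.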

  2. Integrating image quality in 2nu-SVM biometric match score fusion.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2007-10-01

    This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.

  3. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    NASA Astrophysics Data System (ADS)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information or the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. The dictionary learning algorithm needs no prior knowledge about any group structure of the dictionary: by using the characteristics of the dictionary in expressing the signal, it can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgments on the structure information when the images are merged, so the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.

  4. Segment fusion of ToF-SIMS images.

    PubMed

    Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A

    2016-06-08

    The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the mechanism by which silver nanoparticles are taken up into plant tissue, giving new understanding of the uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution of the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.

  5. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. Interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images are fused in gray scale in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray result with appropriate pseudo-colors. Experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  6. Poster — Thur Eve — 09: Evaluation of electrical impedance and computed tomography fusion algorithms using an anthropomorphic phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff

    2014-08-15

    Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of a study that compared the fusion of EIT with CT using three different image fusion algorithms: weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing their pixels with the pixels of the EIT image. The three algorithms were applied to a CT and an EIT image of an anthropomorphic phantom constructed from five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yielded more consistent and optimal fusion performance than weighted averaging.
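The weighted-averaging and ROI-indexing rules described above are simple to state. The following numpy sketch is illustrative only, not the authors' code; the array names and the toy 4x4 "phantom" are invented for the example:

```python
import numpy as np

def weighted_average_fusion(ct, eit, alpha=0.5):
    """Pixel-wise weighted average of two co-registered images."""
    return alpha * ct + (1.0 - alpha) * eit

def roi_index_fusion(ct, eit, roi_mask):
    """ROI indexing: keep the CT image, but replace the pixels inside the
    segmented region of interest with the corresponding EIT pixels."""
    fused = ct.copy()
    fused[roi_mask] = eit[roi_mask]
    return fused

# Toy 4x4 example: a 2x2 "conductive target" in the centre.
ct = np.zeros((4, 4))
eit = np.full((4, 4), 10.0)
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True

avg_img = weighted_average_fusion(ct, eit)
roi_img = roi_index_fusion(ct, eit, roi)
```

The ROI-indexing variant preserves CT anatomy outside the segmented region exactly, which is consistent with its higher SSIM in the study.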

  7. A novel framework of tissue membrane systems for image fusion.

    PubMed

    Zhang, Zulin; Yi, Xinzhong; Peng, Hong

    2014-01-01

    This paper proposes a tissue membrane system-based framework for the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied in comparison with several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.

  8. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visible image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an output unit. Image registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to reduce the computational complexity of color transfer. The mapping between the standard lookup table and the improved color lookup table is simple and is computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visible images are realized by this system. Experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively with clear background details. Observers using this system should be able to interpret the imagery better and faster, thereby improving situational awareness and reducing target detection time.
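The abstract does not publish the lookup-table construction, but a statistics-based color transfer (Reinhard-style mean/standard-deviation matching) reduces to an affine map per channel, which can be precomputed once per fixed scene as a 256-entry table. The sketch below is an assumed reconstruction of that idea; all names and the random test images are invented:

```python
import numpy as np

def build_transfer_lut(src_channel, ref_channel):
    """Precompute a 256-entry lookup table matching the mean and standard
    deviation of a source channel to those of a daytime reference channel.
    For a fixed scene the table is computed once and then applied per
    frame by a single indexing operation."""
    s_mu, s_sd = src_channel.mean(), src_channel.std()
    r_mu, r_sd = ref_channel.mean(), ref_channel.std()
    levels = np.arange(256, dtype=np.float64)
    lut = (levels - s_mu) * (r_sd / (s_sd + 1e-8)) + r_mu
    return np.clip(lut, 0, 255).astype(np.uint8)

# Dark fused frame, brighter daytime reference (synthetic data).
src = np.random.default_rng(0).integers(0, 64, (32, 32), dtype=np.uint8)
ref = np.random.default_rng(1).integers(128, 255, (32, 32), dtype=np.uint8)
lut = build_transfer_lut(src, ref)
recolored = lut[src]          # per-frame application is one indexing op
```

Replacing per-pixel statistics computation with table indexing is what makes a DSP real-time implementation plausible.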

  9. Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach

    NASA Astrophysics Data System (ADS)

    Jazaeri, Amin

    High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design, techniques such as image fusion must be applied to increase them. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times lower than that of the corresponding multispectral image. Regardless of the fusion method used, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome.
The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.

  10. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring the gradients of the corresponding visible image to the result. Gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by adding intensity from the visible image to balance the intensity between the infrared and visible images. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem to a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.

  11. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multi-modality optic nerve head image fusion approach has been designed. The approach has been applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, providing visualization of fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control point detection algorithm for multi-sensor images, which exploits retinal vasculature and bifurcation features by identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm: to maximize the objective function (Mutual-Pixel-Count), an iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progression, and position surgical tools. The algorithm can easily be extended to 3-D eye, brain, or body image registration and fusion in humans or animals.

  12. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.

    PubMed

    Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing

    2012-04-01

    This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates information about the spatial context in a novel fuzzy way to enhance the changed information and reduce the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and attains better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than existing approaches.
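The two difference operators combined above are standard in SAR change detection and easy to sketch. The following numpy fragment is an illustrative reconstruction (the window size and test arrays are invented, not taken from the paper):

```python
import numpy as np

def log_ratio(x1, x2, eps=1e-6):
    """Log-ratio difference image: compresses multiplicative speckle."""
    return np.abs(np.log((x2 + eps) / (x1 + eps)))

def mean_ratio(x1, x2, k=3, eps=1e-6):
    """Mean-ratio difference image computed from k x k local means."""
    def local_mean(x):
        p = np.pad(x, k // 2, mode='edge')
        h, w = x.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(k) for j in range(k)) / (k * k)
    m1, m2 = local_mean(x1), local_mean(x2)
    return 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + eps)

# Unchanged background with one changed block.
before = np.ones((8, 8))
after = before.copy()
after[2:5, 2:5] = 4.0
d_log = log_ratio(before, after)
d_mean = mean_ratio(before, after)
```

Both operators respond strongly inside the changed block and stay near zero elsewhere; the paper's wavelet fusion then combines their complementary low- and high-frequency behaviour.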

  13. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm

    PubMed Central

    Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao

    2017-01-01

    To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on a fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the accuracy of classification based on this method was significantly improved. PMID:28640181
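The GLCM base of the method above can be sketched directly; the direction-measure weight factor itself is not specified in the abstract, so only the co-occurrence matrix and one classic feature (contrast) are shown here, with invented toy textures:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized gray level co-occurrence matrix for displacement (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    s = m.sum()
    return m / s if s else m

def contrast(p):
    """GLCM contrast feature: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

flat = np.zeros((8, 8), dtype=int)                 # homogeneous texture
checker = np.indices((8, 8)).sum(axis=0) % 2       # strongly directional texture
c_flat = contrast(glcm(flat, 1, 0, levels=2))
c_checker = contrast(glcm(checker, 1, 0, levels=2))
```

Computing the GLCM at several displacements (0°, 45°, 90°, 135°) and weighting the resulting features by a directionality statistic is the general shape of the fusion the paper describes.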

  14. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.

  15. Investigations of image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches found in the literature that do not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
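A minimal instance of the multiscale-decomposition fusion family discussed above is a one-level Haar decomposition with the classic "average the approximation, choose the larger-magnitude detail" rule. This sketch is a generic illustration of that family, not the thesis's region-based algorithm:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform of an even-sized image."""
    a = (x[0::2] + x[1::2]) / 2          # row lowpass
    d = (x[0::2] - x[1::2]) / 2          # row highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    hl = (a[:, 0::2] - a[:, 1::2]) / 2
    lh = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (hl, lh, hh)

def ihaar2(ll, bands):
    """Exact inverse of haar2."""
    hl, lh, hh = bands
    h2, w2 = ll.shape
    a = np.empty((h2, 2 * w2)); d = np.empty((h2, 2 * w2))
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    x = np.empty((2 * h2, 2 * w2))
    x[0::2], x[1::2] = a + d, a - d
    return x

def wavelet_fuse(img1, img2):
    """Average the approximation band; keep the larger-magnitude detail
    coefficient (the classic choose-max multiscale fusion rule)."""
    ll1, b1 = haar2(img1); ll2, b2 = haar2(img2)
    ll = (ll1 + ll2) / 2
    bands = tuple(np.where(np.abs(c1) >= np.abs(c2), c1, c2)
                  for c1, c2 in zip(b1, b2))
    return ihaar2(ll, bands)
```

Region-based variants replace the per-coefficient choose-max decision with a decision per detected region, which is the refinement the thesis pursues.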

  16. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.

  17. Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR

    NASA Astrophysics Data System (ADS)

    Sidorchuk, D.; Volkov, V.; Gladilin, S.

    2018-04-01

    This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a great deal of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified by satellite imagery.

  18. Adaptive polarization image fusion based on regional energy dynamic weighted average

    NASA Astrophysics Data System (ADS)

    Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai

    2005-11-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, clutter can be removed efficiently while detailed information is maintained by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine such images. In an experiment and simulations, most clutter was removed by this algorithm. The fusion method was applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results was analyzed.
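A regional-energy dynamic weighted average can be sketched compactly: each pixel's weight is the local energy of its source image, so whichever image carries more local structure dominates there. This is an assumed minimal reconstruction (window size and normalization are invented, not taken from the paper):

```python
import numpy as np

def regional_energy(img, k=3):
    """Local energy: sum of squared intensities over a k x k window."""
    p = np.pad(img.astype(float) ** 2, k // 2, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k))

def energy_weighted_fusion(a, b, k=3, eps=1e-12):
    """Dynamic weighted average: each pixel is weighted by the regional
    energy of its source image, so locally salient detail dominates."""
    ea, eb = regional_energy(a, k), regional_energy(b, k)
    wa = ea / (ea + eb + eps)
    return wa * a + (1.0 - wa) * b

detail = np.arange(36, dtype=float).reshape(6, 6)   # image with structure
fused = energy_weighted_fusion(np.zeros((6, 6)), detail)
```

Where one input is featureless its weight collapses to zero, which is how clutter-free regions of one polarization image can pass through unchanged.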

  19. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  20. A fast fusion scheme for infrared and visible light images in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhao, Chunhui; Guo, Yunting; Wang, Yulei

    2015-09-01

    Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial requirements for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. A fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantages of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm produces a more effective result with much less time consumed and performs well in both subjective evaluation and objective indicators.

  1. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To solve the problem that scene details and visual effects are difficult to optimize jointly in high dynamic range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space and applies weighted blending to the luminance and chrominance components respectively. The experimental results show that the method retains the color of the fused image while balancing details in the bright and dark regions of the high dynamic range image.
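The luminance-blending step can be sketched with a common well-exposedness weight; the abstract does not publish its exact weight function, so the Gaussian weight below (and the toy two-frame stack) is an assumption:

```python
import numpy as np

def well_exposedness(y, mu=0.5, sigma=0.2):
    """Gaussian weight favouring mid-range luminance (a common choice;
    the paper's actual weighting is not published in the abstract)."""
    return np.exp(-((y - mu) ** 2) / (2.0 * sigma ** 2))

def fuse_luminance(y_stack):
    """Per-pixel weighted blend of the Y channels of an exposure stack."""
    ws = np.stack([well_exposedness(y) for y in y_stack])
    ws = ws / (ws.sum(axis=0) + 1e-12)
    return (ws * np.stack(y_stack)).sum(axis=0)

under = np.full((4, 4), 0.1)    # under-exposed frame (Y in [0, 1])
normal = np.full((4, 4), 0.5)   # well-exposed frame
fused_y = fuse_luminance([under, normal])
```

The Cb/Cr channels would be blended with their own weights in the same per-pixel fashion, which is what lets the method keep color while balancing exposure.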

  2. [Research progress of multi-model medical image fusion and recognition].

    PubMed

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as lesion localization, cancer staging, and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Secondly, three fusion strategies are reviewed from an algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges, and possible future research directions are discussed.

  3. Autofocus and fusion using nonlinear correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué, E-mail: josue@cicese.mx; Coronel-Beltrán, Ángel

    2014-10-06

    In this work a new algorithm is proposed for autofocus and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements a spiral scan of each image f(x,y)_w in the stack to define the vector V_w. The spectrum FV_w of each vector is calculated by the fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, the vector of the reference image, with each of the other vectors FV_w in the stack. In addition, fusion is performed with a subset of selected images f(x,y)_SBF, the images with the best focus measure. Fusion creates a new improved image f(x,y)_F by selecting the pixels of higher intensity.
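The two stages above (rank the stack by a focus measure, then fuse the best images by keeping the higher-intensity pixel) can be sketched as follows. The gradient-energy focus measure is a simplified stand-in for the paper's spiral-scan nonlinear-correlation measure, and the test stack is synthetic:

```python
import numpy as np

def focus_measure(img):
    """Gradient-energy focus measure: sharper images have stronger local
    gradients. (A simplified stand-in for the paper's spiral-scan FFT
    correlation measure.)"""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def fuse_best_focus(stack):
    """Fuse well-focused images by keeping, per pixel, the higher-intensity
    value, as in the paper's final fusion step."""
    return np.maximum.reduce(stack)

rng = np.random.default_rng(0)
sharp = rng.random((16, 16))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0
```

Ranking by `focus_measure` selects the subset of best-focused frames; `fuse_best_focus` then produces the improved composite.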

  4. A new method of Quickbird own image fusion

    NASA Astrophysics Data System (ADS)

    Han, Ying; Jiang, Hong; Zhang, Xiuying

    2009-10-01

    With the rapid development of remote sensing technology, the means of accessing remote sensing data have become increasingly abundant, so the same area can be covered by a large number of multi-temporal images at different resolutions. At present the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, and the wavelet transform. The IHS transform suffers from serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high spatial resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition handles different sizes, directions, details, and edges very well, but different fusion rules and algorithms achieve different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on the wavelet transform with HVS against fusion based on the wavelet transform with IHS. The results show that the former is better. The correlation coefficient, the relative average spectral error index, and other usual indices are introduced to evaluate image quality.

  5. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
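The 2-of-3 decision rule described above is simple enough to sketch end to end. The greedy clustering strategy, the radius value, and the example detections below are assumptions for illustration, not details from the paper:

```python
import math

def fuse_2_of_3(contacts, radius):
    """contacts: list of (x, y, algo_id) classification decisions from the
    three CAD/CAC algorithms. Greedily cluster contacts by Euclidean
    distance, then declare a target for any cluster containing decisions
    from at least 2 of the 3 algorithms."""
    remaining = list(contacts)
    targets = []
    while remaining:
        x0, y0, _ = remaining[0]
        cluster = [c for c in remaining
                   if math.hypot(c[0] - x0, c[1] - y0) <= radius]
        remaining = [c for c in remaining if c not in cluster]
        if len({algo for _, _, algo in cluster}) >= 2:
            targets.append((sum(c[0] for c in cluster) / len(cluster),
                            sum(c[1] for c in cluster) / len(cluster)))
    return targets

# Two algorithms agree on a contact near (10, 10); a lone false alarm at (50, 50).
detections = [(10.0, 10.0, 1), (10.5, 9.8, 2), (50.0, 50.0, 3)]
targets = fuse_2_of_3(detections, radius=2.0)
```

Requiring agreement from 2 of 3 independent detectors is exactly what suppresses the single-algorithm false alarms reported in the abstract.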

  6. Multi-atlas label fusion using hybrid of discriminative and generative classifiers for segmentation of cardiac MR images.

    PubMed

    Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang

    2015-08-01

    Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to the label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.

  7. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.

  8. Research on fusion algorithm of polarization image in tetrolet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes, i.e., shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. Firstly, the magnitude-of-polarization image and angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. For the low-frequency coefficients, the averaging fusion method is used. According to the differences in edge distribution among the high-frequency sub-band images, the directional high-frequency coefficients are fused by selecting the better coefficients using a region spectrum entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and that the fused image has a better subjective visual effect.

  9. Implementing and validating of pan-sharpening algorithms in open-source software

    NASA Astrophysics Data System (ADS)

    Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco

    2017-10-01

    Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused enhanced images. The objective of this research is three-fold: to implement in R three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt); to apply these techniques to merging multispectral and panchromatic images from five different sensors with different spatial resolutions; and finally, to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards the qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion, whereas in the case of the Landsat-8 and Natmur-08 images the results were more even. Regarding the ERGAS spatial index, the ACP algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm. Only for the Landsat-8 image did the GS fusion present the best result. In the evaluation of the spectral components, HPF results tended to be better and ACP results worse; the opposite was the case for the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate, a priori, a given algorithm as the best, not only because of the different characteristics of the sensors, but also because of the different atmospheric conditions and peculiarities of the different study areas, among other reasons.
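
    For reference, one common formulation of the ERGAS index used in this kind of evaluation (lower is better, 0 for a perfect fusion) can be sketched as follows; the (bands, H, W) array layout is an assumption of this sketch:

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS (relative dimensionless global error in synthesis).
    `fused` and `reference` are (bands, H, W) arrays; `ratio` is the
    PAN-to-MS resolution ratio h/l (e.g. 0.25 for 4x pan-sharpening)."""
    terms = []
    for f, r in zip(fused, reference):
        rmse = np.sqrt(np.mean((f - r) ** 2))
        terms.append((rmse / r.mean()) ** 2)  # band RMSE relative to band mean
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))
```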

  10. A joint FED watermarking system using spatial fusion for verifying the security issues of teleradiology.

    PubMed

    Viswanathan, P; Krishna, P Venkata

    2014-05-01

    Teleradiology allows the transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED (fingerprint/encryption/dual) watermarking system is proposed to address these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to provide access to the outcomes of medical images with confidentiality, availability, integrity, and verification of origin. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage, so that the extraction, decryption, and verification can be applied independently. The dual watermarking system, which introduces two different embedding schemes, one for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the embedding region using a threshold derived from the image in which the encrypted patient data are embedded, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key to encrypt the patient data, together with a fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.
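
    The idea of embedding data only in a threshold-selected region can be illustrated with a toy LSB scheme; this is a stand-in for the paper's region-based substitution rule, not its actual algorithm, and the threshold and bit layout are assumptions:

```python
import numpy as np

def embed_bits(img, bits, threshold=128):
    """Embed watermark bits in the LSBs of pixels brighter than a threshold
    (a stand-in for a region-selection rule). Note: a pixel at threshold + 1
    could drop out of the region after an LSB flip; a real scheme must guard
    that boundary."""
    out = img.copy()
    flat = out.ravel()
    region = np.flatnonzero(img.ravel() > threshold)
    assert len(bits) <= len(region), "watermark larger than embedding region"
    for k, bit in enumerate(bits):
        flat[region[k]] = (flat[region[k]] & 0xFE) | bit
    return out

def extract_bits(img, n_bits, threshold=128):
    """Recover the first n_bits from the same thresholded region."""
    region = np.flatnonzero(img.ravel() > threshold)
    return [int(img.ravel()[i] & 1) for i in region[:n_bits]]
```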

  11. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, currently a research focus in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to introduce problems such as heavy data storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients need to be specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
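
    An adaptive regional-energy weighting of two low-frequency subbands can be sketched as below; the 3x3 window and the small epsilon are illustrative assumptions, and NumPy arrays stand in for NSCT low-frequency coefficients:

```python
import numpy as np

def regional_energy(c, win=3):
    """Local energy: sum of squared coefficients over a win x win window."""
    pad = win // 2
    cp = np.pad(c ** 2, pad, mode="edge")
    e = np.zeros_like(c, dtype=float)
    for di in range(win):
        for dj in range(win):
            e += cp[di:di + c.shape[0], dj:dj + c.shape[1]]
    return e

def fuse_low_adaptive(l_ir, l_vis, win=3):
    """Weight each pixel by the relative regional energy of the two bands."""
    e1, e2 = regional_energy(l_ir, win), regional_energy(l_vis, win)
    w1 = e1 / (e1 + e2 + 1e-12)
    return w1 * l_ir + (1.0 - w1) * l_vis
```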

  12. Retinal vessel enhancement based on the Gaussian function and image fusion

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian Dragoş

    2017-01-01

    The Gaussian function is essential in the construction of the Frangi and COSFIRE (combination of shifted filter responses) filters. The connection of broken vessels and an accurate extraction of the vascular structure are the main goals of this study. Thus, the outcomes of the Frangi and COSFIRE edge detection algorithms are fused using the Dempster-Shafer algorithm with the aim of improving detection and enhancing the retinal vascular structure. For objective results, the average diameters of the retinal vessels provided by the Frangi, COSFIRE and Dempster-Shafer fusion algorithms are measured. These experimental values are compared to the ground truth values provided by manually segmented retinal images. We prove the superiority of the fusion algorithm in terms of image quality by using the figure of merit, an objective metric that correlates the effects of all post-processing techniques.

  13. Research on multi-source image fusion technology in haze environment

    NASA Astrophysics Data System (ADS)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor can express the shape, color and texture details of the target very well, but because of the haze, its sharpness is low and parts of the target are lost. Because it captures thermal radiation and has strong penetration ability, the infrared image collected by a single sensor can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of visible targets and highlight occluded infrared targets for target recognition.
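
    The core of the Dark Channel Prior can be sketched as follows (the improved variant in the paper is not specified here); the 3x3 patch and the 0.1% brightest-pixel fraction are common choices, not the paper's parameters:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over RGB, then a minimum filter
    over a patch x patch neighbourhood. `img` is (H, W, 3) in [0, 1]."""
    mins = img.min(axis=2)
    pad = patch // 2
    mp = np.pad(mins, pad, mode="edge")
    dc = np.full_like(mins, np.inf)
    for di in range(patch):
        for dj in range(patch):
            dc = np.minimum(dc, mp[di:di + mins.shape[0], dj:dj + mins.shape[1]])
    return dc

def estimate_atmosphere(img, patch=3, top=0.001):
    """Atmospheric light: mean colour of the brightest dark-channel pixels."""
    dc = dark_channel(img, patch)
    n = max(1, int(top * dc.size))
    idx = np.argsort(dc.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)
```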

  14. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not give the most significant noise reduction performance. Conversely, an image fusion method applied to the SRAD (speckle-reducing anisotropic diffusion)-original condition preserved the key information in the original image while the speckle noise was removed. Based on these characteristics, the SRAD-original input condition gave the best denoising performance for the ultrasound images. From this study, the denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.

  15. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines a gray-level image fusion method with false color mapping. This algorithm integrates two gray-level images presenting different sensor modalities or different frequencies and produces a fused false-color image. The resulting image has higher information content than each of the original images, and the objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtaining the fused gray-level image of the two original images; second, computing the generalized high-boost filtering images between the fused gray-level image and the two source images, respectively; third, generating the fused false-color image. We use the hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two original images while reducing noise. However, the fused gray-level image cannot contain all the detail information of the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtering images, which are displayed through the red, green and blue channels, respectively, to produce the fused color image. This method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details, and the resolution of the final false-color image is the same as that of the input images.
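
    A minimal sketch of high-boost filtering and the RGB channel mapping follows. The paper derives its three channel images from the fused image and the two sources in a specific way it does not fully spell out here; in this sketch each input is simply high-boost filtered as an illustrative stand-in, and the boost factor A and box-blur kernel are assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def high_boost(img, A=1.5, k=3):
    """Generalized high-boost: A * image minus its low-pass version,
    i.e. a scaled copy of the image plus its high frequencies."""
    return A * img - box_blur(img, k)

def false_color_fuse(fused, src1, src2, A=1.5, k=3):
    """Display three high-boost images through the R, G and B channels."""
    return np.dstack([high_boost(src1, A, k),
                      high_boost(fused, A, k),
                      high_boost(src2, A, k)])
```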

  16. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
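
    MEMD itself is involved, but the pixel-level fusion of scale-aligned IMFs that it enables can be sketched generically; here NumPy arrays stand in for already-computed, aligned IMF stacks, and the max-local-energy selection rule is a common choice rather than necessarily the authors':

```python
import numpy as np

def local_energy(c, pad=1):
    """Sum of squared values over a (2*pad+1)^2 neighbourhood."""
    cp = np.pad(c ** 2, pad, mode="edge")
    w = 2 * pad + 1
    e = np.zeros_like(c, dtype=float)
    for di in range(w):
        for dj in range(w):
            e += cp[di:di + c.shape[0], dj:dj + c.shape[1]]
    return e

def fuse_imfs(imfs_a, imfs_b):
    """Fuse two stacks of scale-aligned IMFs (scales, H, W): at each scale,
    keep per-pixel the IMF sample with the larger local energy, then sum
    the fused scales into one image."""
    fused_scales = []
    for ia, ib in zip(imfs_a, imfs_b):
        fused_scales.append(np.where(local_energy(ia) >= local_energy(ib), ia, ib))
    return np.sum(fused_scales, axis=0)
```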

  17. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  18. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm for the fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and the synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments performed on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.

  19. Advanced image fusion algorithms for Gamma Knife treatment planning. Evaluation and proposal for clinical use.

    PubMed

    Apostolou, N; Papazoglou, Th; Koutsouris, D

    2006-01-01

    Image fusion is the process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms on the Matlab platform with head images. We implement nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian, filter-subtract-decimate (FSD), contrast, gradient and morphological pyramids, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the Root Mean Square Error (RMSE), Mutual Information (MI), Standard Deviation (STD), Entropy (H), Difference Entropy (DH) and Cross Entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally, we make clinically useful suggestions.
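
    Two of the listed quantitative criteria, RMSE and entropy, can be sketched directly; taking the difference entropy as the absolute difference of the two entropies is one common definition and an assumption of this sketch:

```python
import numpy as np

def rmse(fused, reference):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((fused.astype(float) - reference) ** 2)))

def entropy(img, levels=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def difference_entropy(fused, reference, levels=256):
    """One common definition: |H(fused) - H(reference)|."""
    return abs(entropy(fused, levels) - entropy(reference, levels))
```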

  20. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile under noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm-based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
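
    A minimal matrix pencil pole estimator can be sketched with NumPy; the pencil length L = N/2 and the magnitude-based selection of the M dominant eigenvalues are common choices for the standard method, not necessarily the paper's exact settings:

```python
import numpy as np

def matrix_pencil_poles(y, M):
    """Estimate M signal poles z_i from samples y[n] = sum_i a_i * z_i**n."""
    N = len(y)
    L = N // 2                       # pencil parameter, M <= L <= N - M
    # Hankel data matrix and its one-sample-shifted halves
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Signal poles are the dominant eigenvalues of pinv(Y1) @ Y2
    vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return sorted(vals, key=abs, reverse=True)[:M]
```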

  1. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile under noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm-based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR. PMID:26781194

  2. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission, which keeps the final classification accuracy low. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and then select the best fused image for the classification experiments. In the classification process, we choose four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for contrast experiments. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial for improving the accuracy and stability of remote sensing image classification.
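
    The two accuracy criteria used above, overall precision and Cohen's Kappa, follow directly from a confusion matrix and can be sketched as:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: true class, columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of correctly classified samples (trace over total)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (po - pe) / (1 - pe)
```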

  3. Data fusion of Landsat TM and IRS images in forest classification

    Treesearch

    Guangxing Wang; Markus Holopainen; Eero Lukkarinen

    2000-01-01

    Data fusion of Landsat TM images and Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of TM or IRS image only. The aim was to combine the high spatial resolution of IRS-1C PAN to the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...

  4. Image fusion algorithm based on energy of Laplacian and PCNN

    NASA Astrophysics Data System (ADS)

    Li, Meili; Wang, Hongmei; Li, Yanjun; Zhang, Ke

    2009-12-01

    Owing to the global coupling and pulse synchronization characteristics of pulse coupled neural networks (PCNN), they have been proved suitable for image processing and successfully employed in image fusion. However, in almost all the image processing literature on PCNN, the linking strength of each neuron is assigned the same value, chosen by experiment. This is not consistent with the human visual system, in which the responses to regions with notable features are stronger than those to regions without them. It is more reasonable that notable features, rather than a single shared value, determine the linking strength of each neuron. In this paper, the energy of Laplacian (EOL) is used as the notable feature to obtain the linking strength values in the PCNN. Experimental results demonstrate that the proposed algorithm outperforms Laplacian-based, wavelet-based and PCNN-based fusion algorithms.
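
    A per-pixel EOL feature map of the kind that could drive such a linking strength can be sketched as follows; the 4-neighbour discrete Laplacian and the 3x3 summation window are assumptions of this sketch:

```python
import numpy as np

def energy_of_laplacian(img, win=3):
    """EOL map: squared 4-neighbour Laplacian response summed over a
    win x win region around each pixel (a notable-feature measure)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    e = lap ** 2
    pad = win // 2
    ep = np.pad(e, pad, mode="edge")
    out = np.zeros_like(e)
    for di in range(win):
        for dj in range(win):
            out += ep[di:di + e.shape[0], dj:dj + e.shape[1]]
    return out
```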

  5. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. As a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for the fusion of directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.

  6. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from the background through different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method is mainly concerned with processing the high-frequency signal portion; for the low-frequency signal, the usual weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3x3 window is calculated, taking the regional signal intensity ratio of the source images as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion effect is closely related to the threshold set in this module. Instead of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as the initial interpolation nodes, and the minimum of the quadratic interpolation function is computed; the best threshold is obtained by comparing the minima of successive quadratic interpolation functions. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also for large numbers of images.
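
    One quadratic-interpolation step on the interval endpoints and midpoint, as described above, has a standard closed form for the parabola's vertex; the degenerate-parabola fallback to the midpoint is an assumption of this sketch:

```python
def quadratic_interp_min(f, a, b):
    """One quadratic-interpolation step: fit a parabola through the
    endpoints and midpoint of [a, b] and return the abscissa of its vertex."""
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    denom = fa - 2.0 * fm + fb
    if abs(denom) < 1e-15:          # flat parabola: fall back to the midpoint
        return m
    # Vertex of the parabola through (a, fa), (m, fm), (b, fb) on a uniform grid
    return m + 0.25 * (b - a) * (fa - fb) / denom
```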

  7. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
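
    The classical GIHS baseline that this method builds on injects the difference between the PAN image and the MS intensity into every band; the paper's contribution then modulates that injection with Gaussian-extracted detail to limit saturation change. A sketch of the plain GIHS baseline (not the proposed method):

```python
import numpy as np

def gihs_fuse(ms, pan):
    """Classical GIHS pan-sharpening: add (PAN - intensity) to each MS band,
    where intensity is the per-pixel mean of the MS bands.
    `ms` is (bands, H, W); `pan` is (H, W)."""
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)[None, :, :]
```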

  8. Results of the 2016 International Skin Imaging Collaboration International Symposium on Biomedical Imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images.

    PubMed

    Marchetti, Michael A; Codella, Noel C F; Dusza, Stephen W; Gutman, David A; Helba, Brian; Kalloo, Aadi; Mishra, Nabin; Carrera, Cristina; Celebi, M Emre; DeFazio, Jennifer L; Jaimes, Natalia; Marghoob, Ashfaq A; Quigley, Elizabeth; Scope, Alon; Yélamos, Oriol; Halpern, Allan C

    2018-02-01

    Computer vision may aid in melanoma detection. We sought to compare melanoma diagnostic accuracy of computer algorithms to dermatologists using dermoscopic images. We conducted a cross-sectional study using 100 randomly selected dermoscopic images (50 melanomas, 44 nevi, and 6 lentigines) from an international computer vision melanoma challenge dataset (n = 379), along with individual algorithm results from 25 teams. We used 5 methods (nonlearned and machine learning) to combine individual automated predictions into "fusion" algorithms. In a companion study, 8 dermatologists classified the lesions in the 100 images as either benign or malignant. The average sensitivity and specificity of dermatologists in classification was 82% and 59%. At 82% sensitivity, dermatologist specificity was similar to the top challenge algorithm (59% vs. 62%, P = .68) but lower than the best-performing fusion algorithm (59% vs. 76%, P = .02). Receiver operating characteristic area of the top fusion algorithm was greater than the mean receiver operating characteristic area of dermatologists (0.86 vs. 0.71, P = .001). The dataset lacked the full spectrum of skin lesions encountered in clinical practice, particularly banal lesions. Readers and algorithms were not provided clinical data (eg, age or lesion history/symptoms). Results obtained using our study design cannot be extrapolated to clinical practice. Deep learning computer vision systems classified melanoma dermoscopy images with accuracy that exceeded some but not all dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
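
    The simplest of the non-learned prediction-fusion strategies, averaging the per-lesion scores of several algorithms and reading off an operating point, can be sketched as below; the 0.5 threshold and score layout are assumptions, not the study's protocol:

```python
import numpy as np

def mean_fusion(scores):
    """Non-learned fusion: average the per-lesion malignancy scores of
    several algorithms (rows = algorithms, columns = lesions)."""
    return np.mean(scores, axis=0)

def sensitivity_specificity(fused, labels, threshold=0.5):
    """Operating point of fused scores against binary labels (1 = melanoma)."""
    pred = fused >= threshold
    labels = np.asarray(labels, dtype=bool)
    sens = np.mean(pred[labels]) if labels.any() else float("nan")
    spec = np.mean(~pred[~labels]) if (~labels).any() else float("nan")
    return float(sens), float(spec)
```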

  9. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…

  10. Data fusion strategies for hazard detection and safe site selection for planetary and small body landings

    NASA Astrophysics Data System (ADS)

    Câmara, F.; Oliveira, J.; Hormigo, T.; Araújo, J.; Ribeiro, R.; Falcão, A.; Gomes, M.; Dubois-Matra, O.; Vijendran, S.

    2015-06-01

    This paper discusses the design and evaluation of data fusion strategies to perform tiered fusion of several heterogeneous sensors and a priori data. The aim is to increase robustness and performance of hazard detection and avoidance systems, while enabling safe planetary and small body landings anytime, anywhere. The focus is on Mars and asteroid landing mission scenarios and three distinct data fusion algorithms are introduced and compared. The first algorithm consists of a hybrid camera-LIDAR hazard detection and avoidance system, the H2DAS, in which data fusion is performed at both sensor-level data (reconstruction of the point cloud obtained with a scanning LIDAR using the navigation motion states and correcting the image for motion compensation using IMU data), feature-level data (concatenation of multiple digital elevation maps, obtained from consecutive LIDAR images, to achieve higher accuracy and resolution maps while enabling relative positioning) as well as decision-level data (fusing hazard maps from multiple sensors onto a single image space, with a single grid orientation and spacing). The second method presented is a hybrid reasoning fusion, the HRF, in which innovative algorithms replace the decision-level functions of the previous method, by combining three different reasoning engines—a fuzzy reasoning engine, a probabilistic reasoning engine and an evidential reasoning engine—to produce safety maps. Finally, the third method presented is called Intelligent Planetary Site Selection, the IPSIS, an innovative multi-criteria, dynamic decision-level data fusion algorithm that takes into account historical information for the selection of landing sites and a piloting function with a non-exhaustive landing site search capability, i.e., capable of finding local optima by searching a reduced set of global maps. All the discussed data fusion strategies and algorithms have been integrated, verified and validated in a closed-loop simulation environment. 
Monte Carlo simulation campaigns were performed for algorithm performance assessment and benchmarking. The simulation results cover the landing phases of Mars and Phobos landing mission scenarios.
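
    The decision-level step of fusing co-registered per-sensor hazard maps into a single map and picking a site can be sketched generically; taking the per-cell maximum is one conservative (worst-case) rule and an assumption of this sketch, not the IPSIS criterion:

```python
import numpy as np

def fuse_hazard_maps(maps):
    """Decision-level fusion of co-registered hazard maps (values in [0, 1],
    higher = more hazardous): keep the worst-case hazard at each cell."""
    return np.max(np.stack(maps), axis=0)

def safest_site(hazard):
    """Grid cell with the lowest fused hazard."""
    return np.unravel_index(np.argmin(hazard), hazard.shape)
```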

  11. Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.

    PubMed

    Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J

    2018-05-21

    We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  12. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
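
    The wavelet preprocessing step above (splitting an input image into one low-frequency and three high-frequency subbands before feature training) can be sketched with a single-level Haar transform. This numpy-only version is an illustrative stand-in, not the authors' implementation; the four returned arrays correspond to the low-frequency, horizontal, vertical, and diagonal subbands.

```python
import numpy as np

def haar_subbands(img):
    """Single-level 2-D Haar decomposition into four subbands.

    A minimal stand-in for the wavelet preprocessing: SR dictionaries
    would be trained on features from these subbands rather than on
    raw image patches.  Assumes even image dimensions.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    h  = (a - b + c - d) / 4.0   # horizontal-difference detail
    v  = (a + b - c - d) / 4.0   # vertical-difference detail
    dd = (a - b - c + d) / 4.0   # diagonal detail
    return ll, h, v, dd
```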

  13. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained by fusing infrared and low-light-level images, combining the information of both sources and helping observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, while blindly applying target extraction seriously degrades the perception of scene information. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods, and infrared and low-light-level color fusion images are produced through efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the target detection rate but also retain rich natural information of the scenes.

  14. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    NASA Astrophysics Data System (ADS)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four and eight angle acquisitions.
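
    The Gaussian-filter-based content fusion idea (weighting each view by smoothed local contrast so that blurred regions contribute little) can be sketched as below. The blur width, the local-variance content measure, and the small epsilon are illustrative assumptions rather than the paper's exact operators.

```python
import numpy as np

def _gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def _blur(img, sigma=2.0):
    # Separable Gaussian blur with edge padding (numpy only).
    r = int(3 * sigma)
    k = _gaussian_kernel(sigma, r)
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, tmp)

def content_weighted_fusion(views):
    """Fuse multi-view images by Gaussian-smoothed local contrast.

    Sharper (in-focus) regions get higher weight; blurred regions are
    suppressed -- a fast proxy for entropy-based content weighting.
    """
    views = [np.asarray(v, dtype=float) for v in views]
    weights = []
    for v in views:
        local_mean = _blur(v)
        contrast = _blur((v - local_mean) ** 2) + 1e-12  # local variance
        weights.append(contrast)
    wsum = np.sum(weights, axis=0)
    return np.sum([v * w / wsum for v, w in zip(views, weights)], axis=0)
```

With two featureless views the weights tie and the fusion degenerates to a plain average, which is the expected fallback behavior.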

  15. BP fusion model for the detection of oil spills on the sea by remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin

    2003-06-01

    Oil spills are a very serious form of marine pollution in many countries. To detect and identify oil spills on the sea with remote sensors, scientists must study remote sensing imagery. For oil spill detection, edge detection is an important technique in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements of edge detection in oil spill images, namely computation time and detection accuracy, we developed a fusion model that employs a BP neural network to fuse the detection results of simple operators. We selected a BP neural network as the fusion technology because the relation between the edge gray levels produced by simple operators and the image's true edge gray levels is nonlinear, and BP neural networks are good at solving nonlinear identification problems. We therefore trained a BP neural network on several oil spill images, applied the BP fusion model to edge detection of other oil spill images, and obtained good results. The detection results of several gradient operators and the Laplacian operator are also compared with those of the BP fusion model to analyze the fusion effect. Finally, the paper points out that the fusion model achieves higher accuracy and higher speed in edge detection of oil spill images.

  16. Information fusion for diabetic retinopathy CAD in digital color fundus photographs.

    PubMed

    Niemeijer, Meindert; Abramoff, Michael D; van Ginneken, Bram

    2009-05-01

    The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on the multiple images that comprise an exam, to determine a likelihood that the exam is normal and does not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best performing fusion method obtained an area under the receiver operating characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs.
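
    One simple way to fuse per-image CAD outputs into an exam-level decision, in the spirit of the fusion methods compared above, is a max-per-image / noisy-OR combination. This is a hypothetical illustrative rule, not necessarily the best-performing method from the paper.

```python
def exam_abnormality_score(detections):
    """Fuse multiple per-image CAD outputs into one exam-level score.

    `detections` maps image id -> list of lesion probabilities from one
    or more detectors.  Per image we keep the strongest finding; across
    the exam the per-image scores are combined with a noisy-OR rule
    (an assumed choice for this sketch).
    """
    per_image = [max(probs) if probs else 0.0 for probs in detections.values()]
    normal_prob = 1.0
    for p in per_image:
        normal_prob *= (1.0 - p)   # exam normal only if every image is
    return 1.0 - normal_prob
```

An exam with no findings scores 0.0; any single high-probability lesion drives the exam score toward 1.0.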

  17. Entropy-functional-based online adaptive decision fusion framework with application to wildfire detection in video.

    PubMed

    Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis

    2012-05-01

    In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
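
    The online weighted fusion loop can be sketched as follows. The multiplicative exponentiated-gradient update below is a simplified stand-in for the entropic projections onto convex sets used in EADF, and the learning rate `eta` is an assumed parameter.

```python
import math

def fusion_step(weights, decisions, oracle, eta=0.5):
    """One online step of weighted decision fusion.

    Sub-algorithm decisions are real numbers around zero; the fused
    decision is their weighted sum.  Weights are updated with a
    multiplicative (entropy-regularized) rule driven by the oracle's
    feedback -- a simplified sketch, not the exact EADF projection.
    """
    fused = sum(w * d for w, d in zip(weights, decisions))
    err = oracle - fused
    new_w = [w * math.exp(eta * err * d) for w, d in zip(weights, decisions)]
    z = sum(new_w)                     # renormalize onto the simplex
    return fused, [w / z for w in new_w]
```

Sub-algorithms that agree with the oracle gain weight; disagreeing ones lose it, mirroring the adaptive behavior described above.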

  18. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and infrared and visible image fusion results obtained under different algorithms and environments are evaluated experimentally on the basis of this index. The experimental results show that the objective evaluation index is consistent with subjective evaluation results, demonstrating that the method is a practical and effective means of evaluating fused image quality.
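
    A global structural similarity (SSIM) computation, the building block behind an energy-weighted average structure similarity index, might look like the minimal sketch below. The paper's index additionally weights local SSIM by regional energy, which is omitted here; `c1` and `c2` are the conventional stabilizing constants for 8-bit data.

```python
import numpy as np

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM between two gray images (no local windowing).

    A simplified sketch: real SSIM indexes are usually computed over
    sliding windows and then averaged or, as in the paper, weighted by
    regional energy.
    """
    x = np.asarray(x, float); y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```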

  19. Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection

    PubMed Central

    Wei, Pan; Anderson, Derek T.

    2018-01-01

    A significant challenge in object detection is accurate identification of an object's position in image space; one algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameter sets can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online (versus training-time-only) image augmentation strategy. Experiments comparing the results with and without fusion are presented. We demonstrate that the combination of augmentation and fusion performs best, with respect to higher accuracy rates and reduced outlier influence. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications. PMID:29562609

  20. Multi-focus image fusion algorithm using NSCT and MPCNN

    NASA Astrophysics Data System (ADS)

    Liu, Kang; Wang, Lianli

    2018-04-01

    Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse coupled neural network (MPCNN), this paper proposes an effective image fusion method. First, the source images are decomposed into low-frequency and high-frequency components using NSCT, and the low-frequency components are processed with regional statistical fusion rules. For the high-frequency components, the spatial frequency (SF) is calculated and input into the MPCNN model to obtain the relevant coefficients according to the fire-mapping image of the MPCNN. Finally, the fused image is reconstructed by inverse transformation of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the proposed method achieves an improvement in both human visual perception and objective evaluation, showing that it is effective, practical, and performs well.
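
    The spatial frequency (SF) measure fed to the MPCNN can be computed directly. This sketch uses the common definition SF = sqrt(RF^2 + CF^2) with RMS row and column gray-level differences, which is assumed to match the paper's usage.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of an image block: sqrt(RF^2 + CF^2),
    where RF and CF are the RMS horizontal and vertical gray-level
    differences.  Higher SF indicates a busier, better-focused block."""
    b = np.asarray(block, dtype=float)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))
```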

  1. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the detection advantages of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient histogram are calculated for each object. The Earth Mover's Distance (EMD) statistical operator is used to measure the color distance and the edge line-feature distance between corresponding objects from different periods, and an adaptive weighting method combines the color feature distance and the edge line distance to construct the object heterogeneity. Finally, curvature histogram analysis of the image patches yields the change detection results. The experimental results show that the method can fully fuse color and edge line features, thus improving the accuracy of change detection.
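
    For 1-D histograms such as the color and gradient histograms above, the Earth Mover's Distance reduces to the L1 distance between cumulative sums. A minimal sketch, assuming equal-mass histograms over aligned bins:

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two 1-D histograms of equal mass.

    For 1-D distributions with unit ground distance between adjacent
    bins, EMD equals the L1 distance between the cumulative sums --
    the statistic used to compare object histograms across periods.
    """
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    return float(np.abs(np.cumsum(h1 - h2)).sum())
```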

  2. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, it must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade it to obtain a high-resolution multispectral image. These two degraded images are then sent to the ground, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
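
    The two on-board degradation steps can be sketched with block averaging for the spatial step and a band-averaging spectral response matrix for the spectral step. The flat, non-overlapping SRF below is a hypothetical placeholder for a real sensor response, and the factor-of-4 default is an assumption.

```python
import numpy as np

def degrade_for_downlink(cube, spatial_factor=4, spectral_srf=None):
    """On-board 'compression' by degradation, as in the proposed scheme.

    cube: (rows, cols, bands) hyperspectral image.  Returns a spatially
    degraded hyperspectral image (block averaging) and a spectrally
    degraded multispectral image (band averaging via a simple spectral
    response matrix).  The ground segment would fuse the two to
    reconstruct the original; that fusion step is the complex one.
    """
    r, c, b = cube.shape
    f = spatial_factor
    lr_hsi = cube[:r - r % f, :c - c % f, :].reshape(
        r // f, f, c // f, f, b).mean(axis=(1, 3))
    if spectral_srf is None:               # hypothetical flat, disjoint SRF
        nbands_ms = 4
        spectral_srf = np.zeros((nbands_ms, b))
        for i in range(nbands_ms):
            spectral_srf[i, i * b // nbands_ms:(i + 1) * b // nbands_ms] = 1
        spectral_srf /= spectral_srf.sum(axis=1, keepdims=True)
    hr_msi = cube @ spectral_srf.T         # (rows, cols, nbands_ms)
    return lr_hsi, hr_msi
```

Both outputs are much smaller than the input cube, which is what makes the on-board side simple.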

  3. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements of higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label so that the background and the regions of interest are handled on the same footing. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  4. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. First, under the framework of the joint sparse representation (JSR) model, global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
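
    The final weighted fusion step, given an integrated saliency map, is a per-pixel convex combination. In this sketch the saliency map is supplied directly rather than derived from JSR sparse coefficients as in the paper.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, saliency):
    """Weighted fusion driven by an integrated saliency map.

    `saliency` in [0, 1] marks where the infrared image is salient; the
    fused image takes IR content there and visible content elsewhere.
    """
    ir, vis, s = (np.asarray(a, float) for a in (ir, vis, saliency))
    return s * ir + (1.0 - s) * vis
```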

  5. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric mean of the gray levels of the input visual and infrared images together with a weighted-average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image, and that it enhances color contrast and highlights bright infrared objects compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to implement in real time, so it is quite suitable for nighttime imaging apparatus.
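
    A toy version of the mapping idea: anchor luminance on the geometric mean of the two inputs and steer color by their signed difference so that hot objects stand out. The channel assignments and clipping below are illustrative assumptions, not the paper's calibrated model with its boundary-condition-derived parameters.

```python
import numpy as np

def false_color_fusion(vis, ir):
    """Map panchromatic visual + thermal IR gray images to RGB.

    A hedged sketch of the mapping idea only: the geometric mean of the
    inputs anchors overall luminance, and the signed IR-visual
    difference steers hue (warm for IR-bright, cool for visual-bright).
    """
    vis = np.asarray(vis, float); ir = np.asarray(ir, float)
    g = np.sqrt(vis * ir)                      # geometric-mean luminance
    diff = (ir - vis) / 2.0
    return np.stack([np.clip(g + diff, 0, 255),   # R boosted by hot IR
                     np.clip(g, 0, 255),          # G carries luminance
                     np.clip(g - diff, 0, 255)],  # B boosted by visual
                    axis=-1)
```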

  6. Multispectral image fusion based on fractal features

    NASA Astrophysics Data System (ADS)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems and are widely used in surveillance, navigation, control and guidance. However, different imagery sensors rely on diverse imaging mechanisms, operate in diverse spectral ranges, perform diverse functions, and have diverse operating requirements, so it is impractical to accomplish detection or recognition with a single imagery sensor across different circumstances, backgrounds and targets. The multi-sensor image fusion technique has emerged as an important route to solving this problem, and image fusion has become one of the main technical routes for detecting and recognizing objects in images. However, loss of information is unavoidable during the fusion process, so how to preserve useful information to the utmost is always a central issue in image fusion: before designing fusion schemes, one must consider how to avoid losing useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at recognizing battlefield targets against complicated backgrounds. In this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished using fractal models, which imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and target features are emphasized.
The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis; the wavelet decomposition of an image can be regarded as a special pyramid decomposition. According to wavelet decomposition theory, the approximation of an image (formula available in paper) at resolution 2^(j+1) equals its orthogonal projection onto the corresponding subspace, where Ajf is the low-frequency approximation of the image f(x, y) at resolution 2^j and D1jf, D2jf and D3jf represent the vertical, horizontal and diagonal wavelet coefficients, respectively, at resolution 2^j. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions. Ajf, D1jf, D2jf and D3jf are independent and can themselves be treated as images. In this paper J is set to 1, so the source image is decomposed into the sub-images Af, D1f, D2f and D3f. To solve the problem of detecting artifacts, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2 and diagonal fractal dimension FD3 are proposed: FD1 corresponds to the vertical wavelet coefficient image obtained from the wavelet decomposition of the source image, FD2 to the horizontal coefficients, and FD3 to the diagonal ones. These definitions enrich the description of the source images and are therefore helpful for classifying targets. Detection of artifacts in the decomposed images then becomes a pattern recognition problem in 4-D space: the combination (FD0, FD1, FD2, FD3) forms a united feature vector of the studied image, and all parts of the images are classified in the 4-D pattern space created by this vector so that areas containing man-made objects can be detected.
This detection can be considered a coarse recognition; the significant areas in each sub-image are then marked so that they can be handled by special rules. Various fusion rules have been developed, each aimed at a particular problem, and these rules perform differently, so selecting an appropriate rule is very important when designing an image fusion system. Recent research indicates that the rule should be adjustable so that it remains suitable for emphasizing target features and preserving pixels carrying useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region contains a man-made target, the pixels of the source image with the minimal fractal dimension are saved as the pixels of the fused image; otherwise, a weighted average operator is adopted to avoid loss of information. Because the main idea of this rule is to keep the pixels with low fractal dimensions, it is named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment is applied to the two fusion results using the criteria of entropy, cross-entropy, peak signal-to-noise ratio (PSNR) and standard gray scale difference, as defined in this paper. Instead of constructing an ideal image as the assessment reference, the source images are used as the reference, so the assessment measures how much the image quality has been enhanced and how much information has been gained in the fused image relative to the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast.
The algorithm is shown to preserve the features of military targets well, because battlefield targets are mostly man-made objects whose images typically deviate markedly from fractal models. Furthermore, the fractal features are not sensitive to imaging conditions or target movement, so this fractal-based algorithm may be very practical.

  7. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We use WEMD for multi-focus image fusion and formulate different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the Sum-modified-Laplacian is used and a scheme based on visual feature contrast is adopted; when choosing the residue coefficients, the pixel value with the higher local visibility is selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with those of three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
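
    The Sum-modified-Laplacian used in the BIMF fusion rule is a standard focus measure. A direct numpy sketch, with step size 1 and edge padding as assumptions:

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Sum-modified-Laplacian (SML) focus measure: the modified
    Laplacian |2I - I_left - I_right| + |2I - I_up - I_down| summed
    over the block.  Better-focused blocks score higher, so the
    fusion rule can pick coefficients from the sharper source."""
    i = np.asarray(img, float)
    p = np.pad(i, step, mode='edge')
    ml = (np.abs(2 * i - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]) +
          np.abs(2 * i - p[:-2 * step, step:-step] - p[2 * step:, step:-step]))
    return float(ml.sum())
```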

  8. A detail-preserved and luminance-consistent multi-exposure image fusion algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Guanquan; Zhou, Yue

    2018-04-01

    When irradiance across a scene varies greatly, we can hardly obtain an image of the scene without over- or underexposed areas because of camera constraints. Multi-exposure image fusion (MEF) is an effective way to deal with this problem by fusing multiple exposures of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by contribution adjustment using the luminance information between blocks, while a detail-preserving smoothing filter stitches blocks smoothly without losing details. Experimental results show that the proposed method performs well in preserving both luminance consistency and detail.

  9. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    Polarization imaging provides multi-dimensional polarization information in addition to traditional imaging information, improving the probability of target detection and recognition. Research on the fusion of polarization images of targets in turbid media helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion techniques were then introduced: the acquired polarization images were processed with different polarization image fusion methods, several fusion methods with superior performance in turbid media are discussed, and the processing effects and data tables are given. Pixel-level, feature-level and decision-level fusion algorithms were applied to fuse information at these three levels, including fusion of DOLP (degree of linear polarization) images. The results show that, as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused images is clearly improved over that of a single image; the reasons for the improvement in image contrast are analyzed in relation to the polarized light.
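
    The DOLP images being fused can be derived from four polarizer orientations via the Stokes parameters. A minimal sketch assuming ideal polarizers at 0, 45, 90 and 135 degrees:

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer angles.

    Stokes parameters from intensity images taken with the polarizer at
    0, 45, 90 and 135 degrees; DOLP = sqrt(Q^2 + U^2) / I.  The DOLP
    images are what pixel/feature/decision-level fusion operates on.
    """
    i0, i45, i90, i135 = (np.asarray(a, float) for a in (i0, i45, i90, i135))
    s0 = i0 + i90             # total intensity I
    s1 = i0 - i90             # Q
    s2 = i45 - i135           # U
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```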

  10. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    NASA Astrophysics Data System (ADS)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction: they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the fused measurements using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Finally, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. Comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information of the visible images, and it exhibits state-of-the-art performance in terms of both fusion quality and speed.
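
    The standard-deviation (SD) based rule for fusing the sparse-component measurements can be sketched per measurement block: keep the source whose block has the larger SD, i.e. the more salient content. The block-per-column layout is an assumption of this sketch.

```python
import numpy as np

def sd_fusion(coeffs_a, coeffs_b):
    """SD-based fusion rule for measurement blocks.

    Each column holds one block's measurements; for each column the
    source with the larger standard deviation is kept, with ties going
    to the first source.
    """
    a = np.asarray(coeffs_a, float)
    b = np.asarray(coeffs_b, float)
    pick_a = a.std(axis=0) >= b.std(axis=0)     # per-column decision
    return np.where(pick_a[None, :], a, b)
```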

  11. A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data

    NASA Astrophysics Data System (ADS)

    Hazaymeh, K.; Almagbile, A.

    2018-04-01

    In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair from time-series datasets covering the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. STI-FM produced the more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images with the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
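
    The quantitative accuracy evaluation can be illustrated with two of the usual statistical measurements, RMSE and Pearson correlation, applied to a predicted versus an actual image (the sample data below are invented):

```python
import numpy as np

def rmse(pred, actual):
    """Root-mean-square error between predicted and actual reflectance."""
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

def pearson_r(pred, actual):
    """Pearson correlation coefficient between the two images."""
    return float(np.corrcoef(pred.ravel(), actual.ravel())[0, 1])

actual = np.arange(16, dtype=float).reshape(4, 4)
pred = actual + 0.5   # constant bias: error 0.5 but perfect correlation
```

    A constant bias yields a nonzero RMSE while the correlation stays at 1, which is why both measures are usually reported together.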

  12. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.

  13. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that yields a fused image with better contrast and more detail, suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
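
    A heavily simplified PCNN sketch may clarify the mechanism being motivated here: each neuron's feeding input is the stimulus (e.g. a sub-band coefficient), the linking input comes from 8-neighbour firing, and a decaying dynamic threshold controls pulses. Firing counts then serve as the activity measure for fusion. All parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def pcnn_fire_counts(stim, beta=0.2, v_theta=20.0, a_theta=0.2, iters=30):
    """Simplified pulse-coupled neural network. Returns how often each
    neuron fired over `iters` iterations (a common fusion activity map)."""
    s = stim / (stim.max() + 1e-12)      # normalized feeding input
    y = np.zeros_like(s)                 # pulse output
    theta = np.ones_like(s)              # dynamic threshold
    fire = np.zeros_like(s)
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0.0                   # 8-neighbour linking weights
    for _ in range(iters):
        p = np.pad(y, 1)                 # zero-padded previous pulses
        link = sum(p[i:i + s.shape[0], j:j + s.shape[1]] * kernel[i, j]
                   for i in range(3) for j in range(3))
        u = s * (1 + beta * link)        # modulated internal activity
        y = (u > theta).astype(float)    # pulse where activity beats threshold
        theta = np.exp(-a_theta) * theta + v_theta * y
        fire += y
    return fire

stim = np.array([[0.2, 1.0],
                 [0.5, 0.1]])
fire = pcnn_fire_counts(stim)
```

    Brighter stimuli fire earlier and more often, which is what makes firing counts usable as a coefficient-selection criterion.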

  14. 3D reconstruction from multi-view VHR-satellite images in MicMac

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur

    2018-05-01

    This work addresses the generation of high-quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.

  15. Research on polarization imaging information parsing method

    NASA Astrophysics Data System (ADS)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

    Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. Then the research achievements for each stage are presented. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. For the calculation of multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with clearly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters is given using fuzzy integrals and sparse representation, and target detection in complex scenes is completed using a clustering-based image segmentation algorithm built on fractal characteristics. For polarization image tracking, a fusion tracking algorithm combining mean shift on polarization image characteristics with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflage targets, fog, and latent fingerprints.

  16. A survey of infrared and visual image fusion methods

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian

    2017-09-01

    Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable, and complementary descriptions of the scene in fused images make these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and the progress of image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we present a survey of the algorithmic developments in IR and VI image fusion. In this paper, we first characterize the applications of IR and VI image fusion to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently-used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and analyze the results. Finally, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there still exist further improvements and potential research directions in the different applications of IR and VI image fusion.
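
    Among the frequently-used fusion quality measures such surveys cover, image entropy is one of the simplest no-reference measures of information content. A minimal sketch (the 256-bin, 8-bit-range histogram is an assumption):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of the grey-level histogram — higher
    values suggest the fused image carries more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())

uniform = np.arange(256).astype(float)   # every grey level appears once
constant = np.zeros(256)                 # a single grey level
```

    A uniform 8-bit histogram gives the maximum 8 bits; a constant image gives 0.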

  17. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particles' states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements by the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
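
    A toy sketch of the measurement-level fusion idea: 3D particles are projected into each sensor's image plane, a per-sensor Gaussian likelihood is evaluated, and the joint likelihood is the product across sensors. The two orthographic projection matrices, the noise model, and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_weights(particles, projections, measurements, sigma=1.0):
    """Project each 3D particle into every sensor's 2D image plane,
    evaluate a Gaussian likelihood against that sensor's measurement,
    and multiply the per-sensor likelihoods (measurement-level fusion)."""
    w = np.ones(len(particles))
    for P, z in zip(projections, measurements):
        uv = particles @ P.T                  # (N, 2) image coordinates
        d2 = ((uv - z) ** 2).sum(axis=1)      # squared pixel error
        w *= np.exp(-0.5 * d2 / sigma**2)     # marginal sensor likelihood
    return w / w.sum()                        # normalized joint weights

# two hypothetical orthographic sensors viewing the xy and xz planes
P_xy = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
P_xz = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
truth = np.array([2.0, -1.0, 3.0])
particles = truth + rng.normal(scale=0.5, size=(500, 3))
w = joint_weights(particles, [P_xy, P_xz], [truth[:2], truth[[0, 2]]])
estimate = w @ particles                      # weighted-mean 3D state
```

    Because each 2D view constrains different state components, the fused weighted mean recovers the full 3D position.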

  18. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    NASA Astrophysics Data System (ADS)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, for example in content-based image retrieval (CBIR). Imagery data are segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
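
    The Mutual Information (MI) metric mentioned above can be computed from a joint grey-level histogram. A minimal sketch (the 32-bin choice and test data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(a, b, bins=32):
    """Mutual information (bits) between two images, estimated from
    their joint histogram: MI = sum p(x,y) log2( p(x,y) / (p(x)p(y)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

a = rng.integers(0, 256, size=4096)           # synthetic grey levels
b = rng.permutation(a)                        # same histogram, no alignment
```

    An image shares maximal MI with itself (its own entropy) and far less with a shuffled copy, which is what makes MI usable as a fusion-quality measure.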

  19. Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Xia-zhu; Xu, Ya-wei

    2017-11-01

    On the basis of DT-CWT (Dual-Tree Complex Wavelet Transform) theory, an approach based on MOPSO (Multi-objective Particle Swarm Optimization) was proposed to objectively choose the fusion weights of the low-frequency sub-bands. High- and low-frequency sub-bands were produced by the DT-CWT. The absolute value of the coefficients was adopted as the fusion rule for the high-frequency sub-bands. The fusion weights of the low-frequency sub-bands were used as particles in MOPSO, with Spatial Frequency and Average Gradient adopted as the two fitness functions. The experimental results show that the proposed approach performs better than Average Fusion and fusion methods based on local variance and local energy in brightness, clarity, and quantitative evaluation, which includes Entropy, Spatial Frequency, Average Gradient, and QAB/F.
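
    The two fitness functions can be sketched as follows; these follow common formulations of Spatial Frequency and Average Gradient and may differ in detail from the paper's definitions:

```python
import numpy as np

def spatial_frequency(img):
    """Row/column first-difference energy — higher means more detail."""
    rf = np.diff(img, axis=1) ** 2        # row-direction differences
    cf = np.diff(img, axis=0) ** 2        # column-direction differences
    return float(np.sqrt(rf.mean() + cf.mean()))

def average_gradient(img):
    """Mean local gradient magnitude over the overlapping interior."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.sqrt((gx**2 + gy**2) / 2).mean())

flat = np.ones((8, 8))                    # no detail
edges = np.tile([0.0, 1.0], (8, 4))       # strong vertical edges
```

    A flat image scores zero on both measures, while an edge-rich image scores higher, so maximizing them pushes the fused low-frequency weights toward sharper results.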

  20. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But owing to radiated-power limitations, there will always be some trade-off between spatial and spectral resolution in the images captured by specific sensors. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.

  2. Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Pang, L.; Liu, H.; Xu, X.

    2018-04-01

    A PMMW imaging system can create interpretable imagery of objects concealed under clothing, which gives it a great advantage in security check systems. This paper addresses wavelet fusion to detect concealed objects using passive millimeter wave (PMMW) sequence images. First, according to the image characteristics and storage methods of the PMMW real-time imager, the sum of squared differences (SSD) is used as an image-correlation parameter to screen the sequence images. Secondly, the selected images are optimized using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation, and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most relevant images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.
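
    The SSD-based screening step might look like the following sketch; the selection criterion (keep the k frames closest to a reference frame) and all data are assumptions for illustration:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two frames (lower = more similar)."""
    return float(((a - b) ** 2).sum())

def select_frames(frames, ref, k=2):
    """Keep the k frames most similar (lowest SSD) to the reference frame."""
    order = np.argsort([ssd(f, ref) for f in frames])
    return [frames[i] for i in order[:k]]

ref = np.zeros((4, 4))
frames = [ref + d for d in (3.0, 0.5, 1.0)]   # offsets stand in for frame content
best = select_frames(frames, ref, k=2)        # the two closest frames survive
```
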

  3. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with 4 existing measures, has been presented, which helps to compare the performance of the 2 pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality, and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
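
    A minimal sketch of the translation-invariant à-trous decomposition with the common B3-spline kernel; the circular-shift boundary handling is an assumption made for brevity, not the paper's choice:

```python
import numpy as np

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline filter taps

def atrous_smooth(img, level):
    """Separable B3-spline smoothing with 2**level holes between taps."""
    step = 2 ** level
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in zip(range(-2, 3), B3):
            acc += w * np.roll(out, k * step, axis=axis)
        out = acc
    return out

def atrous_decompose(img, levels=3):
    """À-trous (stationary) wavelet transform: returns the detail planes
    plus the final smooth residual; their sum reconstructs the image."""
    planes, current = [], img.astype(float)
    for j in range(levels):
        smooth = atrous_smooth(current, j)
        planes.append(current - smooth)   # detail plane at scale j
        current = smooth
    planes.append(current)                # coarsest approximation
    return planes

rng = np.random.default_rng(2)
img = rng.normal(size=(16, 16))
planes = atrous_decompose(img, levels=3)
```

    Because the details are defined as differences of successive smoothings, simply summing all planes reconstructs the original image exactly, which is what makes the inverse AWT in the record trivial.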

  4. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach, with the dynamic group sparsity recovery (DGSR) algorithm proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. PMID:29250134

  5. Image Fusion of CT and MR with Sparse Representation in NSST Domain.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach, with the dynamic group sparsity recovery (DGSR) algorithm proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation.
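
    The absolute-maximum rule used for the high-frequency components is straightforward: keep, coefficient by coefficient, whichever source has the larger magnitude. A minimal sketch with invented coefficient arrays:

```python
import numpy as np

def abs_max_fuse(c1, c2):
    """Absolute-maximum fusion rule: per coefficient, keep the source
    coefficient with the larger magnitude (ties go to the first source)."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

a = np.array([[3.0, -1.0], [0.5, -4.0]])
b = np.array([[-2.0, 2.0], [1.0, 3.0]])
fused = abs_max_fuse(a, b)   # picks 3, 2, 1, -4
```
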

  6. Sensor fusion II: Human and machine strategies; Proceedings of the Meeting, Philadelphia, PA, Nov. 6-9, 1989

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1990-01-01

    Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.

  7. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer's disease.

    PubMed

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-07-01

    Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  8. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  9. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. The computational PDI technique, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ǁ and ⊥ axes, is an improvement on conventional PDI. Originally, however, it obtains the output image by manually setting the weight coefficient to an identical constant for all pixels. In this paper, an algorithm is proposed that combines the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, built from a green LED array with a polarizer illuminating a flat target immersed in water and a CCD with a polarization analyzer capturing target images at different angles, is used to verify the effectiveness of the proposed algorithm. The results show that the output of our algorithm reveals more details of the flat target and has higher contrast than the original computational polarization difference imaging.

  10. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
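
    The weighted-sum fusion stage can be sketched as follows; the weight values and images are illustrative, and the normalization step is an assumption to keep the output in the input intensity range:

```python
import numpy as np

def weighted_fusion(images, weights):
    """Pixel-wise weighted-sum fusion of co-registered sensor images."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so output stays in range
    return np.tensordot(w, np.stack(images), axes=1)

ir = np.full((2, 2), 100.0)               # hypothetical infrared frame
vis = np.full((2, 2), 50.0)               # hypothetical second-camera frame
fused = weighted_fusion([ir, vis], [0.6, 0.4])   # 0.6*100 + 0.4*50 = 80
```
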

  11. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems.

    PubMed

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-25

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems, including shrublands, coastal areas with dune systems, and areas of shallow water, are studied. As far as this reduction of resources is concerned, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, a total of four pansharpening algorithms was analyzed using six quality indices. The quality assessment was implemented not only for the whole set of multispectral bands, but also for the subsets of spectral bands covered by the wavelength range of the panchromatic image and outside of it. A better quality result is observed in the fused image using only the bands covered by the panchromatic band range. It is important to highlight the use of these techniques not only in land and urban areas, but also in a novel analysis of shallow water ecosystems. Although the algorithms do not show large differences in land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet 'à trous' through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level. Finally, to demonstrate the performance of these pansharpening techniques, an advanced Object-Based Image Analysis (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet 'à trous' through fractal dimension maps as the best fusion algorithm for this ecosystem.

  12. Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems

    PubMed Central

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio

    2017-01-01

    Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied: shrublands, coastal areas with dune systems, and areas of shallow water. Given this decline in resources, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for each type of ecosystem. After a preliminary evaluation of twelve classic and novel fusion algorithms, four pansharpening algorithms were analyzed using six quality indices. The quality assessment was carried out not only for the whole set of multispectral bands, but also for the subsets of spectral bands inside and outside the wavelength range covered by the panchromatic image. Better quality is observed in the fused image when only the bands covered by the panchromatic range are used. A further contribution is the application of these techniques not only to land and urban areas but also, as a novel analysis, to shallow-water ecosystems. Although the algorithms do not differ greatly over land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet ‘à trous’ through fractal dimension maps for shrublands and mixed ecosystems. Moreover, a quality map analysis was carried out to study the fusion result in each band at the local level. 
Finally, to demonstrate the performance of these pansharpening techniques, an advanced object-based (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet ‘à trous’ through fractal dimension maps as the best fusion algorithm for this ecosystem. PMID:28125055
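
    As a concrete illustration of the kind of quality indices used in such assessments, here is a minimal NumPy sketch of two common ones, ERGAS and mean per-band correlation; the abstract does not list its six indices, so these two are assumptions chosen for illustration:

```python
import numpy as np

def ergas(reference, fused, ratio=4):
    """ERGAS spectral quality index (lower is better).
    reference, fused: arrays of shape (bands, H, W); ratio: PAN/MS resolution ratio."""
    band_terms = []
    for ref_b, fus_b in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_b - fus_b) ** 2))
        band_terms.append((rmse / ref_b.mean()) ** 2)
    return 100.0 / ratio * np.sqrt(np.mean(band_terms))

def band_correlation(reference, fused):
    """Mean per-band Pearson correlation between reference and fused images."""
    return np.mean([np.corrcoef(r.ravel(), f.ravel())[0, 1]
                    for r, f in zip(reference, fused)])
```

    A perfect fusion gives ERGAS 0 and correlation 1; degradations raise the former and lower the latter.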

  13. Multi-focus image fusion with the all convolutional neural network

    NASA Astrophysics Data System (ADS)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

    A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining a decision map that yields a satisfactory fusion result is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm achieves state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
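
    The core architectural change described above, replacing max-pooling with a strided convolution, can be sketched in NumPy; this is a single-channel toy layer without learned weights or backpropagation, not the paper's trained network:

```python
import numpy as np

def strided_conv2d(x, kernel, stride=2):
    """Valid 2D convolution with stride -- the 'all convolutional' substitute
    for a max-pooling layer: downsampling is done by the kernel weights."""
    kh, kw = kernel.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out
```

    With an averaging kernel this reduces to average pooling; training would instead learn the kernel.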

  14. Illustration Watermarking for Digital Images: An Investigation of Hierarchical Signal Inheritances for Nested Object-based Embedding

    DTIC Science & Technology

    2007-02-23

    approach for signal-level watermark inheritance. SUBJECT TERMS: EOARD, Steganography, Image Fusion, Data Mining, Image ... In watermarking algorithms, a program interface and protocol has been developed, which allows control of the embedding and retrieval processes by the ... watermarks in an image. Watermarking algorithm (DLL); watermarking editor (Delphi); the user marks all objects: ci - class information, oi - object instance.

  15. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. 
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
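
    The fully constrained mixing step (fractions non-negative and summing to one) can be approximated with an augmented non-negative least-squares solve; a sketch assuming endmember spectra as columns of a bands-by-endmembers matrix, with the sum-to-one weight a tuning choice, not taken from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fully_constrained(endmembers, pixel, weight=1e3):
    """Estimate endmember fractions for one pixel: non-negative (via NNLS)
    and approximately sum-to-one (via a heavily weighted augmented row)."""
    bands, n_end = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, n_end))])
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions
```

    Dropping the augmented row gives the unconstrained-in-sum (non-negative only) variant; the partially constrained model would enforce the sum exactly without the non-negativity bound.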

  16. Hyperspectral face recognition with spatiospectral information fusion and PLS regression.

    PubMed

    Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal

    2015-03-01

    Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but is limited to ad hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least square regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near infrared response spectrum.

  17. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT followed by fusing low- and high-frequency components. The phase congruency that can provide a contrast and brightness-invariant representation is applied to fuse low-frequency coefficients, whereas the Log-Gabor energy that can efficiently determine the frequency coefficients from the clear and detail parts is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results of multimodal medical images than other algorithms. Further, the applicability of the proposed method has been demonstrated by a clinical example of a woman affected by a recurrent tumor. PMID:25214889
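
    A minimal sketch of the log-Gabor energy used for the high-frequency fusion rule, assuming an isotropic single-scale filter built directly in the frequency domain (the paper's exact filter-bank parameters are not reproduced):

```python
import numpy as np

def log_gabor_energy(image, f0=0.1, sigma_ratio=0.55):
    """Energy of an image filtered by an isotropic log-Gabor filter,
    constructed in the frequency domain (zero DC response by construction)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0); DC is zeroed below
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                          # log-Gabor has no DC response
    filtered = np.fft.ifft2(np.fft.fft2(image) * lg)
    return np.sum(np.abs(filtered) ** 2)
```

    A fusion rule in this spirit keeps, per region, the high-frequency coefficients from the source with the larger log-Gabor energy.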

  18. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The region surrounding the target area is then segmented as the background region. Image fusion is then applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. A variance ratio based on linear discriminant analysis (LDA) is employed to rank the feature set, and the most discriminative feature is selected for fusing the whole image. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factor dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
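
    The variance-ratio feature ranking can be sketched directly; the per-class sample vectors and feature names below are hypothetical stand-ins for the histogram features described above:

```python
import numpy as np

def variance_ratio(target_values, background_values):
    """LDA-style separability score for one candidate fusion feature:
    between-class variance over summed within-class variance."""
    mt, mb = np.mean(target_values), np.mean(background_values)
    between = (mt - mb) ** 2
    within = np.var(target_values) + np.var(background_values)
    return between / (within + 1e-12)

def best_feature(features):
    """features: dict name -> (target_samples, background_samples).
    Returns the most discriminative feature name."""
    return max(features, key=lambda k: variance_ratio(*features[k]))
```
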

  19. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method that particularly aims to increase the segmentability of echocardiographic features, such as the endocardium, and to improve image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are used to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method, based on the integration of PCA and DWT, gives the best result for the segmentability of cardiac ultrasound images and better performance in all metrics.
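
    A minimal sketch of the PCA-plus-DWT idea, assuming a single-level Haar transform in place of the paper's wavelet choice and omitting the inverse transform:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: approximation and three detail subbands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, (h, v, d)

def pca_weights(img1, img2):
    """Fusion weights from the leading eigenvector of the 2x2 covariance
    of the two source images (normalized to sum to one)."""
    cov = np.cov(np.vstack([img1.ravel(), img2.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    lead = np.abs(vecs[:, np.argmax(vals)])
    return lead / lead.sum()

def fuse_pca_dwt(img1, img2):
    """Hybrid fusion sketch: PCA weights on Haar approximations,
    absolute-maximum selection on the detail subbands."""
    (a1, d1), (a2, d2) = haar_dwt2(img1), haar_dwt2(img2)
    w = pca_weights(img1, img2)
    a = w[0] * a1 + w[1] * a2
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2)]
    return a, details
```

    A full pipeline would reconstruct the fused image with the inverse Haar transform.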

  20. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

    A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, visual features are first extracted to obtain the input hypercomplex matrix. Second, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum filtered at a scale selected by minimizing saliency-map entropy. Third, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures the thermal target information of the infrared image at different scales.
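
    A single-channel simplification of the frequency-domain saliency step can be sketched as follows; the paper uses a hypercomplex FFT over several feature channels and selects the smoothing scale by minimizing saliency-map entropy, both of which are omitted here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image, sigma=3.0):
    """Single-channel saliency sketch: smooth the amplitude spectrum with a
    Gaussian kernel of one scale, reconstruct from the smoothed amplitude and
    the original phase, and square the result."""
    F = np.fft.fft2(image)
    amplitude, phase = np.abs(F), np.angle(F)
    smoothed = gaussian_filter(amplitude, sigma)      # one scale of the scale-space
    recon = np.fft.ifft2(smoothed * np.exp(1j * phase))
    sal = gaussian_filter(np.abs(recon) ** 2, 1.0)    # light post-smoothing
    return sal / sal.max() if sal.max() > 0 else sal
```
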

  1. Engineering workstation: Sensor modeling

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  2. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and one in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  3. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

    Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, where the goal is to create a high-resolution image over a large domain. Stochastic methods have been used widely in image reconstruction from the two-point correlation function; the main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
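
    A toy version of the annealing loop, assuming a binary microstructure and the two-point correlation measured along one axis only; the very fast geometric cooling stands in for the paper's cooling schedules, and dynamic adaptation and parallelization are omitted:

```python
import numpy as np

def two_point_x(img, max_r=8):
    """Two-point probability S2(r) along x (periodic) for a binary image."""
    return np.array([np.mean(img * np.roll(img, r, axis=1)) for r in range(max_r)])

def anneal_reconstruct(target_s2, shape, n_ones, steps=3000,
                       t0=1e-3, cooling=0.999, seed=0):
    """Metropolis pixel-swap annealing with a very fast geometric cooling
    schedule; swapping a 0 with a 1 preserves the volume fraction."""
    rng = np.random.default_rng(seed)
    img = np.zeros(shape)
    flat = img.ravel()
    flat[rng.choice(img.size, size=n_ones, replace=False)] = 1.0
    energy = float(np.sum((two_point_x(img) - target_s2) ** 2))
    t = t0
    for _ in range(steps):
        i = rng.choice(np.flatnonzero(flat == 1.0))
        j = rng.choice(np.flatnonzero(flat == 0.0))
        flat[i], flat[j] = 0.0, 1.0
        new_energy = float(np.sum((two_point_x(img) - target_s2) ** 2))
        if new_energy < energy or rng.random() < np.exp((energy - new_energy) / t):
            energy = new_energy
        else:
            flat[i], flat[j] = 1.0, 0.0   # reject: revert the swap
        t *= cooling
    return img, energy
```
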

  4. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different image modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. Meanwhile, we also trained the network using the EBPA and the GSA individually. Analysis of the same evaluation indexes reveals that the hybrid EBPGSA not only outperformed both the EBPA and the GSA but also trained the neural network more accurately. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.

  6. Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132.

    PubMed

    Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L

    2017-07-01

    Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data and even anatomical atlases to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during the treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and enable adaptive radiotherapy using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy to provide up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input of another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and the result of a specific registration. Unfortunately, there is no standard mathematical formalism to perform this for real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of the performance of these software systems is also complicated by the lack of documentation available from commercial systems, leading to the use of these systems in an undesirable 'black-box' fashion. 
In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.

  7. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion uses technical means to make video streams obtained by different image sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog and low light, but capture poor image detail and do not match the human visual system. Visible-light imaging alone can provide detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational load, occupying considerable memory resources and demanding high clock rates; most existing implementations are in software (e.g., C++ or C), with few based on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and a gray-level weighted-average fusion is implemented on a hardware platform. The resulting fused image effectively improves the acquisition of information, increasing the amount of information in the image.

  8. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Multi-atlas label fusion approaches for image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more strongly the more the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
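
    The weighted distance-map fusion can be sketched as follows, assuming signed distance maps that are negative inside the structure; the learned probabilistic weights are passed in as plain numbers:

```python
import numpy as np

def fuse_distance_maps(distance_maps, weights):
    """Label fusion sketch: weighted average of signed distance maps
    (negative inside the structure); the fused segmentation is the set of
    points where the weighted map is below the zero level."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = np.tensordot(weights, np.asarray(distance_maps), axes=1)
    return fused < 0          # fused binary segmentation
```
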

  9. Robust mosaics of close-range high-resolution images

    NASA Astrophysics Data System (ADS)

    Song, Ran; Szymanski, John E.

    2008-03-01

    This paper presents a robust algorithm which relies only on the information contained within the captured images for the construction of massive composite mosaic images from close-range and high-resolution originals, such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a selection of corners and then employ both the intensity correlation and the spatial correlation between corresponding corners to match them. Next, we estimate the eight-parameter projective transformation matrix using a genetic algorithm. Lastly, image fusion using a weighted blending function together with intensity compensation produces an effective seamless mosaic image.
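
    The Harris corner step can be sketched with NumPy and a Gaussian window; k = 0.04 is the usual empirical constant, not a value from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, sigma=1.0, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, with the
    structure tensor M smoothed by a Gaussian window."""
    gy, gx = np.gradient(image.astype(float))
    ixx = gaussian_filter(gx * gx, sigma)
    iyy = gaussian_filter(gy * gy, sigma)
    ixy = gaussian_filter(gx * gy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2
```

    Corners give large positive responses, edges negative ones, and flat regions values near zero, which is what makes the response usable for selecting match candidates.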

  10. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, capturing color images requires a high-gain or long-exposure setting if a visible flash is to be avoided. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. One notable method estimates the luminance and chroma components of the improved color image from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component in this way is challenging: it requires generating learning data pairs, and the processes and algorithm are complex, making practical application difficult. To reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image using coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion effect in terms of color fidelity and texture quality as the earlier method, while the algorithm is simpler and more practical.
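
    A sketch of the weighting idea, assuming standard-deviation-based weights and a first-order statistics match to the color luminance; the exact weighted-coefficient formula of the paper is not reproduced:

```python
import numpy as np

def weighted_luminance(nir, denoised_luma):
    """Fuse an NIR image with the luminance of a denoised color image.
    The weights here are derived from the standard deviations of the two
    sources (an assumed form), and the fused result is rescaled to match the
    mean and standard deviation of the color luminance for color fidelity."""
    w_nir = nir.std() / (nir.std() + denoised_luma.std() + 1e-12)
    w_col = 1.0 - w_nir
    fused = w_nir * nir + w_col * denoised_luma
    fused = (fused - fused.mean()) / (fused.std() + 1e-12)
    return fused * denoised_luma.std() + denoised_luma.mean()
```
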

  11. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy.

    PubMed

    Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-01

    To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. The volume measurement error was 2.8 ± 1.5% for 3D US, 4.4 ± 3.0% for CT, and 3.1 ± 2.0% for MR. These results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  12. Nonsubsampled rotated complex wavelet transform (NSRCxWT) for medical image fusion related to clinical aspects in neurocysticercosis.

    PubMed

    Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil

    2017-02-01

    Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (the definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here combines CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract the complementary and edge-related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation of this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be part of a computer-aided detection and diagnosis (CADD) system that assists radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
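The average and maximum value selection fusion rules named in the abstract above are standard subband rules and can be sketched transform-agnostically. The variable names (`low_*`, `highs_*` for NSRCxWT low-pass and directional subbands) are hypothetical; NumPy is assumed.

```python
import numpy as np

def fuse_subbands(low_a, low_b, highs_a, highs_b):
    # Average rule for the approximation (low-pass) plane.
    fused_low = 0.5 * (low_a + low_b)
    # Maximum-value selection for each directional (high-pass) plane:
    # keep the coefficient with the larger magnitude at every position.
    fused_highs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                   for ha, hb in zip(highs_a, highs_b)]
    return fused_low, fused_highs
```

The fused planes would then be passed to the inverse transform to obtain the composite image.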

  13. Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images

    PubMed Central

    Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni

    2018-01-01

    Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the revisit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse the images from these two satellites to generate images with high spatial resolution (2 m) and high temporal resolution (1 or 2 days) for applications such as damage assessment and border monitoring that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), and have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three approaches have comparable performance and can all generate high-quality prediction images. PMID:29614745

  14. Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.

    PubMed

    Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni

    2018-03-31

    Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the revisit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse the images from these two satellites to generate images with high spatial resolution (2 m) and high temporal resolution (1 or 2 days) for applications such as damage assessment and border monitoring that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), and have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three approaches have comparable performance and can all generate high-quality prediction images.

  15. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for lowpass frequency coefficients and also investigate the connection between the low-frequency image and the defocused image. Generally, the NSCT decomposition places detail image information at different scales and in different directions within the bandpass subband coefficients. In order to correctly select the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly distinguishes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by applying the inverse NSCT to the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.

  16. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The goal of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. This process mainly involves two steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. These qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image, whose quality is evaluated with general fusion metrics. The results show its superiority among state-of-the-art Multi-Resolution Transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum selection fusion rule.
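As a rough sketch of DWT-domain fusion of this kind, the following NumPy code implements a one-level Haar transform and fuses two registered images with simple average/max-selection rules. The HVS weighting itself is replaced here by plain max selection, which is an assumed simplification rather than the paper's method; inputs must have even dimensions.

```python
import numpy as np

def haar2(x):
    # One-level 2-D Haar transform: returns (LL, (LH, HL, HH)).
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, bands):
    # Exact inverse of haar2 (perfect reconstruction).
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = a + d; x[1::2, :] = a - d
    return x

def dwt_fuse(img_a, img_b):
    # Average the approximation band, max-select the detail bands.
    la, ba = haar2(img_a); lb, bb = haar2(img_b)
    fl = 0.5 * (la + lb)
    fb = tuple(np.where(np.abs(pa) >= np.abs(pb), pa, pb)
               for pa, pb in zip(ba, bb))
    return ihaar2(fl, fb)
```

A multi-level scheme would recurse on the LL band before fusing.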

  17. Fusion of infrared and visible images based on BEMD and NSDFB

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

    This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for the decomposition and fusion of non-linear signals. NSDFB provides directional filtering at the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two corresponding fusion rules are used in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in terms of both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, salient target information, and rich detail information, making them well suited to human visual characteristics and machine perception.

  18. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of a lens's limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but in block-based multi-focus image fusion methods, blocking artifacts often occur. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes the characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm that uses LUE-SSIM as its objective function then optimizes the block size used to construct the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in assessing the quality of images with Gaussian defocus blur. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking artifacts of the fused image, and that it effectively preserves the undistorted edge details in the in-focus regions of the source images.
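A minimal sketch of block-based multi-focus fusion follows, with local variance and a fixed block size standing in for the paper's LUE-SSIM score and PSO-optimized block size (both stand-ins are assumed simplifications; NumPy, single-channel images).

```python
import numpy as np

def block_fuse(img_a, img_b, block=8):
    # For each block, copy the pixels from whichever source image is
    # sharper there, using local variance as a simple focus measure.
    out = np.empty_like(img_a)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            pa = img_a[i:i+block, j:j+block]
            pb = img_b[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = pa if pa.var() >= pb.var() else pb
    return out
```

Hard per-block selection like this is precisely what produces blocking artifacts at focus boundaries, which motivates the adaptive block size and perceptual metric described in the record.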

  19. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various scenarios, and the results demonstrate that the proposed method performs much better both visually and quantitatively.

  20. Multi-focus image fusion and robust encryption algorithm based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong

    2017-06-01

    Multi-focus image fusion schemes have been studied in recent years. However, little work has been done on the transmission security of multi-focus images. This paper proposes a scheme that can reduce the data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition generates complete scene images and optimizes perception for the human eye. The fused images are sparsely represented with the DCT and sampled with a structurally random matrix (SRM), which reduces the data volume and provides initial encryption. The obtained measurements are then further encrypted to resist noise and cropping attacks by combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
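A structurally random measurement can be sketched as "permute, apply a fast orthonormal transform, subsample." A Hadamard transform is swapped in here for the DCT used in the record above, and the power-of-two signal length, seed handling, and function name are all illustrative assumptions.

```python
import numpy as np

def srm_measure(x, m, seed=0):
    # Structurally random matrix sampling of a 1-D signal x whose length
    # is a power of two: randomly permute the samples, apply a normalized
    # Hadamard transform, then keep a random subset of m coefficients.
    # The permutation and row choice double as a secret key.
    rng = np.random.RandomState(seed)
    n = x.size
    perm = rng.permutation(n)
    h = np.array([[1.0]])
    while h.shape[0] < n:                 # Sylvester construction
        h = np.block([[h, h], [h, -h]])
    h /= np.sqrt(n)                       # make the transform orthonormal
    coeffs = h @ x.ravel()[perm]
    rows = rng.choice(n, size=m, replace=False)
    return coeffs[rows]
```

Because the transform is orthonormal, keeping all n coefficients preserves the signal energy; keeping m < n gives the compressive measurements that a sparse-recovery solver would later invert.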

  1. A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain

    NASA Astrophysics Data System (ADS)

    Cheng, Boyang; Jin, Longxu; Li, Guoning

    2018-06-01

    Visible light and infrared image fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by the NSST. Then, an improved novel sum-modified Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from the singular value decomposition of local areas in each source image, is used as the adaptive linking strength to enhance fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.

  2. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
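Sigmoidal boosting followed by a weighted blend can be sketched as below. The gain, midpoint, and blend weight are illustrative guesses rather than the paper's parameters, and the fusion here is a plain pixel-domain blend, not the single-pass JPEG-domain pipeline the record describes.

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    # Sigmoidal boosting lifts the dark, short-exposure image so its
    # sharp details can compete with the brighter long exposure.
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_exposures(short_exp, long_exp, w_short=0.5):
    # Blend the boosted short exposure (sharp) with the long exposure
    # (high SNR); inputs are assumed normalized to [0, 1].
    boosted = sigmoid_boost(short_exp)
    return w_short * boosted + (1.0 - w_short) * long_exp
```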

  3. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method that particularly aims to increase the segmentability of echocardiography features such as the endocardium and to improve image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information using an integration feature across all the overlapping images, computed with a combination of principal component analysis and discrete wavelet transform. For evaluation, the results of several well-known techniques are compared with those of the proposed method, and different metrics are implemented to evaluate its performance. It is concluded that the presented pixel-based method based on the integration of PCA and DWT yields the best segmentability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
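A classic PCA weighting rule of the kind integrated here derives per-source weights from the principal eigenvector of the two images' covariance. This sketch (NumPy; registered single-channel inputs assumed) shows only that PCA step, not the DWT integration the record combines it with.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    # Treat the two registered images as two variables observed at every
    # pixel; the principal eigenvector of their 2x2 covariance gives the
    # weights, so the source carrying more variance dominates the fusion.
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])               # principal component direction
    w = v / v.sum()
    return w[0] * img_a + w[1] * img_b
```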

  4. Detection of buried objects by fusing dual-band infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.

    1993-11-01

    We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible-wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.

  5. [An improved low spectral distortion PCA fusion method].

    PubMed

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, this paper proposes an improved low-spectral-distortion PCA fusion method. The method uses the NCUT (normalized cut) image segmentation algorithm to divide a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory. These masks are used to cut the hyperspectral image and the high-resolution image into sub-region objects. All corresponding sub-region objects of the hyperspectral image and the high-resolution image are fused using the PCA method, and all sub-region fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method has the same ability to enhance spatial resolution and a greater ability to preserve spectral fidelity.
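The PCA substitution step that such fusion builds on can be sketched as follows. This is the generic component-substitution pansharpening scheme applied globally, whereas the record applies PCA per segmented sub-region; the function name and the simple moment-matching of the pan band are assumptions.

```python
import numpy as np

def pca_pansharpen(hs, pan):
    # hs: (bands, H, W) spectral cube already upsampled to the pan grid;
    # pan: (H, W) high-resolution panchromatic image.
    # Replace the first principal component of the cube with the
    # statistics-matched pan band, then invert the PCA.
    b, h, w = hs.shape
    x = hs.reshape(b, -1)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    cov = np.cov(xc)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]                  # columns in descending order
    pcs = vecs.T @ xc                     # rows are principal components
    p = pan.ravel()
    # Match the pan band's mean/std to the first PC before substitution,
    # which limits (but does not eliminate) spectral distortion.
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    out = vecs @ pcs + mean
    return out.reshape(b, h, w)
```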

  6. Geometry planning and image registration in magnetic particle imaging using bimodal fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, F., E-mail: f.werner@uke.de; Hofmann, M.; Them, K.

    Purpose: Magnetic particle imaging (MPI) is a quantitative imaging modality that allows the distribution of superparamagnetic nanoparticles to be visualized. Compared to other imaging techniques like x-ray radiography, computed tomography (CT), and magnetic resonance imaging (MRI), MPI only provides a signal from the administered tracer, but no additional morphological information, which complicates geometry planning and the interpretation of MP images. The purpose of the authors’ study was to develop bimodal fiducial markers that can be visualized by MPI and MRI in order to create MP–MR fusion images. Methods: A certain arrangement of three bimodal fiducial markers was developed and used in a combined MRI/MPI phantom and also during in vivo experiments in order to investigate its suitability for geometry planning and image fusion. An algorithm for automated marker extraction in both MR and MP images and rigid registration was established. Results: The developed bimodal fiducial markers can be visualized by MRI and MPI and allow for geometry planning as well as automated registration and fusion of MR–MP images. Conclusions: To date, exact positioning of the object to be imaged within the field of view (FOV) and the assignment of reconstructed MPI signals to corresponding morphological regions has been difficult. The developed bimodal fiducial markers and the automated image registration algorithm help to overcome these difficulties.

  7. A fusion algorithm for infrared and visible based on guided filtering and phase congruency in NSST domain

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng

    2017-10-01

    A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. First, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is used to compute the filtering output of the base layer and the saliency maps. Next, a novel weighted-average technique makes full use of scene consistency to obtain the fused coefficient map. Finally, the fused image is acquired by taking the inverse NSST of the fused coefficient map. Experiments show that the proposed approach achieves better performance than other methods in terms of both subjective visual effect and objective assessment.
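A minimal single-channel guided filter in the style of He et al. can be written with box (mean) filters. The radius and regularization values below are illustrative defaults, and this is a generic sketch rather than the authors' exact formulation.

```python
import numpy as np

def box(x, r):
    # Mean filter over a (2r+1)x(2r+1) window via a summed-area table,
    # with edge replication so borders stay well defined.
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(guide, src, r=4, eps=1e-3):
    # Edge-preserving smoothing of src steered by guide: locally fit
    # src as a linear function of guide (a*guide + b) in each window.
    mean_i = box(guide, r)
    mean_p = box(src, r)
    corr_ip = box(guide * src, r)
    corr_ii = box(guide * guide, r)
    a = (corr_ip - mean_i * mean_p) / (corr_ii - mean_i ** 2 + eps)
    b = mean_p - a * mean_i
    return box(a, r) * guide + box(b, r)
```

In a fusion pipeline like the one above, `guide` would be a source layer and `src` the saliency map being refined into smooth fusion weights.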

  8. Spectral edge: gradient-preserving spectral mapping for image fusion.

    PubMed

    Connah, David; Drew, Mark S; Finlayson, Graham D

    2015-12-01

    This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.

  9. Multi-exposure high dynamic range image synthesis with camera shake correction

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene, and these assumptions limit their application. At present, the widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without ghosting, we introduce an efficient Low Dynamic Range (LDR) image capturing approach and propose a registration method based on ORiented Brief (ORB) and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghosting by registering and fusing four multi-exposure images.
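The histogram-equalization step used to align the LDR frames' brightness before ORB matching can be sketched for 8-bit grayscale images as follows (a standard CDF remapping; the record does not specify the exact variant used).

```python
import numpy as np

def hist_equalize(img):
    # Map gray levels through the normalized cumulative distribution so
    # that differently exposed frames share a similar brightness
    # distribution before feature matching; img is a uint8 array.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    scale = 255.0 / max(cdf[-1] - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]
```

After equalization, both frames present similar intensities to the ORB detector, which makes binary descriptor matching across exposures far more reliable.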

  10. Computational Intelligence for Medical Imaging Simulations.

    PubMed

    Chang, Victor

    2017-11-25

    This paper describes how to simulate medical imaging by computational intelligence to explore areas that cannot be easily achieved by traditional ways, including genes and proteins simulations related to cancer development and immunity. This paper has presented simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to the digital surface theories to simulate how biological units can get together to form bigger units, until the formation of the entire unit of biological subject. The M-Fusion and M-Update function by the fusion algorithm can achieve a good performance evaluation which can process and visualize up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research offered by simulations and visualization.

  11. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or different sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications; it integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors, which are used to fuse the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image, and an enhanced fused image is then obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy, and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
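DCT-domain weighted fusion can be sketched with an explicit orthonormal DCT-II matrix (adequate for small square images). The scalar weight is fixed here, whereas the record obtains the weighting factors with PSO; that substitution and the function names are stated simplifications.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an explicit n x n matrix.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1.0 / np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def dct_fuse(img_a, img_b, w=0.5):
    # Transform both images, blend the coefficients with weight w,
    # and invert the transform to get the fused image.
    n = img_a.shape[0]
    d = dct_matrix(n)
    ca = d @ img_a @ d.T
    cb = d @ img_b @ d.T
    cf = w * ca + (1.0 - w) * cb
    return d.T @ cf @ d
```

Because the DCT is linear, a single global weight reduces to a pixel-domain blend; the benefit in the record's scheme comes from choosing the weights per coefficient band via PSO.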

  12. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, allowing the resolution of the images to be improved and the interpretable content to be increased. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing this redundancy by using different sampling patterns. This work presents a compressive HS and MS image fusion approach that uses a high-dimensional joint sparse model, formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different spectral image fusion scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.

  13. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects.
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
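
    The object-linking step can be sketched as follows: features detected in successive 2D slices are joined by a directed edge whenever their centers fall within a threshold radius, so chains of edges trace a 3D object such as a pipe. This is a simplified, hypothetical rendering of the directed-graph data structure described above.

```python
import math

def link_objects(slices, radius=2.0):
    """Link feature centers in successive 2-D slices into a directed edge list:
    an edge joins detections in adjacent slices that lie within `radius` of
    each other (a simplified stand-in for the directed-graph object linking)."""
    edges = []
    for z in range(len(slices) - 1):
        for i, (x1, y1) in enumerate(slices[z]):
            for j, (x2, y2) in enumerate(slices[z + 1]):
                if math.hypot(x2 - x1, y2 - y1) <= radius:
                    edges.append(((z, i), (z + 1, j)))  # edge directed down the stack
    return edges

# A straight "pipe" detected in three slices, plus one unrelated detection.
slices = [[(10.0, 10.0)], [(10.5, 10.2), (40.0, 40.0)], [(11.0, 10.1)]]
edges = link_objects(slices, radius=2.0)
```

    The two edges form a chain through the nearby detections; the outlier at (40, 40) stays unlinked.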

  14. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    NASA Astrophysics Data System (ADS)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  15. Autofocus algorithm using one-dimensional Fourier transform and Pearson correlation

    NASA Astrophysics Data System (ADS)

    Bueno Mario, A.; Alvarez-Borrego, Josue; Acho, L.

    2004-10-01

    A new autofocus algorithm for a Z-automated microscope, based on the one-dimensional Fourier transform and Pearson correlation, is proposed. Our goal is to determine the best-focused plane quickly and accurately. We capture, in bright and dark field, several sets of images at different Z distances from a biological sample. The algorithm applies the one-dimensional Fourier transform to a previously defined pattern of vectors to obtain their frequency content, and compares the Pearson correlation of these frequency vectors against those of the reference image (the most out-of-focus image); from this comparison, the best-focused image is found. Experimental results showed that the algorithm determines the best focal plane from the captured images quickly and accurately. In conclusion, the algorithm can be implemented in real-time systems owing to its fast response time, accuracy, and robustness. It can be used to obtain focused images in bright and dark field, and it can be extended with fusion techniques to construct multifocus final images, which is beyond the scope of this paper.
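
    A minimal sketch of this kind of focus search follows. The synthetic Z-stack, the use of a single central row, and the crude rule for picking the defocused reference are all assumptions for illustration, not details from the paper; the idea carried over is that the sharpest slice's spectrum correlates least with that of the most defocused one.

```python
import numpy as np

def spectrum(img):
    """1-D magnitude spectrum of the image's central row (DC removed)."""
    row = img[img.shape[0] // 2, :]
    return np.abs(np.fft.rfft(row - row.mean()))

def best_focus_index(stack):
    """Pick the slice whose spectrum correlates least (Pearson) with that of
    the most defocused slice: blur strips high frequencies, so the sharpest
    image resembles the blurred reference the least."""
    ref = min(stack, key=lambda im: im.std())   # crude proxy: flattest slice
    r = spectrum(ref)
    corrs = [np.corrcoef(spectrum(im), r)[0, 1] for im in stack]
    return int(np.argmin(corrs))

# Synthetic Z-stack: a sharp stripe pattern blurred by growing amounts.
pattern = np.tile(np.sign(np.sin(np.linspace(0.0, 20.0 * np.pi, 128))), (64, 1))

def blur_rows(im, k):
    if k == 0:
        return im
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, im)

stack = [blur_rows(pattern, k) for k in (9, 5, 0, 5, 9)]   # middle slice in focus
idx = best_focus_index(stack)
```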

  16. Formulation of image fusion as a constrained least squares optimization problem

    PubMed Central

    Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge

    2017-01-01

    Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm, based on widely available, robust, and simple numerical methods, that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
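
    The pixel-level idea can be illustrated with a toy version (my simplification, not the authors' exact cost function): each fused intensity minimizes a two-term quadratic balancing fidelity to the color image's intensity and to the monochrome image, which has a closed-form per-pixel solution, and the color channels are then rescaled to preserve hue.

```python
import numpy as np

def fuse_ls(color, mono, mu=4.0):
    """Per-pixel least-squares fusion: the fused intensity f minimizes
    (f - i)^2 + mu*(f - m)^2, with i = color intensity and m = monochrome,
    giving the closed form f = (i + mu*m) / (1 + mu); the color channels
    are rescaled by f/i to keep the hue (a hypothetical simplification)."""
    i = color.mean(axis=2)                       # current intensity of the color image
    f = (i + mu * mono) / (1.0 + mu)             # convex-combination minimizer
    scale = np.divide(f, i, out=np.ones_like(f), where=i > 1e-6)
    return np.clip(color * scale[..., None], 0.0, 1.0)

rng = np.random.default_rng(2)
color = rng.random((8, 8, 3)) * 0.5              # low-res color (assumed upsampled)
mono = rng.random((8, 8))                        # high-res monochrome
fused = fuse_ls(color, mono)
```

    With mu = 4, the fused intensity moves most of the way toward the monochrome image while the hue of the color input is retained.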

  17. Decision-level fusion of SAR and IR sensor information for automatic target detection

    NASA Astrophysics Data System (ADS)

    Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon

    2017-05-01

    We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target-silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map, and basic belief assignment is then used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map at the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false-alarm rate than the conventional algorithms.

  18. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    PubMed

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for facing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of fusing a low-resolution multispectral image with a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.

  19. Research on segmentation based on multi-atlas in brain MR image

    NASA Astrophysics Data System (ADS)

    Qian, Yuejing

    2018-03-01

    Accurate segmentation of specific tissues in brain MR images can be achieved effectively with multi-atlas-based segmentation methods, whose accuracy mainly depends on the image registration accuracy and the label fusion scheme. This paper proposes an automatic multi-atlas-based segmentation method for brain MR images. First, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion, we propose a new algorithm that detects abnormal sparse patches and discards the corresponding abnormal sparse coefficients; the label is estimated from the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM), and the majority voting method (MV). Our experimental results show that the proposed method is effective for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.

  20. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain.

    PubMed

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-04-11

    Fusing infrared and visible images typically involves many manually set parameters, and the resulting artifacts cause a loss of detail in the fused image. To overcome this, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail, while the low-frequency bands are fused under a saliency constraint, making the targets more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods.

  1. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain

    PubMed Central

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-01-01

    Fusing infrared and visible images typically involves many manually set parameters, and the resulting artifacts cause a loss of detail in the fused image. To overcome this, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail, while the low-frequency bands are fused under a saliency constraint, making the targets more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods. PMID:29641505

  2. Intensity-hue-saturation-based image fusion using iterative linear regression

    NASA Astrophysics Data System (ADS)

    Cetin, Mufit; Tepecik, Abdulkadir

    2016-10-01

    The image fusion process basically produces a high-resolution image by combining the superior features of a low-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause spectral distortions by assigning weaker weights to them, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm over several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
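
    For reference, the classical (non-adaptive) IHS-style injection that SA-IHS builds on can be sketched in a few lines: the panchromatic-minus-intensity difference is added to every band. The spatially adaptive variant would additionally weight `delta` per pixel; those weights are specific to the paper and omitted here.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Classical (generalized) IHS pansharpening: inject the difference
    between the panchromatic band and the intensity into every band."""
    intensity = ms.mean(axis=2)          # I component (equal-weight average)
    delta = pan - intensity              # spatial detail to inject
    return ms + delta[..., None]

rng = np.random.default_rng(3)
ms = rng.random((16, 16, 3))             # multispectral image, assumed upsampled
pan = rng.random((16, 16))               # high-resolution panchromatic band
fused = ihs_fuse(ms, pan)
```

    By construction the fused image's intensity equals the panchromatic band exactly, which is what makes the method fast and sharp, and also why unweighted injection can distort colors.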

  3. The Research on Dryland Crop Classification Based on the Fusion of SENTINEL-1A SAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.

    2018-04-01

    In recent years, the rapid upgrading and improvement of SAR sensors have provided beneficial complements to traditional optical remote sensing in terms of theory, technology, and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, and emphasis was placed on dryland crop classification under a complex crop planting structure, taking corn and cotton as the research objects. Considering the differences among data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey, and wavelet transform (WT) methods were compared, and the GS and Brovey methods proved more applicable in the study area. The classification was then conducted based on an object-oriented technique. For the GS and Brovey fusion images and the GF-1 optical image, the nearest-neighbour algorithm was adopted for supervised classification with the same training samples. Based on the sample plots in the study area, an accuracy assessment was conducted subsequently. The overall accuracy and kappa coefficient of the fusion images were both higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8%, and the kappa coefficient was 0.644. Thus, the results showed that the GS and Brovey fusion images were superior to optical images for dryland crop classification. This study suggests that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.

  4. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

    The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, previous studies by both domestic and foreign researchers were based on a single image capture device while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were rapidly registered through the scale invariant feature transform algorithm. Combined with the random sample consensus algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well even in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented rear-view image mosaic algorithm had the best information preservation, the shortest computation time, and the most complete preservation of image detail features from the source images, and it also performed better in real time. The method introduced in this paper provides the basis for researching the expansion of the visual angle of a car rear-view image in all-weather conditions.
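
    The gradated in-and-out fusion amounts to a linear cross-fade across the registered overlap region. A minimal single-channel sketch (the overlap width and image layout are illustrative assumptions):

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Gradated in-and-out (linear ramp) fusion of two horizontally
    overlapping, already-registered images; `overlap` is the width in px."""
    h, w_l = left.shape
    w_r = right.shape[1]
    out = np.zeros((h, w_l + w_r - overlap))
    out[:, :w_l - overlap] = left[:, :w_l - overlap]     # left-only region
    out[:, w_l:] = right[:, overlap:]                    # right-only region
    alpha = np.linspace(0.0, 1.0, overlap)               # ramp from left to right
    out[:, w_l - overlap:w_l] = (1 - alpha) * left[:, -overlap:] + alpha * right[:, :overlap]
    return out

a = np.full((4, 6), 10.0)
b = np.full((4, 6), 20.0)
m = blend_overlap(a, b, overlap=2)
```

    The ramp weights sum to one at every column, so the seam fades smoothly from one source to the other instead of showing a hard edge.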

  5. Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images

    PubMed Central

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

    Fusion of the anatomic information in computed tomography (CT) and the functional information in F18-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, for designing radiotherapy treatment plans, and for the staging of cancer. Although current PET and CT images can be acquired from a combined F18-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined F18-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images cannot be neglected, although a combined F18-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images. PACS numbers: 87.57.nj, 87.57.Q-, 87.57.uk PMID:26218993
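
    The evaluation metric used here is standard: the modified Hausdorff distance of Dubuisson and Jain takes the larger of the two directed mean nearest-neighbour distances between point sets. A small self-contained sketch:

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance: max of the two mean nearest-neighbour
    distances between point sets A and B (less outlier-sensitive than the
    classical Hausdorff max-min distance)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

A = [(0, 0), (1, 0)]
B = [(0, 1), (1, 1), (5, 1)]
d_mh = modified_hausdorff(A, B)
```

    Averaging the nearest-neighbour distances, rather than taking their maximum, is what makes the metric robust enough to rank registration results.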

  6. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    PubMed

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

    Fusion of the anatomic information in computed tomography (CT) and the functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, for designing radiotherapy treatment plans, and for the staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d(MH)) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of d(MH) were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images cannot be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images.

  7. Simultaneous usage of pinhole and penumbral apertures for imaging small scale neutron sources from inertial confinement fusion experiments.

    PubMed

    Guler, N; Volegov, P; Danly, C R; Grim, G P; Merrill, F E; Wilde, C H

    2012-10-01

    Inertial confinement fusion experiments at the National Ignition Facility are designed to understand the basic principles of creating self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic capsules. The neutron imaging diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by observing neutron images in two different energy bands for primary (13-17 MeV) and down-scattered (6-12 MeV) neutrons. From this, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. These experiments provide small sources with high yield neutron flux. An aperture design that includes an array of pinholes and penumbral apertures has provided the opportunity to image the same source with two different techniques. This allows for an evaluation of these different aperture designs and reconstruction algorithms.

  8. Minimizing the semantic gap in biomedical content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. It presents two algorithms: a learning-based feature selection and fusion algorithm, and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve retrieval accuracy: image feature extraction, selection, and fusion; similarity measurements; the association of low-level features with high-level concepts; and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in mean average precision (MAP) over conventional image similarity computation methods.

  9. Underwater video enhancement using multi-camera super-resolution

    NASA Astrophysics Data System (ADS)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, and satellite and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. To compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results show that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
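
    Both metrics are standard and easy to sketch. Below, PSNR is computed exactly, while the SSIM statistic is evaluated once over the whole image (the full index averages the same expression over local sliding windows); image sizes and noise level are illustrative.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=1.0):
    """Single-window (global) SSIM statistic with the usual stabilizers."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()           # covariance of the two images
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(4)
ref = rng.random((32, 32))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
```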

  10. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
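
    A compact sketch of this pipeline follows, using a box-window guided filter and a well-exposedness weight only; the paper also uses a gradient feature, and its exact parameter values are not reproduced here.

```python
import numpy as np

def box(a, r):
    """Mean over a (2r+1)x(2r+1) window, via padded 2-D cumulative sums."""
    p = np.pad(a, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    n = 2 * r + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / n ** 2

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter: the output is locally a linear function a*I + b of the
    guidance image I, so refined weight maps follow I's edges."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI ** 2 + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

def fuse(stack, r=2, eps=1e-3):
    """Weighted-sum fusion with guided-filter-refined well-exposedness maps."""
    w = [np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2)) for im in stack]
    w = [np.clip(guided_filter(im, wi, r, eps), 1e-6, None) for im, wi in zip(stack, w)]
    total = sum(w)
    return sum((wi / total) * im for wi, im in zip(w, stack))

rng = np.random.default_rng(8)
base = rng.random((16, 16))
under, over = 0.4 * base, np.clip(base + 0.5, 0.0, 1.0)   # two synthetic exposures
fused = fuse([under, over])
```

    Because the normalized weights are positive and sum to one, the fused value at every pixel lies between the corresponding source values, which is exactly the weighted-sum construction described above.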

  11. An acceleration system for Laplacian image fusion based on SoC

    NASA Astrophysics Data System (ADS)

    Gao, Liwen; Zhao, Hongtu; Qu, Xiujie; Wei, Tianbo; Du, Peng

    2018-04-01

    Based on an analysis of the Laplacian image fusion algorithm, this paper proposes a partial-pipelining, modular processing architecture, and a SoC-based acceleration system is implemented accordingly. Full pipelining is used in the design of each module, and modules in series form the partial pipeline with a unified data format, which is easy to manage and reuse. Integrated with an ARM processor, DMA, and an embedded bare-metal program, the system achieves 4 layers of Laplacian pyramid on the Zynq-7000 board. Experiments show that, with small resource consumption, a pair of 256×256 images can be fused within 1 ms while maintaining a good fusion effect.
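
    The algorithm being accelerated can be sketched in software. This toy version uses 2x2 average pooling and nearest-neighbour upsampling instead of the classical 5-tap Gaussian kernel (an assumption for brevity), so only the pyramid structure and the max-absolute fusion rule carry over.

```python
import numpy as np

def down(a):
    """2x2 average pooling."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def up(a):
    """Nearest-neighbour 2x upsampling."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

def lap_pyramid(a, levels):
    """Detail (Laplacian) layers plus the coarsest residual."""
    pyr = []
    for _ in range(levels):
        d = down(a)
        pyr.append(a - up(d))        # what downsampling lost at this level
        a = d
    pyr.append(a)                    # coarsest residual
    return pyr

def fuse_pyramids(pa, pb):
    """Keep the stronger detail coefficient; average the residual."""
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb) for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    a = pyr[-1]
    for lap in reversed(pyr[:-1]):
        a = up(a) + lap
    return a

rng = np.random.default_rng(5)
x, y = rng.random((16, 16)), rng.random((16, 16))
f = reconstruct(fuse_pyramids(lap_pyramid(x, 3), lap_pyramid(y, 3)))
```

    With these exact down/up operators the decomposition is perfectly invertible, which is the property the hardware pipeline relies on when it rebuilds the fused image from its layers.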

  12. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of the human brain and the presence of intensity inhomogeneities. The aim of this work is to propose a method that effectively segments brain tumor from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion, and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumor from MR images.

  13. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
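
    The phase-based core of such matchers is compact enough to sketch: the normalized cross-spectrum discards magnitude, and the inverse transform yields a delta-like peak at the displacement between the two images. The feature-based stage and the band-limiting used in practice are omitted here.

```python
import numpy as np

def poc(f, g):
    """Phase-only correlation surface: inverse FFT of the phase of the
    cross-spectrum; a sharp peak marks the shift taking f to g."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = G * np.conj(F)
    R /= np.maximum(np.abs(R), 1e-12)   # keep phase only
    return np.real(np.fft.ifft2(R))

rng = np.random.default_rng(6)
f = rng.random((32, 32))
g = np.roll(f, (3, 5), axis=(0, 1))     # g is f cyclically shifted by (3, 5)
surface = poc(f, g)
peak = np.unravel_index(np.argmax(surface), surface.shape)
```

    Because only phase is kept, the peak stays sharp even when the images differ in contrast or are weakly impressed, which is the robustness property the abstract refers to.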

  14. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain lowresolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using color image as a guide image is an efficient way to get a HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm, which uses a HR color image as a guide image and a LR depth image as input. We use the fusion filter of guided filter and edge based joint bilateral filter to get HR depth image. Our experimental results on Middlebury 2005 datasets show that our method can provide better quality in HR depth images both numerically and visually.

  15. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.

  16. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    The image fusion process consolidates data and information from various images of the same scene into a single image. Each of the source images may represent only a partial view of the scene, and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed utilizing the Discrete Cosine Transform (DCT) to combine the source images into a single compact image containing a more accurate description of the scene than any of the individual source images. In addition, the fused image retains the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) each RGB colour source image is split into three channels, R, G, and B; (2) the DCT algorithm is applied to each channel (R, G, and B); (3) the variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the R channels of the source images are compared on the basis of their variance values, and the block with the maximum variance is selected as the block for the new image, a process repeated for all channels of the source images; (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are then combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image in the image fusion process. The proposed approach is evaluated using three measurement units: the average of Q(abf), the standard deviation, and the peak signal-to-noise ratio. The experimental results of the proposed technique show good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
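
    Steps (2)-(4) of such a scheme can be sketched for a single channel. The variance test below uses only the AC coefficients, and the detail-free "blurred" input is synthetic; both are illustrative choices rather than details from the paper.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def fuse_channel(a, b, n=8):
    """Block-wise multifocus fusion: for each 8x8 block, keep the source
    block whose DCT AC coefficients have the larger variance (a proxy for
    sharpness), then apply the inverse DCT."""
    D = dct_matrix(n)
    out = np.empty_like(a)
    for i in range(0, a.shape[0], n):
        for j in range(0, a.shape[1], n):
            ca = D @ a[i:i + n, j:j + n] @ D.T      # forward 2-D DCT of each block
            cb = D @ b[i:i + n, j:j + n] @ D.T
            c = ca if ca.flat[1:].var() >= cb.flat[1:].var() else cb
            out[i:i + n, j:j + n] = D.T @ c @ D     # inverse 2-D DCT
    return out

rng = np.random.default_rng(7)
sharp = rng.random((16, 16))
blurred = np.full((16, 16), sharp.mean())           # flat, detail-free counterpart
fused = fuse_channel(sharp, blurred)
```

    Every block of the flat image has zero AC variance, so the fusion keeps the detailed source everywhere, up to DCT round-trip rounding.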

  17. Development and Application of Non-Linear Image Enhancement and Multi-Sensor Fusion Techniques for Hazy and Dark Imaging

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur

    2005-01-01

    The purpose of this research was to develop enhancement and multi-sensor fusion algorithms and techniques to make it safer for the pilot to fly in what would normally be considered Instrument Flight Rules (IFR) conditions, where pilot visibility is severely restricted by fog, haze, or other weather phenomena. We proposed to use the non-linear Multiscale Retinex (MSR) as the basic driver for developing an integrated enhancement and fusion engine. When we started this research, the MSR was being applied primarily to grayscale imagery such as medical images, or to three-band color imagery such as that produced in consumer photography; it was not, however, being applied to other imagery such as that produced by infrared image sources. We felt that by using the MSR algorithm in conjunction with multiple imaging modalities such as long-wave infrared (LWIR), short-wave infrared (SWIR), and visible spectrum (VIS) imagery, we could substantially improve on the then state-of-the-art enhancement algorithms, especially in poor visibility conditions. We proposed the following tasks: 1) investigate the effects of applying the MSR to LWIR and SWIR images, which consisted of optimizing the algorithm in terms of surround scales and weights for these spectral bands; 2) fuse the LWIR and SWIR images with the VIS images within the MSR framework to determine the best possible representation of the desired features; 3) evaluate different mixes of LWIR, SWIR, and VIS bands for maximum fog and haze reduction and low-light-level compensation; 4) modify the existing algorithms to work with video sequences. Over the course of the 3-year research period, we were able to accomplish these tasks and report on them at various internal presentations at NASA Langley Research Center, and in presentations and publications elsewhere. A description of the work performed under the tasks is provided in Section 2.
The complete list of relevant publications during the research periods is provided in Section 5. This research also resulted in the generation of intellectual property.
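    For reference, the single-channel Multiscale Retinex at the heart of this work computes a weighted sum, over several surround scales, of the difference between the log image and the log of its Gaussian-blurred surround. A minimal sketch; the FFT-based circular blur and the example scales are our assumptions, not the report's parameters:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian surround via FFT (circular boundary) -- adequate for a sketch."""
    h, w = img.shape
    y = np.minimum(np.arange(h), h - np.arange(h))
    x = np.minimum(np.arange(w), w - np.arange(w))
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g)))

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """MSR: weighted sum of log(image) - log(surround) over several scales."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    img = img.astype(float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for w_s, s in zip(weights, sigmas):
        out += w_s * (np.log(img) - np.log(gaussian_blur(img, s) + 1e-12))
    return out
```

Task 1 above amounts to tuning `sigmas` and `weights` for the LWIR and SWIR bands rather than the visible-band defaults.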

  18. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    NASA Astrophysics Data System (ADS)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual framework for colorizing gray-scale images, and on this framework we build a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorization effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improves the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, and colorizing gray fusion images and infrared images.
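    The Laplacian-pyramid detail enhancement mentioned above amplifies band-pass (detail) layers before reconstruction. A minimal sketch, using crude block-average downsampling and nearest-neighbour upsampling rather than the paper's filters:

```python
import numpy as np

def down2(img):
    """2x downsample by 2x2 block averaging (crude low-pass)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def up2(img, shape):
    """Nearest-neighbour 2x upsample back to `shape`."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def enhance_details(img, levels=3, gain=1.5):
    """Boost the Laplacian (detail) bands by `gain`, then reconstruct."""
    g = [img.astype(float)]
    for _ in range(levels):
        g.append(down2(g[-1]))
    lap = [g[i] - up2(g[i + 1], g[i].shape) for i in range(levels)]
    out = g[-1]
    for i in reversed(range(levels)):
        out = up2(out, g[i].shape) + gain * lap[i]
    return out
```

With `gain=1.0` the pyramid reconstructs the input exactly; `gain > 1` exaggerates the detail bands.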

  19. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE

    PubMed Central

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation decreases. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
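    The MAP formulation can be illustrated with a toy binary version of STAPLE, where the Beta(α, β) prior enters the M-step as pseudo-counts on each expert's sensitivity and specificity. This sketch is ours; it omits the multi-label machinery, partial delineations, and spatially varying priors of the published algorithm:

```python
import numpy as np

def map_staple(D, prior=0.5, alpha=2.0, beta=2.0, iters=50):
    """Binary MAP-STAPLE sketch.

    D: (n_experts, n_voxels) binary segmentations.
    A Beta(alpha, beta) prior regularizes each expert's sensitivity p
    and specificity q (the MAP twist on classic maximum-likelihood STAPLE).
    Returns (W, p, q): soft consensus and per-expert parameters.
    """
    J, N = D.shape
    p = np.full(J, 0.9)   # sensitivity estimates
    q = np.full(J, 0.9)   # specificity estimates
    for _ in range(iters):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: MAP update -- the beta prior acts as pseudo-counts
        p = (D @ W + alpha - 1) / (W.sum() + alpha + beta - 2)
        q = ((1 - D) @ (1 - W) + alpha - 1) / ((1 - W).sum() + alpha + beta - 2)
    return W, p, q
```

With `alpha = beta = 1` the update reduces to ordinary ML STAPLE; larger values pull the performance parameters toward 0.5 and stabilize the EM against degenerate optima.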

  20. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE.

    PubMed

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation decreases. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE.

  1. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on board the aircraft. With proper processing, the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. Achieving this goal, however, requires several different stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for real-time performance. We use a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
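    The registration-plus-fusion pipeline described here reduces, in outline, to an affine warp followed by a per-pixel weighted sum. A minimal sketch; nearest-neighbour resampling is our simplification, and the real-time DSP implementation is of course quite different:

```python
import numpy as np

def affine_warp(img, A, t):
    """Warp `img` by x' = A x + t using inverse nearest-neighbour mapping."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()])            # output coordinates
    src = np.linalg.inv(A) @ (pts - np.asarray(t, float)[:, None])
    sy, sx = np.rint(src).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros(h * w)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(h, w)

def fuse_weighted(imgs, weights):
    """EVS-style fusion step: per-pixel weighted sum of registered images."""
    weights = np.asarray(weights, float)
    weights /= weights.sum()
    return sum(w * im for w, im in zip(weights, imgs))
```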

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Kalpagam; Liu, Jeff; Kohli, Kirpal

    Purpose: Fusion of electrical impedance tomography (EIT) with computed tomography (CT) can be useful as a clinical tool for providing additional physiological information about tissues, but requires suitable fusion algorithms and validation procedures. This work explores the feasibility of fusing EIT and CT images using an algorithm for coregistration. The imaging performance is validated through feature space assessment on phantom contrast targets. Methods: EIT data were acquired by scanning a phantom using a circuit configured for injecting current through 16 electrodes placed around the phantom. A conductivity image of the phantom was obtained from the data using electrical impedance and diffuse optical tomography reconstruction software (EIDORS). A CT image of the phantom was also acquired. The EIT and CT images were fused using a region of interest (ROI) coregistration fusion algorithm. Phantom imaging experiments were carried out on objects of different contrasts, sizes, and positions. The conductive medium of the phantoms was made of a tissue-mimicking bolus material that is routinely used in clinical radiation therapy settings. To validate the imaging performance in detecting different contrasts, the ROI of the phantom was filled with distilled water and normal saline. Spatially separated cylindrical objects of different sizes were used for validating the imaging performance in multiple target detection. Analyses of the CT, EIT and the EIT/CT phantom images were carried out based on the variations of contrast, correlation, energy, and homogeneity, using a gray level co-occurrence matrix (GLCM). A reference image of the phantom was simulated using EIDORS, and the performances of the CT and EIT imaging systems were evaluated and compared against the performance of the EIT/CT system using various feature metrics, detectability, and structural similarity index measures.
Results: In detecting distilled and normal saline water in bolus medium, EIT as a stand-alone imaging system showed contrast discrimination of 47%, while the CT imaging system showed a discrimination of only 1.5%. The structural similarity index measure showed a drop of 24% with EIT imaging compared to CT imaging. The average detectability measure for CT imaging was found to be 2.375 ± 0.19 before fusion. After complementing with EIT information, the detectability measure increased to 11.06 ± 2.04. Based on the feature metrics, the functional imaging quality of CT and EIT were found to be 2.29% and 86%, respectively, before fusion. Structural imaging quality was found to be 66% for CT and 16% for EIT. After fusion, functional imaging quality improved in CT imaging from 2.29% to 42% and the structural imaging quality of EIT imaging changed from 16% to 66%. The improvement in image quality was also observed in detecting objects of different sizes. Conclusions: The authors found a significant improvement in the contrast detectability performance of CT imaging when complemented with functional imaging information from EIT. Along with the feature assessment metrics, the concept of complementing CT with EIT imaging can lead to an EIT/CT imaging modality which might fully utilize the functional imaging abilities of EIT imaging, thereby enhancing the quality of care in the areas of cancer diagnosis and radiotherapy treatment planning.
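    The four GLCM texture features used in the assessment (contrast, correlation, energy, homogeneity) can be computed as follows. The offset, symmetrization, and quantization choices here are our assumptions for illustration; inputs are assumed scaled to [0, 1):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix (offset (0,1), symmetric, normalized)
    and four Haralick-style features."""
    q = np.minimum((img * levels).astype(int), levels - 1)   # quantize
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # count pairs
    P = P + P.T                                              # symmetrize
    P /= P.sum()
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12),
        "energy": (P ** 2).sum(),
        "homogeneity": (P / (1 + np.abs(i - j))).sum(),
    }
```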

  3. Fourier domain image fusion for differential X-ray phase-contrast breast imaging.

    PubMed

    Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-04-01

    X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize the relevant diagnostic features present in the X-ray attenuation, phase-shift, and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method presents complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details, and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that all the relevant diagnostic features contained in the XPC images were present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
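    The core idea of a Fourier-domain fusion, low frequencies from the attenuation signal to preserve the conventional X-ray appearance and high frequencies blended from the phase and scattering signals, can be sketched as below. The actual algorithm's frequency weighting (including its noise minimization) is more elaborate than this hard band split, and the equal blend of the two high-frequency channels is our assumption:

```python
import numpy as np

def fourier_fusion(attenuation, phase, dark, cutoff=0.1):
    """Sketch: low spatial frequencies from attenuation, high frequencies
    averaged from the differential-phase and dark-field (scatter) signals."""
    h, w = attenuation.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = (np.sqrt(fy ** 2 + fx ** 2) <= cutoff).astype(float)
    F_att, F_ph, F_df = (np.fft.fft2(x) for x in (attenuation, phase, dark))
    fused = lowpass * F_att + (1 - lowpass) * 0.5 * (F_ph + F_df)
    return np.real(np.fft.ifft2(fused))
```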

  4. Time-resolved wide-field optically sectioned fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dupuis, Guillaume; Benabdallah, Nadia; Chopinaud, Aurélien; Mayet, Céline; Lévêque-Fort, Sandrine

    2013-02-01

    We present the implementation of a fast wide-field optical sectioning technique called HiLo microscopy on a fluorescence lifetime imaging microscope. HiLo microscopy is based on the fusion of two images, one with structured illumination and another with uniform illumination. Optically sectioned images are then digitally generated thanks to a fusion algorithm. HiLo images are comparable in quality with confocal images but they can be acquired faster over larger fields of view. We obtain 4D imaging by combining HiLo optical sectioning, time-gated detection, and z-displacement. We characterize the performances of this set-up in terms of 3D spatial resolution and time-resolved capabilities in both fixed- and live-cell imaging modes.
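    In outline, the HiLo fusion takes the high spatial frequencies (which are inherently sectioned) from the uniform-illumination image and the low frequencies from a sectioned estimate derived from the structured-illumination image. The following is a heavily simplified sketch of that fusion step; the local-contrast weighting used here is our stand-in for the published demodulation procedure:

```python
import numpy as np

def lowpass(img, cutoff):
    """Circular low-pass filter via FFT."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.sqrt(fy ** 2 + fx ** 2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def hilo(uniform, structured, cutoff=0.15, eta=1.0):
    """HiLo sketch: Lo = sectioned low-frequency estimate from the
    structured image; Hi = high-pass of the uniform image."""
    diff = np.abs(structured - uniform)       # crude in-focus contrast map
    lo = lowpass(diff * uniform, cutoff)
    hi = uniform - lowpass(uniform, cutoff)
    return eta * lo + hi
```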

  5. Thermal and range fusion for a planetary rover

    NASA Technical Reports Server (NTRS)

    Caillas, Claude

    1992-01-01

    This paper describes how fusion between thermal and range imaging allows us to discriminate different types of materials in outdoor scenes. First, we analyze how pure vision segmentation algorithms applied to thermal images allow discriminating materials such as rock and sand. Second, we show how combining thermal and range information allows us to better discriminate rocks from sand. Third, as an application, we examine how an autonomous legged robot can use these techniques to explore other planets.

  6. Handling Different Spatial Resolutions in Image Fusion by Multivariate Curve Resolution-Alternating Least Squares for Incomplete Image Multisets.

    PubMed

    Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna

    2018-06-05

    Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of the different sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest spatial resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution and ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired with the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.

  7. SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.

    PubMed

    Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou

    2015-11-01

    In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former is to preserve accurate spectral information of the Ms image, while the latter is to keep sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, accomplishing a linear computational complexity in the size of the output image in each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.

  8. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods

    PubMed Central

    Toet, Alexander; Hogervorst, Maarten A.; Pinkus, Alan R.

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance. PMID:28036328

  9. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    PubMed

    Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.

  10. Red to far-red multispectral fluorescence image fusion for detection of fecal contamination on apples

    USDA-ARS?s Scientific Manuscript database

    This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...

  11. Comparison and evaluation on image fusion methods for GaoFen-1 imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ningyu; Zhao, Junqing; Zhang, Ling

    2016-10-01

    Much research has focused on finding the best fusion method for satellite images from SPOT, QuickBird, Landsat, and similar platforms, but few studies discuss its application to GaoFen-1 satellite images. This paper compares four fusion methods, principal component analysis (PCA) transform, Brovey transform, hue-saturation-value (HSV) transform, and Gram-Schmidt transform, from the perspective of preserving the original spectral information. The experimental results show that images produced by the four fusion methods not only retain the high spatial resolution of the panchromatic band but also keep abundant spectral information. In the comparison and evaluation, the Brovey transform integrates information well, but its color fidelity is not the best. The HSV transform shows the largest brightness and color distortion. The PCA transform performs well in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, renders vegetation edges most distinctly, and yields a sharper fused image than PCA. The Brovey transform is suitable for distinguishing vegetation from non-vegetation areas, and the Gram-Schmidt transform is the most appropriate for GaoFen-1 satellite images overall. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and the information required.
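    Of the four methods compared, the Brovey transform is the simplest to state: each multispectral band is scaled by the ratio of the panchromatic band to the multispectral intensity (the band mean). A minimal sketch, assuming the multispectral bands are already upsampled to the panchromatic grid:

```python
import numpy as np

def brovey(ms, pan):
    """Brovey pan-sharpening.

    ms:  (bands, h, w) multispectral image, upsampled to the pan grid.
    pan: (h, w) panchromatic band.
    Each band is multiplied by pan / intensity, injecting the pan band's
    spatial detail while keeping the band ratios (hence the spectral shape).
    """
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + 1e-12))
```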

  12. Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    NASA Astrophysics Data System (ADS)

    David, S.; Visvikis, D.; Roux, C.; Hatt, M.

    2011-09-01

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion; on clinical datasets, it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.

  13. Multisensor data fusion for IED threat detection

    NASA Astrophysics Data System (ADS)

    Mees, Wim; Heremans, Roel

    2012-10-01

    In this paper we present the multi-sensor registration and fusion algorithms that were developed for a force protection research project in order to detect threats against military patrol vehicles. The fusion is performed at object level, using a hierarchical evidence aggregation approach. It first applies expert domain knowledge about the features used to characterize the detected threats, implemented in the form of a fuzzy expert system. The next level fuses intra-sensor and inter-sensor information, using an ordered weighted averaging operator. Object-level fusion between candidate threats that are detected asynchronously on a moving vehicle by sensors with different imaging geometries requires an accurate sensor-to-world coordinate transformation; this image registration is also discussed in this paper.
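    An ordered weighted averaging (OWA) operator, as used at the intra-/inter-sensor fusion level, attaches weights to rank positions rather than to particular sources, so one weight vector can interpolate between min, mean, and max behaviour. A minimal sketch (the project's actual weight choices are not given in the abstract):

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted averaging: sort scores descending, then take the
    dot product with rank-position weights (which must sum to 1)."""
    scores = np.sort(np.asarray(scores, float))[::-1]
    weights = np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)
    return float(scores @ weights)
```

Weight vector (1, 0, …) recovers the max (optimistic fusion), (…, 0, 1) the min (conservative fusion), and uniform weights the plain mean.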

  14. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
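    The local maximum pooling step can be sketched as follows: for each dictionary atom, only the strongest activation within each spatial neighbourhood is kept, which makes the resulting representation tolerant to small translations. The array layout and pool size here are our assumptions, not the patent's:

```python
import numpy as np

def local_max_pool(sparse_codes, k=2):
    """Max-pool each atom's activation map over non-overlapping k x k
    neighbourhoods: (atoms, h, w) -> (atoms, h//k, w//k)."""
    c, h, w = sparse_codes.shape
    h2, w2 = h // k, w // k
    trimmed = sparse_codes[:, :h2 * k, :w2 * k]
    return trimmed.reshape(c, h2, k, w2, k).max(axis=(2, 4))
```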

  15. Registration of a Dynamic Multimodal Target Image Test Set for the Evaluation of Image Fusion Techniques

    DTIC Science & Technology

    2013-10-17

    imagery. 2. Report describing the registration algorithms and parameters. 3.2 Deliverables from Phase 2 (current phase, ongoing) 1. Selected and...representation. To be submitted to Information Fusion [Impact factor 2.262]. 2. De Jong, M., Toet, A., Hogervorst, M.A., Hooge, I., Pinkus, A. R. (in...Impact factor 3.376]. 3. Koenderink, J.J., van Doorn, A., De Jong, M., Toet, A., Hogervorst, M.A., Hooge, I., Pinkus, A. R. (in preparation

  16. Target detection method by airborne and spaceborne images fusion based on past images

    NASA Astrophysics Data System (ADS)

    Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng

    2017-11-01

    To address the low utilization of past remote sensing data for a target area, and the difficulty of accurately recognizing camouflaged targets, a target detection method based on fusing airborne and spaceborne images with past imagery is proposed in this paper. A past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne/spaceborne image registration, target change feature extraction, background noise suppression, and artificial target feature extraction based on real-time aerial optical remote sensing imagery. Finally, a support vector machine is used to detect and recognize targets from the fused feature data. The experimental results establish that the proposed method combines the target-area change features of airborne and spaceborne remote sensing images with a target detection algorithm, and achieves good detection and recognition performance on both camouflaged and non-camouflaged targets.

  17. Improving the recognition of fingerprint biometric system using enhanced image fusion

    NASA Astrophysics Data System (ADS)

    Alsharif, Salim; El-Saba, Aed; Stripathi, Reshma

    2010-04-01

    Fingerprint recognition systems have been widely used by financial institutions, law enforcement, border control, and visa issuing, to mention just a few. Biometric identifiers can be counterfeited, but they are considered more reliable and secure than traditional ID cards or personal passwords. Fingerprint pattern fusion improves the performance of a fingerprint recognition system in terms of accuracy and security. This paper presents digital enhancement and fusion approaches that improve the biometric performance of the fingerprint recognition system. It is a two-step approach. In the first step, raw fingerprint images are enhanced using high-frequency-emphasis filtering (HFEF). The second step is a simple linear fusion between the raw images and the HFEF ones. It is shown that the proposed approach increases the verification and identification performance of the fingerprint biometric recognition system, where any improvement is justified using the correlation performance metrics of the matching algorithm.
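    The two-step approach can be sketched as follows. A high-frequency-emphasis filter keeps a fraction of the low-frequency base (offset `a`) while boosting a high-pass band (gain `b`); the enhanced image is then linearly blended with the raw one. The Gaussian-shaped high-pass, the offset/gain values, and the equal fusion weight are our illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def hfef(img, a=0.5, b=1.5, cutoff=0.08):
    """High-frequency-emphasis filtering in the Fourier domain:
    H(u, v) = a + b * highpass(u, v)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    d2 = fy ** 2 + fx ** 2
    hp = 1 - np.exp(-d2 / (2 * cutoff ** 2))   # Gaussian high-pass
    H = a + b * hp
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def fuse(raw, enhanced, alpha=0.5):
    """Second step: simple linear fusion of raw and enhanced images."""
    return alpha * raw + (1 - alpha) * enhanced

# usage: fused = fuse(raw_print, hfef(raw_print))
```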

  18. The pre-image problem for Laplacian Eigenmaps utilizing L1 regularization with applications to data fusion

    NASA Astrophysics Data System (ADS)

    Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy

    2017-07-01

    As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. The existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of the training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.

  19. Assessment of spatiotemporal fusion algorithms for Planet and Worldview images

    USDA-ARS's Scientific Manuscript database

    Although Worldview (WV) images (non-pansharpened) have 2-meter resolution, the re-visit times for the same areas may be 7 days or more. In contrast, Planet images using small satellites can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It will be ideal to f...

  20. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio, and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary pattern) and SWLD (simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The approach is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
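The LBP step can be sketched as below. This is the generic 8-neighbour local binary pattern on a 3x3 window, not necessarily the exact LBP/SWLD variant used in the paper:

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour local binary pattern: each pixel gets an 8-bit
    code, one bit per neighbour, set when the neighbour is >= the center.
    A generic sketch, not the paper's exact descriptor."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbours, clockwise from top-left; each contributes one bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes
```

Histograms of these codes over image blocks would then serve as the local feature vectors to compare.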

  1. Approach to interpret images produced by new generations of multidetector CT scanners in post-operative spine.

    PubMed

    Zeitoun, Rania; Hussein, Manar

    2017-11-01

    To reach a practical approach for interpreting MDCT findings in post-operative spine cases and to correct the false belief that CT fails in the presence of instrumentation because of related artefacts. We performed an observational retrospective analysis of premier, early and late MDCT scans in 68 post-operative spine patients, with emphasis on instrument-related complications and osseous fusion status. We used a grading system for assessment of osseous fusion in 35 patients and further analysed the findings in failure of fusion, grade (D). We observed a variety of instrument-related complications (mostly screws medially penetrating the pedicle) and assessed osseous fusion status on late scans. We graded 11 interbody and 14 posterolateral levels as osseous fusion failure, showing additional instrument-related complications, end-plate erosive changes, adjacent-segment spondylosis and malalignment. Modern MDCT scanners provide high-quality images and are strongly recommended for assessment of instrumentation and of osseous fusion status. In post-operative imaging of the spine, it is essential to know what to look for in relation to the date of surgery. Advances in knowledge: Modern MDCT scanners allow assessment of instrument position and integrity and of osseous fusion status in the post-operative spine. We propose a helpful algorithm to simplify the interpretation of post-operative spine imaging.

  2. A framework for small infrared target real-time visual enhancement

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin

    2015-03-01

    This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation to detect the target accurately and enhance its intensity notably. In the noise suppression, the target region is weighted by a Gaussian mask in accordance with the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small infrared target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially in images where the target is barely visible.
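The Gaussian-mask weighting and smooth fusion steps might be sketched roughly as below. The mask width, the blending rule, and the assumption that the mask itself serves as the fusion weight are illustrative simplifications of the paper's scheme:

```python
import numpy as np

def gaussian_mask(h, w, sigma):
    """2-D Gaussian weighting mask centred on the (assumed) target region."""
    y = np.arange(h) - (h - 1) / 2
    x = np.arange(w) - (w - 1) / 2
    return np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))

def fuse_target_region(background, enhanced, sigma=3.0):
    """Blend the enhanced target patch into the unprocessed background,
    using the Gaussian mask as the fusion weight so the transition at the
    patch border is smooth (illustrative sketch)."""
    w = gaussian_mask(*enhanced.shape, sigma)
    return w * enhanced + (1 - w) * background
```

At the patch centre the enhanced target dominates; toward the borders the weight decays, so the result blends seamlessly into the background.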

  3. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereolithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, of great benefit to virtual face modeling.

  4. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential step in image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy than a state-of-the-art algorithm.
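A toy version of a GS-regularized cost could look like the following, taking mean gradient magnitude as the gradient strength. The squared-residual data term, the regularization weight, and this particular GS definition are assumptions for illustration; the paper's exact likelihood is not reproduced here:

```python
import numpy as np

def gradient_strength(img):
    """Mean gradient magnitude of an image (one plausible GS definition)."""
    gy, gx = np.gradient(img)
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))

def cost(fused, sources, target_gs, lam=1.0):
    """Illustrative ML-style cost: squared residuals against the (already
    registered) source images, plus a penalty pulling the fused image's
    gradient strength toward the user-set target value."""
    data = sum(np.mean((fused - s) ** 2) for s in sources)
    reg = lam * (gradient_strength(fused) - target_gs) ** 2
    return data + reg
```

Raising `target_gs` favours sharper fused estimates; lowering it favours smoother, noise-suppressing ones, matching the behaviour the abstract describes.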

  5. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.

  6. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies for sea target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality, and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images with island removal. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture, and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background near the coastline. Based on this multi-Gaussian sea background model, sea and land pixels near the coastline are classified more precisely. Finally, the coarse and fine segmentation results are fused to obtain an accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results show that the proposed method has high segmentation accuracy, wide applicability, and strong anti-disturbance ability.
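The background-model classification idea can be illustrated with a single multivariate Gaussian and a Mahalanobis-distance test. The paper uses a multi-Gaussian (mixture) model and its own features and thresholds; the single component, the threshold value, and the feature layout below are assumptions for illustration:

```python
import numpy as np

def fit_gaussian(features):
    """Fit a multivariate Gaussian to sea-background feature vectors
    (rows: samples; columns: local entropy, texture, gradient mean)."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def mahalanobis2(x, mu, cov):
    """Squared Mahalanobis distance of one feature vector to the model."""
    d = x - mu
    return float(d @ np.linalg.inv(cov) @ d)

def classify(x, mu, cov, threshold=9.0):
    """Label a pixel 'sea' when its feature vector is close to the sea
    background model; the threshold here is illustrative."""
    return "sea" if mahalanobis2(x, mu, cov) < threshold else "land"
```

Pixels whose features fall far from the fitted sea distribution are treated as land, refining the coarse coastline-based segmentation.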

  7. Determination of feature generation methods for PTZ camera object tracking

    NASA Astrophysics Data System (ADS)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images in which the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
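Of the compared methods, background subtraction is the simplest to sketch. Below is a single-Gaussian-per-pixel running model, a deliberately simplified stand-in for the Mixture-of-Gaussians subtractor the paper evaluates (the learning rate, initial variance, and threshold multiplier are illustrative):

```python
import numpy as np

class RunningGaussianBackground:
    """One Gaussian per pixel: a pixel is foreground when it deviates from
    the running mean by more than k standard deviations. A simplified
    stand-in for MoG background subtraction."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 25.0)  # illustrative initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var          # foreground mask
        # update the model only where the pixel matched the background
        a = self.alpha * (~fg)
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return fg
```

A full MoG subtractor keeps several Gaussians per pixel and reorders them by weight, which handles multi-modal backgrounds (e.g. swaying foliage) that this single-mode sketch cannot.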

  8. Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation

    PubMed Central

    Wang, Hongzhi; Yushkevich, Paul A.

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
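The baseline that joint label fusion improves on, weighted voting with independent per-atlas weights derived from atlas-target intensity similarity, can be sketched as follows. The Gaussian similarity kernel and its width are illustrative choices, not the implementation's actual weighting:

```python
import numpy as np

def weighted_voting(atlas_labels, atlas_intensities, target, sigma=10.0):
    """Spatially varying weighted voting: at each voxel, every atlas votes
    for its transferred label with a weight that grows as its (registered)
    intensity matches the target image. Weights are computed independently
    per atlas, i.e. without modelling correlated atlas errors."""
    votes = {}
    for labels, intens in zip(atlas_labels, atlas_intensities):
        w = np.exp(-((intens - target) ** 2) / (2 * sigma ** 2))
        for lab in np.unique(labels):
            votes[lab] = votes.get(lab, 0) + w * (labels == lab)
    labs = sorted(votes)
    stacked = np.stack([votes[l] for l in labs])
    return np.array(labs)[np.argmax(stacked, axis=0)]
```

Joint label fusion replaces these independent weights with weights chosen jointly across atlases, so that atlases likely to repeat the same mistake do not outvote a correct minority.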

  9. Spatial Distortion in MRI-Guided Stereotactic Procedures: Evaluation in 1.5-, 3- and 7-Tesla MRI Scanners.

    PubMed

    Neumann, Jan-Oliver; Giese, Henrik; Biller, Armin; Nagel, Armin M; Kiening, Karl

    2015-01-01

    Magnetic resonance imaging (MRI) is replacing computed tomography (CT) as the main imaging modality for stereotactic transformations. MRI is prone to spatial distortion artifacts, which can lead to inaccuracy in stereotactic procedures. Modern MRI systems provide distortion correction algorithms that may ameliorate this problem. This study investigates the different options of distortion correction using standard 1.5-, 3- and 7-tesla MRI scanners. A phantom was mounted on a stereotactic frame. One CT scan and three MRI scans were performed. At all three field strengths, two 3-dimensional sequences, volumetric interpolated breath-hold examination (VIBE) and magnetization-prepared rapid acquisition with gradient echo, were acquired, and automatic distortion correction was performed. Global stereotactic transformation of all 13 datasets was performed and two stereotactic planning workflows (MRI only vs. CT/MR image fusion) were subsequently analysed. Distortion correction on the 1.5- and 3-tesla scanners caused a considerable reduction in positional error. The effect was more pronounced when using the VIBE sequences. By using co-registration (CT/MR image fusion), even a lower positional error could be obtained. In ultra-high-field (7 T) MR imaging, distortion correction introduced even higher errors. However, the accuracy of non-corrected 7-tesla sequences was comparable to CT/MR image fusion 3-tesla imaging. MRI distortion correction algorithms can reduce positional errors by up to 60%. For stereotactic applications of utmost precision, we recommend a co-registration to an additional CT dataset. © 2015 S. Karger AG, Basel.

  10. Recognition of pornographic web pages by classifying texts and images.

    PubMed

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, object contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, Bayes' theorem is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
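The fusion step can be illustrated with the standard Bayesian product rule for two conditionally independent classifier posteriors under equal priors. This is a common textbook form; the paper's exact formulation may differ:

```python
def bayes_fusion(p_img, p_txt):
    """Fuse the image and discrete-text classifiers' posterior probabilities
    that a page is pornographic, assuming conditional independence of the
    two evidence sources and equal class priors."""
    joint = p_img * p_txt
    return joint / (joint + (1 - p_img) * (1 - p_txt))
```

When one classifier is uninformative (probability 0.5) the fused result simply equals the other's output, and two confident agreeing classifiers reinforce each other.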

  11. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    PubMed

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-07-23

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.

  12. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    PubMed Central

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932
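The genetic search over fusion weights can be illustrated with a toy real-valued GA optimizing a single weight w for a fused score of the form w·visible + (1−w)·thermal. The paper's GA searches much richer, per-region weightings; the operators (averaging crossover, Gaussian mutation) and all parameters below are illustrative:

```python
import random

def ga_maximize(fitness, pop_size=20, generations=40, mut=0.1, seed=0):
    """Minimal real-valued genetic algorithm over one fusion weight
    w in [0, 1]: keep the fitter half each generation, breed children by
    averaging two parents and adding Gaussian jitter. A toy stand-in for
    the paper's GA."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = min(1.0, max(0.0, (a + b) / 2 + rng.gauss(0, mut)))
            children.append(w)
        pop = parents + children
    return max(pop, key=fitness)
```

In practice the fitness would be the recognition rate of the fused system on a labelled training set, rather than the analytic function used in the test below.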

  13. Study on Hybrid Image Search Technology Based on Texts and Contents

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.

    2018-05-01

    Text-based and content-based image search were first studied separately. A text-based image feature extraction method integrating statistical and topic features was put forward, in view of the limitation of extracting keywords from statistical word features alone. A search-by-image method based on multi-feature fusion was then put forward, in view of the imprecision of content-based image search using a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method was proposed that relies primarily on text-based image search, supplemented by content-based search. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.

  14. PET/CT image registration: preliminary tests for its application to clinical dosimetry in radiotherapy.

    PubMed

    Baños-Capilla, M C; García, M A; Bea, J; Pla, C; Larrea, L; López, E

    2007-06-01

    The quality of dosimetry in radiotherapy treatment requires the accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, the use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use to obtain images to delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers, which were used as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms. We first studied the fusion method used by the PET/CT system by GEMS (hardware fusion), on the basis that there is satisfactory coincidence between the reconstruction centers in the CT and PET systems, and secondly the fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with the verification of the centroid position of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to a center-of-mass, weighted by the value of the CT number and the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δx| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δx| ± σ = 0.3 mm ± 1.7 mm. We also noted that the differences found for each of the fusion methods were similar for both the axial and helical CT image acquisition protocols.
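The centroid estimation described above, a center-of-mass weighted by voxel value (CT number on one modality, PET uptake on the other), can be sketched as:

```python
import numpy as np

def weighted_centroid(volume):
    """Center-of-mass style centroid of an image or volume, weighted by
    voxel value, for comparing component positions across modalities."""
    idx = np.indices(volume.shape).reshape(volume.ndim, -1)
    w = volume.reshape(-1).astype(float)
    return (idx * w).sum(axis=1) / w.sum()
```

Computing this on the same phantom component in the CT and PET volumes, after registration, gives the positional deviations reported in the abstract.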

  15. Assessment of Cropping System Diversity in the Fergana Valley Through Image Fusion of Landsat 8 and SENTINEL-1

    NASA Astrophysics Data System (ADS)

    Dimov, D.; Kuhn, J.; Conrad, C.

    2016-06-01

    In the transitioning agricultural societies of the world, food security is an essential element of livelihood and economic development, with the agricultural sector very often being the major employer and income source. Rapid population growth, urbanization, pollution, desertification, soil degradation and climate change pose a variety of threats to sustainable agricultural development and can be expressed as agricultural vulnerability components. Diverse cropping patterns may help adapt agricultural systems to those hazards by increasing potential yield and resilience to water scarcity. Thus, quantifying crop diversity with indices such as the Simpson Index of Diversity (SID), e.g. from freely available remote sensing data, becomes a very important issue. This, however, requires accurate land use classifications. In this study, the focus is set on the cropping system diversity of garden plots, summer crop fields and orchard plots, which are the prevalent agricultural systems in the test area of the Fergana Valley in Uzbekistan. In order to improve the accuracy of land use classification algorithms with low or medium resolution data, a novel processing chain based on the hitherto unique fusion of optical and SAR data from the Landsat 8 and Sentinel-1 platforms is proposed. The combination of both sensors is intended to enhance the objects' textural and spectral signatures rather than just the spatial context through pansharpening. It could be concluded that the Ehlers fusion algorithm gave the most suitable results. Based on the derived image fusion, different object-based image classification algorithms such as SVM, Naïve Bayes and Random Forest were evaluated, of which the latter achieved the highest classification accuracy. Subsequently, the SID was applied to measure the diversification of the three main cropping systems.
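The SID itself is straightforward to compute from the classified crop-class proportions, SID = 1 − Σ p_i², where p_i is the share of the area covered by class i:

```python
def simpson_index(areas):
    """Simpson Index of Diversity from per-class areas (or pixel counts):
    SID = 1 - sum(p_i^2). 0 means a single class; values near 1 mean many
    evenly distributed classes."""
    total = sum(areas)
    return 1.0 - sum((a / total) ** 2 for a in areas)
```

Applied per cropping system (garden plots, summer crop fields, orchards) over the classified map, the index quantifies how diversified each system is.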

  16. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face to the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach in which multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores of the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.

  17. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two aims (smoothing while keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors have regularized the original PM algorithm to overcome its drawbacks. We compared the denoising performance of such 'fundamental' AD algorithms with one of the most powerful multiresolution tools available today, namely wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. In addition to visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and the universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm (AD) perform comparably with wavelet shrinkage denoising, which saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: optical coherence tomography (OCT).
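The original PM scheme that these 'fundamental' variants center on can be sketched as below, using PM's exponential conductance function g(|∇I|) = exp(−(|∇I|/κ)²). The iteration count, κ, and step size are illustrative; note the sketch uses periodic boundaries via `np.roll` for brevity:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=15.0, step=0.2):
    """Original Perona-Malik anisotropic diffusion: diffuse strongly where
    gradients are small (inside regions) and weakly across large gradients
    (edges). Parameters are illustrative; step <= 0.25 for stability."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # PM exponential conductance
    for _ in range(n_iter):
        # nearest-neighbour differences in the four directions
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because g decays with gradient magnitude, low-contrast noise is smoothed aggressively while strong edges diffuse very little, which is exactly the intra-region-over-inter-region behaviour the abstract describes.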

  18. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    PubMed

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup remain very close to linear as the number of RS images increases, which proves that image retrieval using our method is efficient.
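The core mean-shift iteration underlying the method can be sketched in one dimension with a flat kernel. The paper distributes this computation with MapReduce and seeds it with the canopy algorithm; the sketch below shows only the basic mode-seeking step:

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=1.0, n_iter=50):
    """Plain mean-shift with a flat kernel: repeatedly move the estimate
    to the mean of the points within one bandwidth, converging to a local
    density mode. (The Cloud version parallelizes this over HDFS data.)"""
    x = float(start)
    for _ in range(n_iter):
        window = points[np.abs(points - x) <= bandwidth]
        if window.size == 0:
            break
        x = window.mean()
    return x
```

In the retrieval setting the points would be image feature vectors, and the modes found define the clusters against which queries are matched.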

  19. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    PubMed Central

    Song, Wei; Mei, Haibin

    2017-01-01

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving them for ocean disaster analysis, such as storm surge and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method using the pyramid model is proposed based on the maximum hierarchical layer algorithm and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better data storage performance than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear as the number of RS images increases, which shows that image retrieval using our method is efficient. PMID:28737699

  20. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. In view of the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components synchronously from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding red and blue components, the system reduces the loss of the chrominance components and keeps the picture's color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data processed during video fusion, shortening the fusion time and improving the viewing effect. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and provide a pleasant experience to audiences wearing red-blue glasses.
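    The component-extraction step described (R from one camera, G and B from the other) amounts to a channel splice. A NumPy sketch of that fusion rule, leaving out the DM642-specific YCbCr conversion and brightness enhancement:

```python
import numpy as np

def redblue_anaglyph(left_rgb, right_rgb):
    """Fuse two synchronized RGB frames into a red-blue (anaglyph) 3D
    frame: the red channel comes from the left view, green and blue from
    the right view. Arrays are (H, W, 3)."""
    fused = np.empty_like(left_rgb)
    fused[..., 0] = left_rgb[..., 0]     # R from left camera
    fused[..., 1:] = right_rgb[..., 1:]  # G, B from right camera
    return fused
```

Viewed through red-blue glasses, each eye then receives the view from its corresponding camera.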

  1. Fusion of imaging and nonimaging data for surveillance aircraft

    NASA Astrophysics Data System (ADS)

    Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre

    1997-06-01

    This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).

  2. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    PubMed Central

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-01

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances in the image background. The bottleneck to robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by morphology operations to reduce a small amount of noise. In the detection tests, 93% of target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
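    The two feature images can be computed with the standard colour-space formulas. A sketch assuming sRGB input in [0, 1] with a D65 white point; the paper's exact preprocessing may differ.

```python
import numpy as np

def yiq_i_component(rgb):
    """I (in-phase) channel of the YIQ transform (NTSC coefficients)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.596 * r - 0.274 * g - 0.322 * b

def lab_a_component(rgb):
    """a* channel of CIE L*a*b* for sRGB input in [0, 1], D65 white.
    a* is positive for reddish pixels, negative for greenish ones,
    which is why it separates tomatoes from foliage."""
    # sRGB gamma -> linear RGB
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ, then normalise by the D65 white point
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # CIE f() with its linear branch near zero
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return 500.0 * (f[..., 0] - f[..., 1])  # a* = 500 (f(X/Xn) - f(Y/Yn))
```

Red pixels score high in both channels, so fusing the two maps reinforces the fruit regions.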

  3. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information and feature point detection, aiming to solve the problems of traditional image stitching algorithms such as a time-consuming feature point extraction process, an overload of redundant invalid information, and general inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moment as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel to improve the initial matching efficiency and obtain fewer mismatched points, after which the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
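    Replacing the Euclidean distance with the Hellinger kernel for descriptor comparison is the idea popularised as "RootSIFT": compare the element-wise square roots of L1-normalised descriptors. A minimal sketch for non-negative descriptors (illustrative, not the paper's code):

```python
import numpy as np

def hellinger_similarity(d1, d2):
    """Hellinger kernel between two non-negative descriptors:
    H(x, y) = sum_i sqrt(x_i * y_i) after L1 normalisation.
    Equals 1.0 for identical descriptors, 0.0 for disjoint support."""
    x = d1 / (np.abs(d1).sum() + 1e-12)
    y = d2 / (np.abs(d2).sum() + 1e-12)
    return float(np.sqrt(x * y).sum())
```

Because the square root compresses large bins, this measure is less dominated by a few strong gradient bins than the Euclidean distance, which is why it tends to produce fewer mismatches.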

  4. Image Registration of High-Resolution Uav Data: the New Hypare Algorithm

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.

    2013-08-01

    Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, they are, due to their agility, suitable for many applications. Hence the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by the registration of 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.

  5. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination, with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  6. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
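    One common definition of the modulus of the gradient of the colour planes combines per-channel gradient magnitudes into a single grayscale map; summing the mean response then gives a focus score. A NumPy sketch (the paper's exact operator may differ in detail):

```python
import numpy as np

def mgc(rgb):
    """Modulus of the gradient of the colour planes: per-pixel square
    root of the summed squared gradients of all channels, yielding one
    grayscale edge-strength map from a multichannel image."""
    gy, gx = np.gradient(rgb.astype(float), axis=(0, 1))
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))

def focus_measure(rgb):
    """Sharpness score for autofocus: mean MGC response over the image.
    An in-focus frame has stronger gradients, hence a higher score."""
    return float(mgc(rgb).mean())
```

Sweeping the focus axis and picking the frame with the maximal score is the autofocus use; comparing MGC maps pixel-wise across a focal stack gives the multifocus-fusion use.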

  7. Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Yang, Xu

    2017-12-01

    Registration of simultaneous polarization images is the premise of subsequent image fusion operations. However, in all-weather shooting, the polarization camera's exposure time must be kept unchanged, so polarization images captured under low illumination are sometimes too dark for the SURF algorithm to extract feature points, making registration impossible. This paper therefore proposes an improved SURF algorithm. Firstly, a luminance operator is used to improve the overall brightness of the low-illumination image; then the integral image is created, the Hessian matrix is used to extract points of interest and the main direction of each feature point, and the Haar wavelet responses in the X and Y directions are calculated to obtain the SURF descriptor. Then the RANSAC function is used for precise matching, which eliminates wrong matches and improves the accuracy rate. Finally, the brightness of the registered polarization image is restored, so the appearance of the polarization image is unaffected. Results show that the improved SURF algorithm performs well under low illumination conditions.

  8. An object tracking method based on guided filter for night fusion image

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoyan; Wang, Yuedong; Han, Lei

    2016-01-01

    Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method using a guided image filter for accurate and robust night fusion image tracking. Firstly, frame differencing is applied to produce a coarse target, which helps to generate observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
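    The guided filter itself (He et al.) is a closed-form local linear model: within each window the output is approximated as q = a*I + b under guidance image I. A compact NumPy sketch with a naive box filter and circular boundaries, illustrative of the filter only, not of the tracker described above:

```python
import numpy as np

def box(x, r):
    """Naive mean filter over a (2r+1)x(2r+1) window (circular borders)."""
    out = np.zeros_like(x, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter: edge-preserving smoothing of p guided by I.
    Per window, a = cov(I, p) / (var(I) + eps) and b = mean(p) - a*mean(I);
    averaging a and b over overlapping windows gives the output."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

Where the guide is flat, a tends to 0 and the filter averages p; where the guide has edges, a approaches 1 and the structure of I is transferred, which is what lets it sharpen a coarse foreground map along true object boundaries.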

  9. Segmentation of lung nodules in computed tomography images using dynamic programming and multidirection fusion techniques.

    PubMed

    Wang, Qian; Song, Enmin; Jin, Renchao; Han, Ping; Wang, Xiaotong; Zhou, Yanying; Zeng, Jianchao

    2009-06-01

    The aim of this study was to develop a novel algorithm for segmenting lung nodules on three-dimensional (3D) computed tomographic images to improve the performance of computer-aided diagnosis (CAD) systems. The database used in this study consists of two data sets obtained from the Lung Imaging Database Consortium. The first data set, containing 23 nodules (22% irregular nodules, 13% nonsolid nodules, 17% nodules attached to other structures), was used for training. The second data set, containing 64 nodules (37% irregular nodules, 40% nonsolid nodules, 62% nodules attached to other structures), was used for testing. Two key techniques were developed in the segmentation algorithm: (1) a 3D extended dynamic programming model, with a newly defined internal cost function based on the information between adjacent slices, allowing parameters to be adapted to each slice, and (2) a multidirection fusion technique, which makes use of the complementary relationships among different directions to improve the final segmentation accuracy. The performance of this approach was evaluated by the overlap criterion, complemented by the true-positive fraction and the false-positive fraction criteria. The mean values of the overlap, true-positive fraction, and false-positive fraction for the first data set achieved using the segmentation scheme were 66%, 75%, and 15%, respectively, and the corresponding values for the second data set were 58%, 71%, and 22%, respectively. The experimental results indicate that this segmentation scheme can achieve better performance for nodule segmentation than two existing algorithms reported in the literature. The proposed 3D extended dynamic programming model is an effective way to segment sequential images of lung nodules. The proposed multidirection fusion technique is capable of reducing segmentation errors especially for no-nodule and near-end slices, thus resulting in better overall performance.
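    The idea of a dynamic-programming model for boundary extraction can be illustrated in 2-D: accumulate costs row by row, allowing at most a one-column move per step, then backtrack the cheapest path. This toy version omits the paper's adjacent-slice internal cost function and 3-D extension:

```python
import numpy as np

def min_cost_path(cost):
    """Dynamic programming over a 2-D cost image: minimum-cost path from
    the top row to the bottom row, moving at most one column per step.
    Returns the column index chosen in each row."""
    h, w = cost.shape
    acc = cost.astype(float)
    for i in range(1, h):
        # candidates: come from column j-1, j, or j+1 of the previous row
        left = np.roll(acc[i - 1], 1);  left[0] = np.inf
        right = np.roll(acc[i - 1], -1); right[-1] = np.inf
        acc[i] += np.minimum(acc[i - 1], np.minimum(left, right))
    # backtrack from the cheapest cell in the last row
    path = [int(acc[-1].argmin())]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        path.append(lo + int(acc[i, lo:hi].argmin()))
    return path[::-1]
```

Applied to a polar-transformed nodule slice, such a path traces a closed boundary; the paper extends the recurrence with inter-slice terms so the 3-D surface stays consistent.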

  10. Anatomic vascular phantom for the verification of MRA and XRA visualization and fusion

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; Lambert, Timothy; Zrimec, Tatjana; Hiller, John B.

    1995-05-01

    A project is underway to develop automated methods of fusing cerebral magnetic resonance angiography (MRA) and x-ray angiography (XRA) for creating accurate visualizations used in planning treatment of vascular disease. We have developed a vascular phantom suitable for testing segmentation and fusion algorithms with either derived images (pseudo-MRA/pseudo-XRA) or actual MRA or XRA image sequences. The initial unilateral arterial phantom design, based on normal human anatomy, contains 48 tapering vascular segments with lumen diameters from 2.5 millimeter to 0.25 millimeter. The initial phantom used rapid prototyping technology (stereolithography) with a 0.9 millimeter vessel wall fabricated in an ultraviolet-cured plastic. The model fabrication resulted in a hollow vessel model comprising the internal carotid artery, the ophthalmic artery, and the proximal segments of the anterior, middle, and posterior cerebral arteries. The complete model was fabricated but the model's lumen could not be cleared for vessels with less than 1 millimeter diameter. Measurements of selected vascular outer diameters as judged against the CAD specification showed an accuracy of 0.14 mm and precision (standard deviation) of 0.15 mm. The plastic vascular model produced provides a fixed geometric framework for the evaluation of imaging protocols and the development of algorithms for both segmentation and fusion.

  11. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  12. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.

    PubMed

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-13

    The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions.
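    A single-level pixel-wise wavelet fusion can be sketched with the Haar transform standing in for Daubechies: average the approximation (LL) bands and keep the larger-magnitude detail coefficients. A toy version for even-sized grayscale images, not the authors' DWT pipeline:

```python
import numpy as np

def haar2(x):
    """One level of the 2-D Haar transform: (LL, LH, HL, HH) subbands.
    Input dimensions must be even."""
    a = (x[0::2] + x[1::2]) / 2.0        # vertical average
    d = (x[0::2] - x[1::2]) / 2.0        # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    def merge_cols(s, t):
        out = np.empty((s.shape[0], s.shape[1] * 2))
        out[:, 0::2] = s + t
        out[:, 1::2] = s - t
        return out
    a, d = merge_cols(ll, lh), merge_cols(hl, hh)
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2] = a + d
    out[1::2] = a - d
    return out

def fuse(img1, img2):
    """Pixel-level wavelet fusion: mean of the LL bands, max-magnitude
    selection in each detail band, then inverse transform."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)
```

Averaging the low band balances overall intensity between the two sources, while max-magnitude selection keeps the strongest edge and texture detail from either image.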

  13. Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents

    PubMed Central

    Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam

    2017-01-01

    The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions. PMID:28098797

  14. Binocular Multispectral Adaptive Imaging System (BMAIS)

    DTIC Science & Technology

    2010-07-26

    system for pilots that adaptively integrates shortwave infrared (SWIR), visible, near-IR (NIR), off-head thermal, and computer symbology/imagery into...respective areas. BMAIS is a binocular helmet mounted imaging system that features dual shortwave infrared (SWIR) cameras, embedded image processors and...algorithms and fusion of other sensor sites such as forward looking infrared (FLIR) and other aircraft subsystems. BMAIS is attached to the helmet

  15. 3D-SIFT-Flow for atlas-based CT liver image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yan, E-mail: xuyan04@gmail.com; Xu, Chenchao, E-mail: chenchaoxu33@gmail.com; Kuang, Xiao, E-mail: kuangxiao.ace@gmail.com

    Purpose: In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. Methods: In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to the former correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Results: Compared with existing registration algorithms, 3D-SIFT-Flow has its particular advantage in matching anatomical structures (such as the liver) that observe large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., Dice overlap ratio 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Conclusions: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundaries, and 3D label transfer is effective and efficient for improving the registration accuracy.

  16. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
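    The mono-modal similarity measures that the synthesis step enables are one-liners; for reference, an illustrative sketch of both:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: mono-modal similarity, lower is better,
    0 for identical images."""
    return float(((a - b) ** 2).sum())

def ncc(a, b):
    """Normalised cross-correlation: 1.0 for images identical up to an
    affine intensity change, which is why it tolerates gain/offset shifts."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum()
                 / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12))
```

Both assume the two inputs share an intensity relationship, which is exactly what the synthesized proxy images provide for a cross-contrast pair.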

  17. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  18. Applying a new unequally weighted feature fusion method to improve CAD performance of classifying breast lesions

    NASA Astrophysics Data System (ADS)

    Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images. Amongst them, 151 were malignant and 151 were benign. The study consists of the following 3 image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier combined with a bat optimization algorithm was used to optimally fuse the selected image features based on a predefined assessment performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset yielded an AUC of 0.75+/-0.04, which was significantly higher than using a single best feature (AUC=0.69+/-0.05) or the classifier with equally weighted features (AUC=0.73+/-0.05). This study demonstrated that, compared to the conventional equally weighted approach, an unequally weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
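    The contrast between equally and unequally weighted fusion can be illustrated with a plain logistic model, with gradient descent standing in for the paper's GLM-plus-bat-optimizer combination (an assumption for illustration, not their method):

```python
import numpy as np

def fit_fusion_weights(X, y, lr=0.5, n_iter=500):
    """Learn unequal per-feature fusion weights with a logistic model via
    plain gradient descent. Informative features end up with larger
    weights than noise features, unlike an equal-weight fusion."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of log-loss
        b -= lr * (p - y).mean()
    return w, b

def fuse_score(X, w, b):
    """Unequally weighted fusion of a feature vector into one score."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

With one informative feature and one pure-noise feature, the learned weights are markedly unequal, which is the behaviour the paper exploits.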

  19. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to combine all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
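    The LBP operator applied to the NSCT subbands is the standard 8-neighbour code: threshold each neighbour at the centre pixel and pack the results into a byte. A minimal NumPy sketch (illustrative; the paper additionally uses Gabor-filtered variants, LGBP):

```python
import numpy as np

def lbp(gray):
    """Basic 8-neighbour Local Binary Pattern for every interior pixel:
    each neighbour contributes one bit (1 if neighbour >= centre),
    giving a texture code in [0, 255]. Output is (H-2, W-2)."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        codes = codes | ((nb >= c).astype(np.uint8) * np.uint8(1 << bit))
    return codes
```

Histograms of these codes over image blocks form the robust representation that is then fused at score level.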

  20. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
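
    A minimal example of a proximal operator of the kind such solvers evaluate at every iteration is soft-thresholding, the prox of the L1 sparsity regularizer:

```python
# Sketch: prox_{t*||.||_1}(v) is element-wise soft-thresholding (shrinkage).
# This is the standard closed form, shown on toy values.

def soft_threshold(v, t):
    """Proximal operator of t*||x||_1: shrink each entry toward zero by t."""
    return [x - t if x > t else x + t if x < -t else 0.0 for x in v]

shrunk = soft_threshold([3.0, -0.5, 1.2, 0.0], 1.0)
```

Solvers like SDMM and PPXA alternate such prox evaluations with applications of the (adjoint) linear degradation operators, which is what makes them amenable to automatic, matrix-free GPU implementation.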

  1. On the creation of high spatial resolution imaging spectroscopy data from multi-temporal low spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yao, Wei; van Aardt, Jan; Messinger, David

    2017-05-01

    The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data to benefit ecosystem studies in particular. The onboard spectrometer will collect radiance spectra from the visible to short wave infrared (VSWIR) regions (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the development or reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growth season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to be from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm then can be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. 
Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configuration; ii) the pixel size of each image is x; and iii) at least r² images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA) to generate higher spatial resolution imagery (GSD 9m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to the HyspIRI data.
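
    The grid-alignment idea behind requirement iii) can be sketched for r = 2: four low-resolution images whose sampling grids are shifted by half a pixel interleave into one image with half the pixel size. This is a toy sketch of the ideal case only; real data would additionally need sub-pixel registration:

```python
# Sketch: interleave r*r shifted low-res images into one high-res image.
# imgs[dy][dx] is the low-res image whose grid is shifted by (dy, dx)/r pixels.

def interleave(imgs, r=2):
    h, w = len(imgs[0][0]), len(imgs[0][0][0])
    hi = [[0] * (w * r) for _ in range(h * r)]
    for dy in range(r):
        for dx in range(r):
            for y in range(h):
                for x in range(w):
                    hi[y * r + dy][x * r + dx] = imgs[dy][dx][y][x]
    return hi

# Four 1x2 "images" -> one 2x4 high-resolution image.
lows = [[[[1, 2]], [[3, 4]]],
        [[[5, 6]], [[7, 8]]]]
grid = interleave(lows)
```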

  2. Application and evaluation of ISVR method in QuickBird image fusion

    NASA Astrophysics Data System (ADS)

    Cheng, Bo; Song, Xiaolu

    2014-05-01

    QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high resolution remote sensing images based on ISVR is identified in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the different gains and errors of satellites' sensors. After transformation from DN to radiance, the multi-spectral image's energy is used to simulate the panchromatic band. Linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image, which is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method could significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
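
    The regression step that builds the synthetic panchromatic band can be sketched as ordinary least squares. The two-band closed form and the radiance values below are illustrative assumptions, not the ISVR method itself:

```python
# Sketch: fit pan ~ w1*band1 + w2*band2 by ordinary least squares, using the
# closed-form 2x2 normal equations. All radiance values are made up.

def fit_two_band(b1, b2, pan):
    """Least-squares weights for simulating pan from two spectral bands."""
    s11 = sum(a * a for a in b1)
    s22 = sum(a * a for a in b2)
    s12 = sum(a * b for a, b in zip(b1, b2))
    p1 = sum(a * p for a, p in zip(b1, pan))
    p2 = sum(a * p for a, p in zip(b2, pan))
    det = s11 * s22 - s12 * s12
    return (s22 * p1 - s12 * p2) / det, (s11 * p2 - s12 * p1) / det

b1, b2 = [1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 4.0, 3.0]
pan = [0.6 * a + 0.4 * b for a, b in zip(b1, b2)]  # exact linear mixture
w1, w2 = fit_two_band(b1, b2, pan)  # recovers 0.6 and 0.4
```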

  3. Processing LiDAR Data to Predict Natural Hazards

    NASA Technical Reports Server (NTRS)

    Fairweather, Ian; Crabtree, Robert; Hager, Stacey

    2008-01-01

    ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.

  4. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method which aims to stitch images with overlapping areas more seamlessly has been proposed. Since the traditional gradient-domain optimal seam method gives poor color difference measurement and the fusion algorithm is time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method exhibits good performance in eliminating the stitching seam compared with the traditional gradient optimal seam, and high efficiency compared with the multi-band blending algorithm.
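
    A minimum-energy seam of this kind is typically found by dynamic programming over the overlap region. In this hedged sketch the energy map is a made-up cost grid rather than the paper's HSV-space energy function:

```python
# Sketch: minimum-cost vertical seam through an energy map, computed by
# dynamic programming and backtracking (seam-carving style).

def optimal_seam(energy):
    h, w = len(energy), len(energy[0])
    cost = [row[:] for row in energy]
    for y in range(1, h):
        for x in range(w):
            # Cheapest reachable cell among the three neighbors above.
            cost[y][x] += min(cost[y - 1][max(x - 1, 0):min(x + 2, w)])
    # Backtrack from the cheapest bottom cell.
    x = min(range(w), key=cost[h - 1].__getitem__)
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo = max(x - 1, 0)
        x = min(range(lo, min(x + 2, w)), key=cost[y - 1].__getitem__)
        seam.append(x)
    return seam[::-1]  # seam[y] = column of the stitch path in row y

energy = [[1, 9, 9],
          [9, 1, 9],
          [9, 9, 1]]
seam = optimal_seam(energy)  # follows the cheap diagonal: [0, 1, 2]
```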

  5. Multi-Focus Image Fusion Based on NSCT and NSST

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen

    2015-12-01

    In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as the fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe and Qw) to evaluate the quality of fused images. The experimental results demonstrate that the proposed method outperforms other methods. It retains highly detailed edges and contours.
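
    The two fusion rules can be sketched in isolation. The NSCT/NSST transforms themselves are omitted, and absolute magnitude stands in for the paper's combined salience measure:

```python
# Sketch: fusion rules on subband coefficients from two source images.
# Low-frequency: average. High-frequency: pick the coefficient with the
# larger salience (here approximated by absolute magnitude).

def fuse_low(a, b):
    """Average rule for low-frequency (approximation) coefficients."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def fuse_high(a, b):
    """Choose-max rule for high-frequency (detail) coefficients."""
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

low = fuse_low([0.2, 0.8], [0.4, 0.4])
high = fuse_high([3.0, -0.1, 0.5], [-1.0, 2.0, 0.4])
```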

  6. Novel fusion for hybrid optical/microcomputed tomography imaging based on natural light surface reconstruction and iterated closest point

    NASA Astrophysics Data System (ADS)

    Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo

    2014-02-01

    In mathematics, optical molecular imaging modalities, including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT), are concerned with a similar inverse source problem: they all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and iterated closest point (ICP) was presented. It consists of an Octree structure, an exact visual hull from marching cubes, and ICP. Different from conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
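
    The alignment step inside ICP can be sketched in 2D: given matched point pairs, the rigid rotation and translation follow in closed form. A full ICP loop would re-match nearest points and repeat; correspondences here are assumed known, and the point sets are toy values:

```python
# Sketch: closed-form 2D rigid alignment (the per-iteration core of ICP).
import math

def best_rigid_2d(P, Q):
    """Rotation angle and translation minimizing sum |R p + t - q|^2."""
    n = len(P)
    cpx = sum(p[0] for p in P) / n; cpy = sum(p[1] for p in P) / n
    cqx = sum(q[0] for q in Q) / n; cqy = sum(q[1] for q in Q) / n
    # Cross-covariance terms of the centered point sets.
    sin_s = sum((p[0]-cpx)*(q[1]-cqy) - (p[1]-cpy)*(q[0]-cqx) for p, q in zip(P, Q))
    cos_s = sum((p[0]-cpx)*(q[0]-cqx) + (p[1]-cpy)*(q[1]-cqy) for p, q in zip(P, Q))
    theta = math.atan2(sin_s, cos_s)
    tx = cqx - (math.cos(theta) * cpx - math.sin(theta) * cpy)
    ty = cqy - (math.sin(theta) * cpx + math.cos(theta) * cpy)
    return theta, (tx, ty)

# Q is P rotated by 90 degrees and shifted by (1, 0).
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
Q = [(1.0, 0.0), (1.0, 1.0), (0.0, 0.0)]
theta, t = best_rigid_2d(P, Q)
```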

  7. [Image fusion: use in the control of the distribution of prostatic biopsies].

    PubMed

    Mozer, Pierre; Baumann, Michaël; Chevreau, Grégoire; Troccaz, Jocelyne

    2008-02-01

    Prostate biopsies are performed under 2D TransRectal UltraSound (US) guidance by sampling the prostate according to a predefined pattern. Modern image processing tools allow better control of biopsy distribution. We evaluated the accuracy of a single operator performing a pattern of 12 ultrasound-guided biopsies by registering 3D ultrasound control images acquired after each biopsy. For each patient, prostate image alignment was performed automatically with a voxel-based registration algorithm allowing visualization of each biopsy trajectory in a single ultrasound reference volume. On average, the operator reached the target in 60% of all cases. This study shows that it is difficult to accurately reach targets in the prostate using 2D ultrasound. In the near future, real-time fusion of MRI and US images will allow selection of a target in previously acquired MR images and biopsy of this target by US guidance.

  8. Textual and visual content-based anti-phishing: a Bayesian approach.

    PubMed

    Zhang, Haijun; Liu, Gang; Chow, Tommy W S; Liu, Wenyin

    2011-10-01

    A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases. © 2011 IEEE
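
    The fusion rule can be illustrated numerically: under a conditional-independence assumption, Bayes' theorem combines the two classifiers' outputs into one posterior. The likelihoods and the prior below are invented for illustration, not values from the paper:

```python
# Sketch: Bayes fusion of a text classifier and an image classifier,
# assuming their outputs are conditionally independent given the class.

def bayes_fuse(p_text_given_phish, p_text_given_legit,
               p_img_given_phish, p_img_given_legit, prior_phish):
    """Posterior probability that the page is phishing."""
    num = p_text_given_phish * p_img_given_phish * prior_phish
    den = num + p_text_given_legit * p_img_given_legit * (1 - prior_phish)
    return num / den

# Both classifiers lean toward "phishing"; the prior leans toward "legit".
post = bayes_fuse(0.8, 0.2, 0.7, 0.3, 0.1)
```

Even with a 10% prior, two agreeing classifiers push the posterior above 0.5, which is the kind of complementarity that makes the fused detector outperform either classifier alone.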

  9. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, all tracks' information is fused according to the weights of the different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image for an iris recognition system in several ways. (2) The proposed quality evaluation algorithm is self-adaptive, and it can automatically optimize the parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system. PMID:24693243

  10. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, all tracks' information is fused according to the weights of the different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image for an iris recognition system in several ways. (2) The proposed quality evaluation algorithm is self-adaptive, and it can automatically optimize the parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system.
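
    The optimizer itself can be sketched as a minimal particle swarm. The version below just minimizes a toy 1-D objective with arbitrary swarm settings, standing in for the PSO that tunes the evaluation-parameter and track weights:

```python
# Sketch: minimal PSO with inertia and cognitive/social attraction terms,
# minimizing a toy quadratic on [0, 1]. Swarm settings are arbitrary.
import random

def pso(objective, lo, hi, n=10, iters=60, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                    # each particle's best-seen position
    gbest = min(xs, key=objective)   # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (pbest[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to bounds
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
            if objective(xs[i]) < objective(gbest):
                gbest = xs[i]
    return gbest

w = pso(lambda x: (x - 0.3) ** 2, 0.0, 1.0)  # converges near 0.3
```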

  11. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    PubMed

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
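
    The edge-preserving behaviour can be sketched with 1-D nonlinear diffusion of Perona-Malik type. The conductance function and parameters below are generic illustrative choices, not the paper's saliency-driven variant:

```python
# Sketch: explicit 1-D nonlinear diffusion. Smoothing is damped where the
# local gradient is large, so strong edges survive while noise is smoothed.

def diffuse(signal, kappa=1.0, dt=0.2, steps=20):
    s = signal[:]
    for _ in range(steps):
        out = s[:]
        for i in range(1, len(s) - 1):
            ge, gw = s[i + 1] - s[i], s[i - 1] - s[i]
            ce = 1.0 / (1.0 + (ge / kappa) ** 2)  # small across strong edges
            cw = 1.0 / (1.0 + (gw / kappa) ** 2)
            out[i] = s[i] + dt * (ce * ge + cw * gw)
        s = out
    return s

# Small noise on each side of a large step edge.
noisy_step = [0.0, 0.1, -0.1, 0.0, 10.0, 10.1, 9.9, 10.0]
smoothed = diffuse(noisy_step)
```

After diffusion the in-region noise is flattened while the 0-to-10 jump remains nearly intact, which is the property the multiscale classifier exploits.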

  12. Multi-PSF fusion in image restoration of range-gated systems

    NASA Astrophysics Data System (ADS)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui

    2018-07-01

    For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere is associated with many factors, such as back scattering, background radiation, the diffraction limit and the vibration of the platform. On one hand, due to the difficulty of constructing models for all factors, the kernels from physical-model based methods are not strictly accurate and practical. On the other hand, there are few strong edges in the images, which brings significant errors to most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach which combines the physical model with image features. With a fusion strategy using a GCRF (Gaussian Conditional Random Fields) framework, we get a final kernel which is closer to the actual one. Aiming at the problem that the ground-truth image is difficult to obtain, we then propose a semi data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser transfers through the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
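
    The RL deconvolution step can be sketched in 1-D. The paper's EM-based kernel estimation and GCRF fusion are omitted; the signal and kernel below are toy values:

```python
# Sketch: plain 1-D Richardson-Lucy deconvolution with a known kernel.

def convolve(x, k):
    pad = len(k) // 2
    xp = [x[0]] * pad + x + [x[-1]] * pad  # replicate borders
    return [sum(xp[i + j] * k[j] for j in range(len(k))) for i in range(len(x))]

def richardson_lucy(blurred, kernel, iters=200):
    mirror = kernel[::-1]
    est = [1.0] * len(blurred)  # flat nonnegative initial estimate
    for _ in range(iters):
        ratio = [b / max(c, 1e-12) for b, c in zip(blurred, convolve(est, kernel))]
        est = [e * c for e, c in zip(est, convolve(ratio, mirror))]
    return est

kernel = [0.25, 0.5, 0.25]
sharp = [0.0, 0.0, 4.0, 0.0, 0.0]
blurred = convolve(sharp, kernel)          # [0.0, 1.0, 2.0, 1.0, 0.0]
restored = richardson_lucy(blurred, kernel)  # peak re-concentrates at center
```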

  13. Registration and Fusion of Multiple Source Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline

    2004-01-01

    Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.

  14. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the lacking detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially for the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.

  15. Vision, healing brush, and fiber bundles

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor

    2005-03-01

    The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses fibre bundles and connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of healing in an arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression and can be applied to other image processing algorithms.
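
    The gradient-domain idea behind seamless cloning can be sketched in 1-D: keep the source patch's gradients but the destination's boundary values, then solve the resulting second-difference (Poisson) system. The signals below are toy values and Gauss-Seidel stands in for a production solver:

```python
# Sketch: 1-D Poisson cloning. Solve f'' = src'' on the interior, with
# f pinned to the destination's values at both boundary pixels.

def poisson_clone_1d(src, left, right, iters=2000):
    n = len(src)
    f = [left] + [0.0] * (n - 2) + [right]
    for _ in range(iters):
        for i in range(1, n - 1):
            lap = src[i - 1] - 2 * src[i] + src[i + 1]  # target Laplacian
            f[i] = (f[i - 1] + f[i + 1] - lap) / 2      # Gauss-Seidel update
    return f

src = [0.0, 1.0, 4.0, 9.0, 16.0]         # patch to clone (x^2 shape)
out = poisson_clone_1d(src, 10.0, 26.0)  # destination boundary values
```

Because both boundaries are offset by the same amount here, the solution is the source shifted by a constant, `[10, 11, 14, 19, 26]`: the patch's shape is preserved while its values blend into the destination.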

  16. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques.

    PubMed

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable to use. Streaking artifacts and cavities around teeth are the main reasons for the degradation. In this article, we have proposed a new artifact reduction algorithm which has three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image which is more compatible with the human visual system. Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images.
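
    The first component can be sketched as fitting a two-class Gaussian mixture to intensity samples by EM, so that bright "tooth" pixels are separated from background by the posterior. The intensity samples below are synthetic, not CBCT values:

```python
# Sketch: EM for a 1-D two-component Gaussian mixture on intensity samples.
import math, random

def em_gmm_1d(xs, iters=60):
    mean = sum(xs) / len(xs)
    v0 = sum((x - mean) ** 2 for x in xs) / len(xs)  # shared initial variance
    mu, var, pi = [min(xs), max(xs)], [v0, v0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi

rng = random.Random(1)
xs = ([rng.gauss(40, 5) for _ in range(200)]     # dark background mode
      + [rng.gauss(200, 10) for _ in range(100)])  # bright "tooth" mode
mu, var, pi = em_gmm_1d(xs)
```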

  17. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques

    PubMed Central

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable to use. Streaking artifacts and cavities around teeth are the main reasons for the degradation. Methods: In this article, we have proposed a new artifact reduction algorithm which has three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: The results of these three components are combined in a fusion step to create a visually good image which is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images. PMID:29535920

  18. SDIA: A dynamic situation driven information fusion algorithm for cloud environment

    NASA Astrophysics Data System (ADS)

    Guo, Shuhang; Wang, Tong; Wang, Jian

    2017-09-01

    Information fusion is an important issue in the information integration domain. In order to form an extensive information fusion technology under complex and diverse situations, a new information fusion algorithm is proposed. Firstly, a fuzzy evaluation model of tag utility is proposed that can be used to compute the tag entropy. Secondly, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Thirdly, the similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Next, in order to reduce the time complexity of the tree-compatible matching algorithm, a fast ordered tree matching algorithm based on node entropy is proposed, which is used to support information fusion by ubiquitous situation. Since the algorithm evolved from graph-theoretic unordered tree matching, it can improve the recall and precision rates of information fusion in the situation. The information fusion algorithm is compared with the star and random tree matching algorithms, and the differences between the three algorithms are analyzed from the viewpoint of isomorphism, which demonstrates the novelty and applicability of the algorithm.

  19. [Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].

    PubMed

    Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T

    2003-10-01

    Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance of and the time needed for fusing MRI and SPECT images using a semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where needed, that for manual realignment after automated but insufficient fusion. The mean time of the automated fusion procedure was 123 s. It was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the remaining 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged in less than 15 min with the corresponding MRI data, which seems acceptable for clinical routine use.

  20. Drug-related webpages classification based on multi-modal local decision fusion

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin

    2018-03-01

    In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-feature of the BOW model. For each instance in a webpage, the seven SVMs give seven labels for its image, and another seven labels are given by searching for the names of drug-taking instruments and cannabis in its related text. Concatenating the seven labels of the image and the seven labels of the text generates the representation of the instances in webpages. Last, multi-instance learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.

  1. Tomographic data fusion with CFD simulations associated with a planar sensor

    NASA Astrophysics Data System (ADS)

    Liu, J.; Liu, S.; Sun, S.; Zhou, W.; Schlaberg, I. H. I.; Wang, M.; Yan, Y.

    2017-04-01

    Tomographic techniques have a great ability to interrogate combustion processes, especially when combined with physical models of the combustion itself. In this study, a data fusion algorithm is developed to investigate the flame distribution of a swirl-induced environmental (EV) burner, a new type of burner for low-NOx combustion. An electrical capacitance tomography (ECT) system is used to acquire 3D flame images and computational fluid dynamics (CFD) is applied to calculate an initial distribution of the temperature profile for the EV burner. Experiments were also carried out to visualize flames at a series of locations above the burner. While the ECT images essentially agree with the CFD temperature distribution, discrepancies exist at certain heights. When data fusion is applied, the discrepancy is visibly reduced and the ECT images are improved. The methods used in this study can lead to a new route where combustion visualization can be much improved and applied to clean energy conversion and new burner development.

  2. A dynamic fuzzy genetic algorithm for natural image segmentation using adaptive mean shift

    NASA Astrophysics Data System (ADS)

    Arfan Jaffar, M.

    2017-01-01

    In this paper, a colour image segmentation approach based on the hybridisation of adaptive mean shift (AMS), fuzzy c-means and genetic algorithms (GAs) is presented. Image segmentation is the perceptual grouping of pixels based on some likeness measure. A GA with fuzzy behaviour is adapted to maximise the fuzzy separation and minimise the global compactness among the clusters or segments in spatial fuzzy c-means (sFCM), adding diversity to the search process to find the global optimum. A simple fusion method is used to merge clusters and overcome the problem of over-segmentation. The results show that the technique outperforms state-of-the-art methods.
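    The sFCM component builds on the standard fuzzy c-means membership update. A minimal sketch of that update for 1-D pixel intensities (plain FCM only; the paper's spatial term and GA optimisation are not reproduced here):

```python
import numpy as np

def fcm_memberships(pixels, centers, m=2.0, eps=1e-9):
    """One fuzzy c-means membership update: u[i,k] is the degree to
    which pixel i belongs to cluster k (rows sum to 1)."""
    # (N, C) distances to each cluster centre; eps avoids divide-by-zero
    d = np.abs(pixels[:, None] - centers[None, :]) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

pixels = np.array([0.0, 0.1, 0.9, 1.0])
centers = np.array([0.0, 1.0])
u = fcm_memberships(pixels, centers)
```

Pixels close to a centre get memberships near 1 for that cluster; the fuzziness exponent `m` controls how soft the assignment is.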

  3. Developing an Automated Science Analysis System for Mars Surface Exploration for MSL and Beyond

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Hart, S. D.; Shi, X.; Siegel, V. L.

    2004-01-01

    We are developing an automated science analysis system that could be utilized by robotic or human explorers on Mars (or even in remote locations on Earth) to improve the quality and quantity of science data returned. Three components of this system (our rock, layer, and horizon detectors) [1] have been incorporated into the JPL CLARITY system for possible use by MSL and future Mars robotic missions. Two further components are a multi-spectral image compression (SPEC) algorithm for pancam-type images with multiple filters, and image fusion algorithms that identify the in-focus regions of individual images in an image focal series [2]. Recently, we have been working to combine image data, spectral data, and other knowledge to identify both rocks and minerals. Here we present our progress on developing an igneous rock detection system.

  4. Automatic Microaneurysms Detection Based on Multifeature Fusion Dictionary Learning

    PubMed Central

    Wang, Zhenzhu; Du, Wenyou

    2017-01-01

    Recently, microaneurysm (MA) detection has attracted a lot of attention in the medical image processing community. Since MAs can be seen as the earliest lesions in diabetic retinopathy, their detection plays a critical role in diabetic retinopathy diagnosis. In this paper, we propose a novel MA detection approach named multifeature fusion dictionary learning (MFFDL). The proposed method consists of four steps: preprocessing, candidate extraction, multifeature dictionary learning, and classification. The novelty of our proposed approach lies in incorporating the semantic relationships among multifeatures and dictionary learning into a unified framework for automatic detection of MAs. We evaluate the proposed algorithm by comparing it with the state-of-the-art approaches and the experimental results validate the effectiveness of our algorithm. PMID:28421125

  5. Automatic Microaneurysms Detection Based on Multifeature Fusion Dictionary Learning.

    PubMed

    Zhou, Wei; Wu, Chengdong; Chen, Dali; Wang, Zhenzhu; Yi, Yugen; Du, Wenyou

    2017-01-01

    Recently, microaneurysm (MA) detection has attracted a lot of attention in the medical image processing community. Since MAs can be seen as the earliest lesions in diabetic retinopathy, their detection plays a critical role in diabetic retinopathy diagnosis. In this paper, we propose a novel MA detection approach named multifeature fusion dictionary learning (MFFDL). The proposed method consists of four steps: preprocessing, candidate extraction, multifeature dictionary learning, and classification. The novelty of our proposed approach lies in incorporating the semantic relationships among multifeatures and dictionary learning into a unified framework for automatic detection of MAs. We evaluate the proposed algorithm by comparing it with the state-of-the-art approaches and the experimental results validate the effectiveness of our algorithm.

  6. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
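    The static core of a Kalman-based multicamera fusion is the covariance-weighted combination of independent measurements. A minimal sketch (illustrative only; the paper's filter also handles dynamics, camera motion, and occlusion):

```python
import numpy as np

def fuse_measurements(z1, P1, z2, P2):
    """Covariance-weighted fusion of two pose measurements: the more
    certain measurement (smaller covariance) gets the larger weight."""
    P1, P2 = np.atleast_2d(P1), np.atleast_2d(P2)
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    W = np.linalg.inv(P1_inv + P2_inv)          # fused covariance
    z = W @ (P1_inv @ np.atleast_1d(z1) + P2_inv @ np.atleast_1d(z2))
    return z, W

# Two cameras report a 1-D pose with equal uncertainty
z, P = fuse_measurements(np.array([1.0]), [[1.0]],
                         np.array([3.0]), [[1.0]])
```

With equal covariances the fused estimate is the mean of the two measurements and its covariance is halved, which is the accuracy gain multicamera fusion buys over a single sensor.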

  7. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, colour features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has the generalization ability and prediction accuracy to meet the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
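    Two of the GLCM texture features named above can be computed from a co-occurrence matrix in a few lines. A self-contained sketch (single offset, horizontal neighbours; production code would typically use a library routine such as scikit-image's `graycomatrix`):

```python
import numpy as np

def glcm_features(img, levels=4, dx=1, dy=0):
    """Build a grey-level co-occurrence matrix for one pixel offset and
    return two texture features: angular second moment and inertia."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                       # normalise to probabilities
    i, j = np.indices(glcm.shape)
    asm = np.sum(glcm ** 2)                  # angular second moment
    inertia = np.sum(glcm * (i - j) ** 2)    # inertia (contrast) moment
    return glcm, asm, inertia

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
glcm, asm, inertia = glcm_features(img)
```

Uniform textures concentrate the GLCM mass on few entries (high ASM); frequent jumps between distant grey levels raise the inertia moment.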

  8. Target discrimination of man-made objects using passive polarimetric signatures acquired in the visible and infrared spectral bands

    NASA Astrophysics Data System (ADS)

    Lavigne, Daniel A.; Breton, Mélanie; Fournier, Georges; Charette, Jean-François; Pichette, Mario; Rivet, Vincent; Bernier, Anne-Pier

    2011-10-01

    Surveillance operations and search-and-rescue missions regularly exploit electro-optic imaging systems to detect targets of interest in both the civilian and military communities. By incorporating the polarization of light as supplementary information in such electro-optic imaging systems, it is possible to increase their target discrimination capabilities, since man-made objects are known to depolarize light in a different manner than natural backgrounds. Because electromagnetic radiation emitted and reflected from a smooth surface observed near a grazing angle becomes partially polarized in the visible and infrared wavelength bands, additional information about the shape, roughness, shading, and surface temperature of difficult targets can be extracted by effectively processing such reflected/emitted polarized signatures. This paper presents a set of polarimetric image processing algorithms devised to extract meaningful information from a broad range of man-made objects. Passive polarimetric signatures are acquired in the visible, shortwave infrared, midwave infrared, and longwave infrared bands using a fully automated imaging system developed at DRDC Valcartier. A fusion algorithm is used to enable the discrimination of objects lying in shadowed areas. Performance metrics, derived from the computed Stokes parameters, characterize the degree of polarization of man-made objects. Field experiments conducted during winter and summer demonstrate: 1) the utility of the imaging system for collecting polarized signatures of different objects in the visible and infrared spectral bands, and 2) the enhanced performance of target discrimination and fusion algorithms that exploit the polarized signatures of man-made objects against cluttered backgrounds.
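    A standard polarization metric of the kind derived from Stokes parameters is the degree of linear polarization (DoLP). A sketch computing it from four polarizer-angle images (the conventional formulation; the paper's exact performance metrics are not given in the abstract):

```python
import numpy as np

def degree_of_linear_polarization(I0, I45, I90, I135):
    """Linear Stokes parameters and DoLP from intensity images taken
    through a polarizer at 0, 45, 90, and 135 degrees."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                        # horizontal vs. vertical
    S2 = I45 - I135                      # +45 vs. -45 degrees
    return np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)

# A fully horizontally polarized pixel: DoLP should be 1
dolp = degree_of_linear_polarization(np.array([1.0]), np.array([0.5]),
                                     np.array([0.0]), np.array([0.5]))
```

Natural backgrounds tend toward DoLP near 0, while smooth man-made surfaces near grazing angles push DoLP up, which is what makes the metric useful for discrimination.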

  9. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
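    The four steps above can be sketched directly in array operations. One assumption here: the common component is taken as the pixel-wise minimum of the two inputs, a usual choice that the abstract does not spell out:

```python
import numpy as np

def fuse_false_color(vis, ir):
    """Pixel-wise false-color fusion following the steps above.

    Assumes the common component is the pixel-wise minimum; the paper
    may define it differently. Inputs are gray-level images in [0, 1].
    """
    common = np.minimum(vis, ir)                # step 1: common component
    vis_unique = vis - common                   # step 2: unique components
    ir_unique = ir - common
    red = np.clip(ir - vis_unique, 0.0, 1.0)    # step 3: IR minus visual-unique
    green = np.clip(vis - ir_unique, 0.0, 1.0)  #         visual minus IR-unique
    blue = np.zeros_like(vis)                   # step 4: map to R/G channels
    return np.stack([red, green, blue], axis=-1)

vis = np.array([[0.8, 0.2]])
ir = np.array([[0.2, 0.8]])
rgb = fuse_false_color(vis, ir)
```

A pixel bright only in the visual band comes out green, one bright only in IR comes out red, so each modality's unique detail keeps a distinct hue in the fused image.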

  10. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking role of knowledge in information fusion and is an attempt to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information. PMID:23956699

  11. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking role of knowledge in information fusion and is an attempt to translate the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.

  12. Optical design and development of a snapshot light-field laryngoscope

    NASA Astrophysics Data System (ADS)

    Zhu, Shuaishuai; Jin, Peng; Liang, Rongguang; Gao, Liang

    2018-02-01

    The convergence of recent advances in optical fabrication and digital processing has yielded a new generation of imaging technology: light-field (LF) cameras, which bridge the realms of applied mathematics, optics, and high-performance computing. Herein, for the first time, we introduce the paradigm of LF imaging into laryngoscopy. The resulting probe can image the three-dimensional shape of the vocal folds within a single camera exposure. Furthermore, to improve the spatial resolution, we developed an image fusion algorithm, providing a simple solution to a long-standing problem in LF imaging.

  13. Pediatric Brain Extraction Using Learning-based Meta-algorithm

    PubMed Central

    Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2012-01-01

    Magnetic resonance imaging of the pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and the dynamic change of tissue contrast in the developing brain. In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL) specifically for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, representative subjects are selected as exemplars and used to guide brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine multiple brain extractions, yielding a closed smooth surface for the final extraction. The proposed method has been extensively evaluated in subjects of three representative age groups: neonate (less than 2 months), infant (1–2 years), and child (5–18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method produces more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), i.e., an average Jaccard Index of 0.953, compared to those by BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with its greatly improved computational efficiency, the proposed method demonstrates its ability to perform automated brain extraction for pediatric MR images over a large age range. PMID:22634859
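    The Jaccard Index used to score the extractions above is the intersection-over-union of two binary masks. A minimal sketch:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Automated extraction vs. manual ground truth on a toy 2x3 image
auto = np.array([[1, 1, 0], [1, 0, 0]])
manual = np.array([[1, 1, 0], [0, 0, 0]])
score = jaccard(auto, manual)   # 2 overlapping voxels / 3 in the union
```

A score of 1.0 means the masks agree voxel-for-voxel; the 0.953 reported above therefore indicates near-complete overlap with the expert labels.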

  14. Automated segmentation of thyroid gland on CT images with multi-atlas label fusion and random classification forest

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald

    2015-03-01

    The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally delineated manually by radiologists or radiation oncologists, which is time consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures of similar intensity. In this work, the thyroid gland segmentation is initially estimated by a multi-atlas label fusion (MALF) algorithm. The segmentation is refined by supervised, statistical-learning-based voxel labeling with a random forest (RF) algorithm. MALF transfers expert-labeled thyroids from atlases to a target image using deformable registration. Errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Then, the RF employs an ensemble of decision trees trained on labeled thyroids to recognize features. The trained forest classifier is applied to the thyroid estimated by the MALF by voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as positive classes, and background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that the MALF achieved an overall 0.75 Dice Similarity Coefficient (DSC) and the RF classification further improved the DSC to 0.81.
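    The consensus step of label fusion can be illustrated with plain majority voting, the simplest consensus rule (real MALF implementations typically weight each atlas's vote by local image similarity, which the sketch below omits):

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Consensus step of multi-atlas label fusion: each voxel takes the
    label that most atlases propagated to it (unweighted voting)."""
    stack = np.stack(atlas_labels)      # (n_atlases, *image_shape)
    votes = (stack == 1).sum(axis=0)    # votes for the 'thyroid' label
    # A voxel is thyroid if a strict majority of atlases say so
    return (votes * 2 > stack.shape[0]).astype(np.uint8)

# Three atlases propagate slightly different labels to a 4-voxel target
a1 = np.array([1, 1, 0, 0])
a2 = np.array([1, 0, 0, 1])
a3 = np.array([1, 1, 0, 0])
consensus = majority_vote_fusion([a1, a2, a3])
```

Registration errors that differ between atlases tend to be outvoted, which is why fusion reduces the label-transfer errors noted in the abstract.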

  15. Statistical algorithms improve accuracy of gene fusion detection

    PubMed Central

    Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro

    2017-01-01

    Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529

  16. Optical asymmetric watermarking using modified wavelet fusion and diffractive imaging

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-05-01

    In most existing image encryption algorithms, the generated keys take the form of a noise-like distribution with a uniform histogram. However, the noise-like distribution is an apparent sign indicating the presence of keys. If the keys are transferred through a communication channel, this may lead to a security problem, because the noise-like features may easily catch an attacker's attention and invite more attacks. To address this problem, the keys need to be embedded into other, meaningful images to disguise them from attackers. Watermarking schemes are complementary to image encryption schemes. In most iterative encryption schemes, support constraints play the important role of keys for decrypting the meaningful data. In this article, we transfer the support constraints, which are generated by axial translation of a CCD camera using an amplitude- and phase-truncation approach, into different meaningful images. This is done with a modified fusion technique developed in the wavelet transform domain. A second issue is copyright protection in case the meaningful images are captured by an attacker. To resolve this issue, watermark detection plays a crucial role: it is necessary to recover the original image using the retrieved watermarks/support constraints. To this end, four asymmetric keys are generated, corresponding to each watermarked image, to retrieve the watermarks. For decryption, an iterative phase retrieval algorithm is applied to extract the plain-texts from the corresponding retrieved watermarks.

  17. Quality evaluation of different fusion techniques applied on Worldview-2 data

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristides; Nikolakopoulos, Konstantinos G.

    2015-10-01

    In the current study a Worldview-2 image was used for fusion quality assessment. The bundle image was collected in July 2014 over the Araxos area in the western Peloponnese. Worldview-2 is the first satellite that collects a panchromatic (Pan) image and an 8-band multispectral (MS) image at the same time. The Pan data have a spatial resolution of 0.46 m while the MS data have a spatial resolution of 1.84 m. In contrast to the respective Pan bands of Ikonos and Quickbird, which range between 0.45 and 0.90 micrometers, the Worldview Pan band is narrower, ranging between 0.45 and 0.8 micrometers. The MS bands include the four conventional visible and near-infrared bands common to multispectral satellites such as Ikonos, Quickbird, Geoeye and Landsat-7, plus four new bands. It is therefore of interest to assess commonly used fusion algorithms on Worldview-2 data. Twelve fusion techniques, namely Ehlers, Gram-Schmidt, Color Normalized, High Pass Filter, Hyperspherical Color Space, Local Mean Matching (LMM), Local Mean and Variance Matching (LMVM), Modified IHS (ModIHS), Pansharp, Pansharp2, PCA and Wavelet, were used for the fusion of the Worldview-2 panchromatic and multispectral data. The optical result, the statistical parameters and different quality indexes such as ERGAS, Q and entropy difference were examined and the results are presented. The quality was evaluated in both the spectral and spatial domains.
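    One of the quality indexes named above, ERGAS, has a compact closed form: a resolution-ratio-scaled root mean of the per-band relative RMSE. A minimal sketch (standard formulation; band layout and ratio convention are assumptions of this example):

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS quality index for pansharpening.

    fused, reference: (bands, H, W) arrays; ratio: Pan/MS resolution
    ratio (e.g. 0.46 / 1.84 = 0.25 for Worldview-2). Lower is better.
    """
    fused = np.asarray(fused, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(((fused - reference) ** 2).mean(axis=(1, 2)))
    means = reference.mean(axis=(1, 2))
    return 100.0 * ratio * np.sqrt(((rmse / means) ** 2).mean())

# A fused image identical to its reference scores a perfect 0
ref = np.ones((2, 4, 4))
perfect = ergas(ref, ref, ratio=0.25)
```

Because each band's RMSE is normalized by that band's mean, ERGAS stays comparable across sensors with different radiometric ranges.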

  18. Conceptual design of the CZMIL data processing system (DPS): algorithms and software for fusing lidar, hyperspectral data, and digital images

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Tuell, Grady

    2010-04-01

    The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.

  19. Automatic Structural Parcellation of Mouse Brain MRI Using Multi-Atlas Label Fusion

    PubMed Central

    Ma, Da; Cardoso, Manuel J.; Modat, Marc; Powell, Nick; Wells, Jack; Holmes, Holly; Wiseman, Frances; Tybulewicz, Victor; Fisher, Elizabeth; Lythgoe, Mark F.; Ourselin, Sébastien

    2014-01-01

    Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework. PMID:24475148

  20. Our solution for fusion of simultaneously acquired whole-body scintigrams and optical images as a useful tool in clinical practice in patients with differentiated thyroid carcinoma after radioiodine therapy.

    PubMed

    Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija

    2017-01-01

    After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions containing iodine-avid tissue; sometimes precise topographic localization of such regions is practically impossible. To address this problem, we developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusing the images from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS mode, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a C905 web camera (Logitech, USA) with Carl Zeiss optics, a native resolution of 1600×1200 pixels, a 34° field of view and 30 g weight, with autofocus turned off and auto white balance turned on. The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglas adapter. Our Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera; the Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts web-camera image acquisition before image acquisition begins on the GC and stops it when the GC completes its acquisition.
The web camera runs in continuous acquisition mode with a frame rate f that depends on the speed v of patient-bed movement (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed, and movement of the bed is checked using cross-correlation of successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image; stacking narrow central ROIs introduces negligible distortion. The first step in fusing the scintigram and the optical image is determining the spatial transformation between them. We performed an experiment with two markers (99mTc-pertechnetate point sources of 1 MBq) visible in both the WBS and optical images to find the coordinate transformation between them; the distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested the system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, average age 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq. Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of radioiodine therapy.
Based on our first results during clinical testing, we conclude that the system can improve the diagnostic ability of whole-body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
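    The frame-rate rule f = v/Δcm and the 15-pixel central-strip stacking described above can be sketched in a few lines; representing frames as lists of rows is a stand-in for real image arrays:

```python
def frame_rate(bed_speed_cm_s, step_cm=1.0):
    """Web-camera frame rate from the rule f = v / Δcm."""
    return bed_speed_cm_s / step_cm

def stitch_central_rois(frames, strip_px=15):
    """Stack the central horizontal strip of each captured frame into
    one whole-body optical image (each frame is a list of rows)."""
    whole = []
    for frame in frames:
        mid = len(frame) // 2
        start = mid - strip_px // 2
        whole.extend(frame[start:start + strip_px])
    return whole

# Bed speed 6 cm/min = 0.1 cm/s, default 1 cm step -> 0.1 frames/s
f = frame_rate(6.0 / 60.0)

# Two 31-row frames contribute 15 central rows each
frames = [list(range(31)), list(range(31))]
whole = stitch_central_rois(frames)
```

At the stated bed speed, the default settings thus require roughly one frame every ten seconds, well within a web camera's capability.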

  1. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wear of the wheel-set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. A line-structured laser is projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition handled in hardware-interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by a fusion of the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial CPU computation, which greatly improves real-time image processing capacity. When a wheel set is running at limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.

  2. Neural network-based feature point descriptors for registration of optical and SAR images

    NASA Astrophysics Data System (ADS)

    Abulkhanov, Dmitry; Konovalenko, Ivan; Nikolaev, Dmitry; Savchik, Alexey; Shvets, Evgeny; Sidorchuk, Dmitry

    2018-04-01

    Registration of images of different natures is an important technique used in image fusion, change detection, efficient information representation and other problems of computer vision. Solving this task using feature-based approaches is usually more complex than registering several optical images, because traditional feature descriptors (SIFT, SURF, etc.) perform poorly when the images have different natures. In this paper we consider the problem of registering SAR and optical images. We train a neural network to build feature point descriptors and use the RANSAC algorithm to align the found matches. Experimental results confirming the method's effectiveness are presented.

  3. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has remained a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744
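    The IHS fusion step can be illustrated per pixel: the intensity component of the upsampled multispectral pixel is swapped for the high-resolution panchromatic value. The sketch below uses the common equal-weight additive shortcut (I = (R+G+B)/3, then add pan − I to each band), which is an assumption for illustration, not the paper's exact formulation.

```python
def ihs_fuse(r, g, b, pan):
    """Per-pixel additive IHS-style pansharpening: replace the
    intensity of each RGB pixel with the panchromatic value while
    keeping the band differences (hue/saturation) unchanged."""
    fused = []
    for rr, gg, bb, p in zip(r, g, b, pan):
        i = (rr + gg + bb) / 3.0     # equal-weight intensity
        d = p - i                    # intensity correction from pan
        fused.append((rr + d, gg + d, bb + d))
    return fused

# One pixel: intensity 120 replaced by pan value 130.
out = ihs_fuse([90], [120], [150], [130])   # -> [(100.0, 130.0, 160.0)]
```

Note that the fused pixel's mean, (100 + 130 + 160)/3, equals the pan value by construction.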

  4. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has remained a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures.

  5. Fusion of multi-tracer PET images for dose painting.

    PubMed

    Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle

    2014-10-01

    PET imaging with the FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed; they provide complementary information for the definition of BTVs. Our aim is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Because noise and the partial volume effect lead, respectively, to uncertainty and imprecision in PET images, the segmentation and fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTVs from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels to deal with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are reduced using, respectively, prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performance is evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem, present the sensory information in a format that is easy for decision makers to interpret, and are suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution: each sensor data stream is transformed into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
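    A hedged sketch of the likelihood-map idea: if sensors are assumed conditionally independent given the target location, their per-cell likelihoods fuse by summing log-likelihoods. The sensor names, grid cells and values below are illustrative, not from the paper.

```python
import math

def fuse_likelihood_maps(maps):
    """Combine per-sensor likelihood maps defined on the same spatial
    grid by summing log-likelihoods (conditional independence
    assumption). Each map is a dict: cell -> likelihood."""
    fused = {}
    for m in maps:
        for cell, lik in m.items():
            fused[cell] = fused.get(cell, 0.0) + math.log(lik)
    return fused

# Two hypothetical modalities scoring the same two grid cells.
acoustic = {(0, 0): 0.2, (0, 1): 0.9}
seismic  = {(0, 0): 0.1, (0, 1): 0.8}
fused = fuse_likelihood_maps([acoustic, seismic])
best = max(fused, key=fused.get)      # cell most supported by both sensors
```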

  7. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is an effective denoising method for reducing speckle, but it also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.
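    The two thresholding rules used in the first fold are the standard hard and soft operators, sketched below on a plain list of wavelet coefficients; the block partitioning, the wavelet transform itself and the adaptive second fold are omitted.

```python
def hard_threshold(coeffs, t):
    """Hard thresholding: zero coefficients with magnitude below t,
    keep the rest unchanged."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    """Soft thresholding: zero small coefficients and shrink the
    surviving ones toward zero by t."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) >= t else 0.0
            for c in coeffs]

h = hard_threshold([5.0, -0.5, 2.0, -3.0], 1.0)  # [5.0, 0.0, 2.0, -3.0]
s = soft_threshold([5.0, -0.5, 2.0, -3.0], 1.0)  # [4.0, 0.0, 1.0, -2.0]
```

Hard thresholding preserves edge magnitudes but leaves residual noise spikes; soft thresholding is smoother at the cost of shrinking true detail, which is one reason the second fold restores the object from the original image.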

  8. Detection of person borne IEDs using multiple cooperative sensors

    NASA Astrophysics Data System (ADS)

    MacIntosh, Scott; Deming, Ross; Hansen, Thorkild; Kishan, Neel; Tang, Ling; Shea, Jing; Lang, Stephen

    2011-06-01

    The use of multiple cooperative sensors for the detection of person borne IEDs is investigated. The purpose of the effort is to evaluate the performance benefits of adding multiple sensor data streams to an aided threat detection algorithm, and to provide a quantitative analysis of which sensor data combinations improve overall detection performance. Testing includes both mannequins and human subjects with simulated suicide bomb devices of various configurations, materials, sizes and metal content. Aided threat recognition algorithms are being developed to test the detection performance of individual sensors against combined, fused sensor inputs. Sensors investigated include active and passive millimeter wave imaging systems, passive infrared, 3-D profiling sensors and acoustic imaging. The paper describes the experimental set-up and outlines the methodology behind a decision fusion algorithm based on the concept of a "body model".

  9. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection.

    PubMed

    Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel

    2015-07-01

    Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors' proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). 
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
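    Of the label fusion strategies compared above, majority voting is the simplest to write down. The sketch below fuses propagated atlas labelings voxel by voxel; tie-breaking by the smallest label is an arbitrary choice for determinism, and the joint label fusion favored by the authors additionally weights atlases by local similarity to the target image.

```python
from collections import Counter

def majority_vote(label_maps):
    """Per-voxel majority voting over propagated atlas labelings.
    label_maps: list of equal-length label lists, one per atlas."""
    fused = []
    for votes in zip(*label_maps):          # one tuple of votes per voxel
        counts = Counter(votes)
        top = max(counts.values())
        fused.append(min(l for l, c in counts.items() if c == top))
    return fused

# Three hypothetical atlases labeling four voxels (0 = background).
atlases = [[1, 2, 0, 3],
           [1, 2, 2, 3],
           [1, 0, 2, 0]]
fused = majority_vote(atlases)              # -> [1, 2, 2, 3]
```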

  10. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method in current use, showed an accuracy of 65.7%, whereas the proposed method predicted the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
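    The likelihood ratio-based score fusion step can be sketched as summing per-feature log likelihood ratios and thresholding the sum at zero. The Gaussian class-conditional densities below are illustrative assumptions, not the densities estimated in the study.

```python
import math

def lr_score_fusion(scores, densities):
    """Sum log likelihood ratios log(p(s|class1)/p(s|class0)) over
    features. densities: per-feature pairs of (class1_pdf, class0_pdf)."""
    return sum(math.log(p1(s) / p0(s))
               for s, (p1, p0) in zip(scores, densities))

def gauss(mu):
    """Unit-variance Gaussian pdf centered at mu (toy model)."""
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Three features, class means 1.0 (class 1) and 0.0 (class 0).
dens = [(gauss(1.0), gauss(0.0))] * 3
fused = lr_score_fusion([0.9, 0.8, 0.7], dens)   # positive -> class 1
decision = 1 if fused > 0 else 0
```

For unit-variance Gaussians the per-feature log ratio reduces to x − 0.5, so the fused score here is 0.4 + 0.3 + 0.2 = 0.9.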

  11. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is achievable. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
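    The cross-correlation check on a fused result is the standard normalized cross-correlation, sketched below for two flattened images; a value near 1 indicates the fused image varies linearly with the reference. The sample pixel lists are illustrative.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images
    given as flat pixel lists; ranges from -1 to 1."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# A perfectly linearly related pair scores 1.0.
score = ncc([1, 2, 3, 4], [2, 4, 6, 8])
```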

  12. Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.

    PubMed

    Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro

    2016-09-30

    In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a change detection problem. For this purpose two sets of SPOT5-PAN images have been used, which are in turn used for Change Detection Index (CDI) calculation. For minimizing radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one CDI or another for a change detection process is a subjective task, as each CDI is more or less sensitive to certain types of changes. Likewise, this idea might be employed to create and improve a "change map", which can be accomplished by means of the CDI's informational content. For this purpose, information metrics such as Shannon entropy and "specific information" have been used to weight the change and no-change categories contained in a certain CDI and are thus introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdfs) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances.
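    The Shannon entropy weighting of a CDI can be sketched directly from its histogram: a CDI whose values are spread out carries more information (higher entropy) than one concentrated in a single bin. The bin count and value range below are illustrative choices.

```python
import math

def shannon_entropy(values, bins=8, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of the histogram of a change-detection
    index; values are assumed to lie in [lo, hi]."""
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

flat = [i / 16 for i in range(16)]   # spread-out CDI: 3.0 bits (log2 of 8 bins)
peaked = [0.5] * 16                  # single-valued CDI: 0.0 bits
```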

  13. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.

  14. Pansharpening in coastal ecosystems using Worldview-2 imagery

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Marcello-Ruiz, Javier; Gonzalo-Martin, Consuelo

    2016-10-01

    Both climate change and anthropogenic pressure are producing a decline in ecosystem natural resources. In this work, a vulnerable coastal ecosystem, the Maspalomas Natural Reserve (Canary Islands, Spain), is analyzed. The development of advanced image processing techniques, applied to new satellites with very high resolution (VHR) sensors, is essential to obtain accurate and systematic information about such natural areas. Thus, remote sensing offers a practical and cost-effective means for good environmental management, although some improvements are needed through the application of pansharpening techniques. A preliminary assessment was performed selecting classical and new algorithms that could achieve good performance with WorldView-2 imagery. Moreover, different quality indices were used in order to assess which pansharpening technique gives the best fused image. A total of 7 pansharpening algorithms were analyzed using 6 spectral and spatial quality indices. The quality assessment was implemented for the whole set of multispectral bands, as well as for the bands covered by the wavelength range of the panchromatic image and those outside of it. After an extensive evaluation, the most suitable algorithm was the Weighted Wavelet `à trous' through Fractal Dimension Maps technique, which provided the best compromise between the spectral and spatial quality of the image. Finally, Quality Map Analysis was performed in order to study the fusion in each band at the local level. In conclusion, a novel analysis has been conducted covering the evaluation of fusion methods in shallow water areas, and the results of this study have been applied to the generation of challenging thematic maps of coastal and dune protected areas.

  15. A 3D Scan Model and Thermal Image Data Fusion Algorithms for 3D Thermography in Medicine

    PubMed Central

    Klima, Ondrej

    2017-01-01

    Objectives At present, medical thermal imaging is still considered a mere qualitative tool, enabling us to distinguish between, but lacking the ability to quantify, the physiological and nonphysiological states of the body. Such a capability would, however, facilitate solving the problem of medical quantification, whose presence currently manifests itself within the entire healthcare system. Methods A generally applicable method to enhance captured 3D spatial data with temperature-related information is presented; in this context, all equations required for other data fusions are derived. The method can be utilized for high-density point clouds or detailed meshes at a high resolution but is equally usable for large objects with sparse points. Results The benefits of the approach are experimentally demonstrated on 3D thermal scans of injured subjects. We obtained diagnostic information inaccessible via traditional methods. Conclusion Fusing a 3D model with thermal image data allows the quantification of inflammation, facilitating more precise injury and illness diagnostics and monitoring. The technique offers a wide application potential in medicine and multiple technological domains, including electrical and mechanical engineering. PMID:29250306

  16. Meaning of Interior Tomography

    PubMed Central

    Wang, Ge; Yu, Hengyong

    2013-01-01

    The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation that is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it has been elevated as a general imaging principle. Finally, a novel framework known as “omni-tomography” is being developed for grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. PMID:23912256

  17. Pattern-Recognition System for Approaching a Known Target

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang

    2008-01-01

    A closed-loop pattern-recognition system is designed to provide guidance for maneuvering a small exploratory robotic vehicle (rover) on Mars to return to a landed spacecraft to deliver soil and rock samples that the spacecraft would subsequently bring back to Earth. The system could be adapted to terrestrial use in guiding mobile robots to approach known structures that humans could not approach safely, for such purposes as reconnaissance in military or law-enforcement applications, terrestrial scientific exploration, and removal of explosive or other hazardous items. The system has been demonstrated in experiments in which the Field Integrated Design and Operations (FIDO) rover (a prototype Mars rover equipped with a video camera for guidance) is made to return to a mockup of a Mars-lander spacecraft. The FIDO rover camera autonomously acquires an image of the lander from a distance of 125 m in an outdoor environment. Then, under guidance by an algorithm that performs fusion of multiple line and texture features in digitized images acquired by the camera, the rover traverses the intervening terrain, using features derived from images of the lander truss structure. Then, by use of precise pattern matching for determining the position and orientation of the rover relative to the lander, the rover aligns itself with the bottom of ramps extending from the lander, in preparation for climbing the ramps to deliver samples to the lander. The most innovative aspect of the system is a set of pattern-recognition algorithms that govern a three-phase visual-guidance sequence for approaching the lander. During the first phase, a multifeature fusion algorithm integrates the outputs of a horizontal-line-detection algorithm and a wavelet-transform-based visual-area-of-interest algorithm for detecting the lander from a significant distance.
The horizontal-line-detection algorithm is used to determine candidate lander locations based on detection of a horizontal deck that is part of the lander.

  18. Proton pinhole imaging on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Zylstra, A. B.; Park, H.-S.; Ross, J. S.; Fiuza, F.; Frenje, J. A.; Higginson, D. P.; Huntington, C.; Li, C. K.; Petrasso, R. D.; Pollock, B.; Remington, B.; Rinderknecht, H. G.; Ryutov, D.; Séguin, F. H.; Turnbull, D.; Wilks, S. C.

    2016-11-01

    Pinhole imaging of large (mm scale) carbon-deuterium (CD) plasmas by proton self-emission has been used for the first time to study the microphysics of shock formation, which is of astrophysical relevance. The 3 MeV deuterium-deuterium (DD) fusion proton self-emission from these plasmas is imaged using a novel pinhole imaging system, with up to five different 1 mm diameter pinholes positioned 25 cm from target-chamber center. CR39 is used as the detector medium, positioned at 100 cm distance from the pinhole for a magnification of 4 ×. A Wiener deconvolution algorithm is numerically demonstrated and used to interpret the images. When the spatial morphology is known, this algorithm accurately reproduces the size of features larger than about half the pinhole diameter. For these astrophysical plasma experiments on the National Ignition Facility, this provides a strong constraint on simulation modeling of the experiment.
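    The Wiener deconvolution referred to above can be sketched in one dimension with a naive DFT (the real analysis operates on 2-D proton images with the measured 1 mm pinhole response). The filter G = conj(H)·B / (|H|² + NSR) is the standard Wiener form; the box PSF and noise-to-signal ratio below are illustrative assumptions.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny examples)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(n)) for f in range(n)]

def idft(X):
    """Inverse DFT, returning the real parts."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n)
                for f in range(n)).real / n for k in range(n)]

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution:
    restored_f = blurred_f * conj(H_f) / (|H_f|^2 + NSR)."""
    B, H = dft(blurred), dft(psf)
    G = [Bf * Hf.conjugate() / (abs(Hf) ** 2 + nsr)
         for Bf, Hf in zip(B, H)]
    return idft(G)

# An impulse circularly blurred by a 3-tap box PSF equals the PSF itself;
# deconvolution should recover something close to the impulse.
psf = [1/3, 1/3, 0, 0, 0, 0, 0, 1/3]
blurred = [1/3, 1/3, 0, 0, 0, 0, 0, 1/3]
restored = wiener_deconvolve(blurred, psf)
```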

  19. Proton pinhole imaging on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zylstra, Alex B.; Park, H. -S.; Ross, J. S.

    Here, pinhole imaging of large (mm scale) carbon-deuterium (CD) plasmas by proton self-emission has been used for the first time to study the microphysics of shock formation, which is of astrophysical relevance. The 3 MeV deuterium-deuterium (DD) fusion proton self-emission from these plasmas is imaged using a novel pinhole imaging system, with up to five different 1 mm diameter pinholes positioned 25 cm from target-chamber center. CR39 is used as the detector medium, positioned at 100 cm distance from the pinhole for a magnification of 4×. A Wiener deconvolution algorithm is numerically demonstrated and used to interpret the images. When the spatial morphology is known, this algorithm accurately reproduces the size of features larger than about half the pinhole diameter. For these astrophysical plasma experiments on the National Ignition Facility, this provides a strong constraint on simulation modeling of the experiment.

  20. Proton pinhole imaging on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zylstra, A. B., E-mail: zylstra@lanl.gov; Park, H.-S.; Ross, J. S.

    Pinhole imaging of large (mm scale) carbon-deuterium (CD) plasmas by proton self-emission has been used for the first time to study the microphysics of shock formation, which is of astrophysical relevance. The 3 MeV deuterium-deuterium (DD) fusion proton self-emission from these plasmas is imaged using a novel pinhole imaging system, with up to five different 1 mm diameter pinholes positioned 25 cm from target-chamber center. CR39 is used as the detector medium, positioned at 100 cm distance from the pinhole for a magnification of 4 ×. A Wiener deconvolution algorithm is numerically demonstrated and used to interpret the images. When the spatial morphology is known, this algorithm accurately reproduces the size of features larger than about half the pinhole diameter. For these astrophysical plasma experiments on the National Ignition Facility, this provides a strong constraint on simulation modeling of the experiment.

  1. Proton pinhole imaging on the National Ignition Facility

    DOE PAGES

    Zylstra, Alex B.; Park, H. -S.; Ross, J. S.; ...

    2016-07-29

    Here, pinhole imaging of large (mm scale) carbon-deuterium (CD) plasmas by proton self-emission has been used for the first time to study the microphysics of shock formation, which is of astrophysical relevance. The 3 MeV deuterium-deuterium (DD) fusion proton self-emission from these plasmas is imaged using a novel pinhole imaging system, with up to five different 1 mm diameter pinholes positioned 25 cm from target-chamber center. CR39 is used as the detector medium, positioned at 100 cm distance from the pinhole for a magnification of 4×. A Wiener deconvolution algorithm is numerically demonstrated and used to interpret the images. When the spatial morphology is known, this algorithm accurately reproduces the size of features larger than about half the pinhole diameter. For these astrophysical plasma experiments on the National Ignition Facility, this provides a strong constraint on simulation modeling of the experiment.

  2. Proton pinhole imaging on the National Ignition Facility.

    PubMed

    Zylstra, A B; Park, H-S; Ross, J S; Fiuza, F; Frenje, J A; Higginson, D P; Huntington, C; Li, C K; Petrasso, R D; Pollock, B; Remington, B; Rinderknecht, H G; Ryutov, D; Séguin, F H; Turnbull, D; Wilks, S C

    2016-11-01

    Pinhole imaging of large (mm scale) carbon-deuterium (CD) plasmas by proton self-emission has been used for the first time to study the microphysics of shock formation, which is of astrophysical relevance. The 3 MeV deuterium-deuterium (DD) fusion proton self-emission from these plasmas is imaged using a novel pinhole imaging system, with up to five different 1 mm diameter pinholes positioned 25 cm from target-chamber center. CR39 is used as the detector medium, positioned at 100 cm distance from the pinhole for a magnification of 4 ×. A Wiener deconvolution algorithm is numerically demonstrated and used to interpret the images. When the spatial morphology is known, this algorithm accurately reproduces the size of features larger than about half the pinhole diameter. For these astrophysical plasma experiments on the National Ignition Facility, this provides a strong constraint on simulation modeling of the experiment.

  3. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  4. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
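The first registration method described above matches FOVs and resolutions directly through image resampling. A minimal sketch of that idea, using nearest-neighbour resampling in plain Python (the function name and the toy 2 × 2 patch are illustrative, not from the paper):

```python
def resample_nearest(image, out_h, out_w):
    """Nearest-neighbour resampling of a 2-D image (list of rows) to a
    target grid size, one simple way to bring two sensors onto a common
    resolution before fusion."""
    in_h, in_w = len(image), len(image[0])
    out = []
    for r in range(out_h):
        # Map the output row back to the nearest source row.
        src_r = min(int(r * in_h / out_h), in_h - 1)
        row = []
        for c in range(out_w):
            src_c = min(int(c * in_w / out_w), in_w - 1)
            row.append(image[src_r][src_c])
        out.append(row)
    return out

# Upsample a coarse 2x2 patch (e.g. from a lower-resolution IR sensor)
# to the 4x4 grid of a finer sensor.
patch = [[1, 2],
         [3, 4]]
print(resample_nearest(patch, 4, 4))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A real EVS pipeline would additionally correct for bore-sighting offsets and differing optical distortion, which this sketch ignores.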

  5. Introducing two Random Forest based methods for cloud detection in remote sensing images

    NASA Astrophysics Data System (ADS)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

Cloud detection is a necessary phase in satellite image processing for retrieving atmospheric and lithospheric parameters. Some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both the spectral and the textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF-based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), which incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF), including the Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR and thermal classifiers (DLFRF), for highly accurate cloud detection in remote sensing images. FLFRF first fuses visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels as cloud, snow/ice and background, or as thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and feeds each set of features into the RF model. It then retains the vote matrix from each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods: Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods. The quantitative values on Landsat 8 images show a similar trend. Consequently, while SVM and K-nearest neighbor overestimate cloud and snow/ice pixels, our Random Forest (RF) based models achieve higher cloud and snow/ice kappa values on MODIS images, and higher thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images, whereas the existing cloud detection algorithm, Fmask, cannot discriminate between them. Compared to the state-of-the-art methods, our algorithms achieve higher average cloud and snow/ice kappa values across different spatial resolutions.
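The decision-level step in DLFRF reduces to a per-pixel majority vote over the individual classifiers' label outputs. A minimal sketch, assuming each classifier emits one label per pixel (the label names and the three toy vote vectors below are invented for illustration):

```python
from collections import Counter

def majority_vote_fusion(vote_matrices):
    """Decision-level fusion: each classifier contributes one predicted
    label per pixel; the fused label is the per-pixel majority vote."""
    fused = []
    for pixel_votes in zip(*vote_matrices):
        # most_common(1) returns the label with the highest vote count.
        fused.append(Counter(pixel_votes).most_common(1)[0][0])
    return fused

# Hypothetical per-pixel label vectors from three classifiers
# (e.g. visible, IR and thermal feature sets).
visible  = ["cloud", "snow", "background", "cloud"]
infrared = ["cloud", "snow", "cloud", "background"]
thermal  = ["background", "snow", "cloud", "cloud"]
print(majority_vote_fusion([visible, infrared, thermal]))
# → ['cloud', 'snow', 'cloud', 'cloud']
```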

  6. A synthetic dataset for evaluating soft and hard fusion algorithms

    NASA Astrophysics Data System (ADS)

    Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey

    2011-06-01

    There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.

  7. Drug related webpages classification using images and text information based on multi-kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan

    2015-12-01

In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused using several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
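At its simplest, the multi-kernel idea combines per-modality base kernels into one fused kernel. The sketch below uses fixed, hand-picked weights purely for illustration; in actual MKL the weights are learned jointly with the classifier, and the BOW vectors shown are toy values:

```python
def linear_kernel(x, y):
    """Plain dot product between two feature vectors."""
    return sum(a * b for a, b in zip(x, y))

def combined_kernel(x_text, y_text, x_img, y_img, w_text=0.5, w_img=0.5):
    """Fused kernel as a convex combination of a text kernel and an
    image kernel (weights fixed here; learned in real MKL)."""
    return (w_text * linear_kernel(x_text, y_text)
            + w_img * linear_kernel(x_img, y_img))

# Toy BOW representations for two webpages.
k = combined_kernel([1, 0, 2], [2, 1, 0],   # text BOW vectors
                    [1, 1], [0, 3])         # image BOW vectors
print(k)  # → 2.5
```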

  8. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
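The "weighted combination of the extracted pose parameters" used to fuse the MOCCD and ICP estimates can be sketched as a normalized weighted average. The weights and pose vectors below are hypothetical (the paper determines its weights iteratively, which is omitted here):

```python
def fuse_pose(pose_a, pose_b, w_a, w_b):
    """Fuse two pose-parameter vectors by a normalized weighted average,
    mirroring the weighted combination of two independent estimates
    (e.g. one top-down, one bottom-up)."""
    total = w_a + w_b
    return [(w_a * a + w_b * b) / total for a, b in zip(pose_a, pose_b)]

# Toy 2-parameter poses (say, one translation and one rotation value),
# with the first estimator trusted three times as much as the second.
print(fuse_pose([0.0, 10.0], [2.0, 14.0], w_a=3.0, w_b=1.0))
# → [0.5, 11.0]
```

In practice the weights would reflect each algorithm's estimated uncertainty for the current frame rather than fixed constants.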

  9. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsatisfactory results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature-selection based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635

  10. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but also a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsatisfactory results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature-selection based decision fusion on a synthetic database generated by OKTAL-SE.

  11. FZUImageReg: A toolbox for medical image registration and dose fusion in cervical cancer radiotherapy

    PubMed Central

    Bai, Penggang; Du, Min; Ni, Xiaolei; Ke, Dongzhong; Tong, Tong

    2017-01-01

The combination of external-beam radiotherapy (EBRT) and high-dose-rate brachytherapy (HDR-BT) is a standard form of treatment for patients with locally advanced uterine cervical cancer. Personalized radiotherapy in cervical cancer requires efficient and accurate dose planning and assessment across these types of treatment. To achieve radiation dose assessment, accurate mapping of the dose distribution from HDR-BT onto EBRT is extremely important. However, few systems can achieve robust dose fusion and determine the accumulated dose distribution during the entire course of treatment. We have therefore developed a toolbox (FZUImageReg), a user-friendly dose fusion system based on hybrid image registration for radiation dose assessment in cervical cancer radiotherapy. The main part of the software consists of a collection of medical image registration algorithms and a modular design with a user-friendly interface, which allows users to quickly configure, test, monitor, and compare different registration methods for a specific application. Owing to the large deformation involved, direct application of conventional state-of-the-art image registration methods is not sufficient for accurate alignment of EBRT and HDR-BT images. To solve this problem, a multi-phase non-rigid registration method using local landmark-based free-form deformation is proposed for the locally large deformation between EBRT and HDR-BT images, followed by intensity-based free-form deformation. With the resulting transformation, the software also provides a dose mapping function according to the deformation field. The total dose distribution during the entire course of treatment can then be presented. Experimental results clearly show that the proposed system can achieve accurate registration between EBRT and HDR-BT images and provide radiation dose warping and fusion results for dose assessment in cervical cancer radiotherapy with high accuracy and efficiency. PMID:28388623

  12. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automated tools can help with parts of this otherwise manual assessment process. This paper presents a cloud-based evaluation framework, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data; the benchmark administrators then run the algorithms privately on an unseen common test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.

  13. Integrating Millimeter Wave Radar with a Monocular Vision Sensor for On-Road Obstacle Detection Applications

    PubMed Central

    Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng

    2011-01-01

This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. Overall, a three-level fusion strategy based on the visual attention mechanism and the driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. An experimental method for radar-vision point alignment is then put forward; it is easy to operate and requires neither radar reflection intensity data nor special tools. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundaries of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. The experimental results show that the proposed method is simple and feasible. PMID:22164117

  14. Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications.

    PubMed

    Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng

    2011-01-01

This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. Overall, a three-level fusion strategy based on the visual attention mechanism and the driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. An experimental method for radar-vision point alignment is then put forward; it is easy to operate and requires neither radar reflection intensity data nor special tools. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundaries of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. The experimental results show that the proposed method is simple and feasible.

  15. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that the spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  16. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that the spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  17. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
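The core similarity criterion here, mutual information, can be estimated from joint and marginal intensity histograms. A minimal sketch over 1-D intensity lists (the paper's multidimensional variant additionally stacks ordinal features and applies normalization, both omitted here):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Mutual information between two equally sized intensity lists,
    estimated from their joint and marginal histograms (in nats)."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa = Counter(img_a)
    pb = Counter(img_b)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        # Sum p(a,b) * log(p(a,b) / (p(a) * p(b))) over observed pairs.
        mi += p_ab * math.log(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Perfectly aligned two-level images: MI equals the entropy, ln 2.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # ≈ 0.693
# Statistically independent intensities: MI is zero.
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
```

A registration loop would transform the floating image, recompute this criterion, and let an optimizer (here, the paper's immune algorithm) search the transformation parameters that maximize it.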

  18. [Medical imaging in tumor precision medicine: opportunities and challenges].

    PubMed

    Xu, Jingjing; Tan, Yanbin; Zhang, Minming

    2017-05-25

Tumor precision medicine is an emerging approach to tumor diagnosis, treatment and prevention that takes into account individual variability in environment, lifestyle and genetic information. Tumor precision medicine builds on the medical imaging innovations developed during the past decades, including new hardware, new imaging agents, standardized protocols, image analysis and multimodal image fusion technology. The development of automated and reproducible analysis algorithms has also made it possible to extract large amounts of information from image-based features. With the continuous development and mining of tumor clinical and imaging databases, radiogenomics, radiomics and artificial intelligence have flourished. These new technological advances therefore bring new opportunities and challenges to the application of imaging in tumor precision medicine.

  19. Multiple feature fusion via covariance matrix for visual tracking

    NASA Astrophysics Data System (ADS)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur and so on.
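The region covariance descriptor that fuses the color, edge and texture channels is simply the covariance matrix of the per-pixel feature vectors in a region. A minimal sketch in plain Python (the 2-D feature vectors below are toy values; a real tracker would use one d-dimensional vector per pixel):

```python
def region_covariance(features):
    """Region covariance descriptor: given a list of per-pixel feature
    vectors (e.g. colour, edge and texture responses), return their
    sample covariance matrix, a compact fused region representation."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    # Unbiased estimate: divide accumulated products by n - 1.
    return [[c / (n - 1) for c in row] for row in cov]

# Three pixels with two features each (perfectly correlated here).
print(region_covariance([[1, 2], [3, 4], [5, 6]]))
# → [[4.0, 4.0], [4.0, 4.0]]
```

Because the descriptor is a small d × d matrix regardless of region size, matching candidate regions against the model stays cheap, which is the efficiency property the abstract highlights.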

  20. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two couples, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each couple performed the registration jointly and saved it. The two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registration, and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved robust: limiting values were not exceeded with translational perturbations up to 2.5 cm, rotational perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.

  1. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe

    2017-08-01

The free and open access to all archived Landsat images, granted in 2008, has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial scale, were analyzed. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing them into two categories: change target and change agent detection.

  2. Spatial Aspects of Multi-Sensor Data Fusion: Aerosol Optical Thickness

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Zubko, V.; Gopalan, A.

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) investigated the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the multitude of NASA remote sensing data sets, and as part of a larger effort to integrate this capability in the GES-DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni). This initial study focused on merging daily mean Aerosol Optical Thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood technique to merge the pixel values where available. The algorithm was applied to two regional AOT subsets (with mostly regular and irregular gaps, respectively) and a set of AOT fields that differed only in the size and location of artificially created gaps. The Cumulative Semivariogram (CSV) was found to be sensitive to the spatial distribution of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
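For independent Gaussian measurements of the same pixel, the maximum likelihood merge reduces to an inverse-variance weighted mean, with gaps filled from whichever sensor reports a value. A sketch under those assumptions (the variances and AOT values below are hypothetical, not MODIS error estimates):

```python
def ml_merge(value_a, var_a, value_b, var_b):
    """Maximum-likelihood fusion of two independent Gaussian
    measurements of the same quantity: inverse-variance weighted mean."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * value_a + w_b * value_b) / (w_a + w_b)

def fuse_fields(terra, aqua, var_t=0.01, var_a=0.01):
    """Per-pixel merge of two daily AOT fields; where one sensor has a
    gap (None), fall back to the other sensor's value."""
    fused = []
    for t, a in zip(terra, aqua):
        if t is None and a is None:
            fused.append(None)       # gap remains in both fields
        elif t is None:
            fused.append(a)
        elif a is None:
            fused.append(t)
        else:
            fused.append(ml_merge(t, var_t, a, var_a))
    return fused

# With equal variances the overlap pixel becomes a plain average.
print(fuse_fields([0.2, None, 0.4], [0.4, 0.3, None]))
```

Pixels covered by both sensors get a variance-weighted compromise; pixels covered by one sensor inherit that value, which is how the merge increases spatial coverage.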

  3. Framework for cognitive analysis of dynamic perfusion computed tomography with visualization of large volumetric data

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2012-10-01

    The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output data of the algorithm consist of: regions of perfusion abnormalities, anatomy atlas descriptions of brain tissues, measures of perfusion parameters, and prognosis for infarcted tissues. That information is superimposed onto volumetric computed tomography data and displayed to radiologists. Our rendering algorithm enables rendering large volumes on off-the-shelf hardware. This portability is important because the framework can run without expensive dedicated hardware. Other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading off image quality for rendering speed. Such high-quality renderings may be further used for intelligent brain perfusion abnormality identification and computer-aided diagnosis of selected types of pathologies.

  4. Fusing Panchromatic and SWIR Bands Based on Cnn - a Preliminary Study Over WORLDVIEW-3 Datasets

    NASA Astrophysics Data System (ADS)

    Guo, M.; Ma, H.; Bao, Y.; Wang, L.

    2018-04-01

    The traditional fusion methods are based on the fact that the spectral ranges of the panchromatic (PAN) and multispectral (MS) bands are almost overlapping. In this paper, we propose a new pan-sharpening method for the fusion of PAN and SWIR (short-wave infrared) bands, whose spectral coverages do not overlap. This problem is addressed with a convolutional neural network (CNN) trained on a WorldView-3 dataset. A CNN can learn the complex relationship among bands and thus alleviate spectral distortion. In our network, we use a simple three-layer architecture with 16 × 16 kernels to conduct the experiment. Each layer uses a different receptive field: the first two layers compute 512 feature maps using 16 × 16 and 1 × 1 receptive fields, respectively, and the third layer uses an 8 × 8 receptive field. The fusion results are optimized by continued training. For assessment, four evaluation indexes (entropy, CC, SAM, and UIQI) are selected, combining subjective visual inspection with quantitative evaluation. The preliminary experimental results demonstrate that the fusion algorithm can effectively enhance spatial information. Unfortunately, the fused image exhibits spectral distortion and cannot fully maintain the spectral information of the SWIR image.
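
    The three-layer pattern (feature extraction, 1 × 1 cross-map mixing, reconstruction) can be sketched with plain numpy. The kernel sizes and map counts below are scaled-down stand-ins for the 16 × 16 / 1 × 1 / 8 × 8 receptive fields and 512 maps described above, and the weights are random, so this shows only the data flow, not a trained network:

```python
import numpy as np

def conv2d(x, k):
    """'Same' 2-D convolution of a single-channel image with an odd kernel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def three_layer_net(pan, kernels1, mix_weights, kernel3):
    # layer 1: feature extraction (several maps), ReLU
    f1 = [np.maximum(conv2d(pan, k), 0) for k in kernels1]
    # layer 2: 1x1 mixing across maps, ReLU
    f2 = [np.maximum(sum(w * m for w, m in zip(ws, f1)), 0)
          for ws in mix_weights]
    # layer 3: reconstruction to a single fused band
    return sum(conv2d(m, kernel3) for m in f2)

rng = np.random.default_rng(0)
pan = rng.random((16, 16))
k1 = [rng.standard_normal((5, 5)) * 0.1 for _ in range(4)]
mix = rng.standard_normal((2, 4)) * 0.5       # 2 maps, each mixing 4 inputs
out = three_layer_net(pan, k1, mix, rng.standard_normal((3, 3)) * 0.1)
print(out.shape)  # same spatial size as the input
```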

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankovich, N.J.; Lambert, T.; Zrimec, T.

    A project is underway to develop automated methods of fusing cerebral magnetic resonance angiography (MRA) and x-ray angiography (XRA) for creating accurate visualizations used in planning treatment of vascular disease. The authors have developed a vascular phantom suitable for testing segmentation and fusion algorithms with either derived images (pseudo-MRA/pseudo-XRA) or actual MRA or XRA image sequences. The initial unilateral arterial phantom design, based on normal human anatomy, contains 48 tapering vascular segments with lumen diameters from 2.5 millimeter to 0.25 millimeter. The initial phantom used rapid prototyping technology (stereolithography) with a 0.9 millimeter vessel wall fabricated in an ultraviolet-cured plastic. The model fabrication resulted in a hollow vessel model comprising the internal carotid artery, the ophthalmic artery, and the proximal segments of the anterior, middle, and posterior cerebral arteries. The complete model was fabricated but the model's lumen could not be cleared for vessels with less than 1 millimeter diameter. Measurements of selected vascular outer diameters as judged against the CAD specification showed an accuracy of 0.14 mm and precision (standard deviation) of 0.15 mm. The plastic vascular model produced provides a fixed geometric framework for the evaluation of imaging protocols and the development of algorithms for both segmentation and fusion.

  6. Change Detection of Remote Sensing Images by Dt-Cwt and Mrf

    NASA Astrophysics Data System (ADS)

    Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.

    2017-05-01

    To address the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-Tree Complex Wavelet Transform (DT-CWT) and a Markov Random Field (MRF) model. The method first performs multi-scale decomposition of the difference image by the DT-CWT and extracts the change characteristics in high-frequency regions using an MRF-based segmentation algorithm. It then estimates the final maximum a posteriori (MAP) solution according to the iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the segmentation results of each layer using the proposed fusion rule to obtain the mask of the final change detection result. Experimental results show that the proposed method achieves higher precision and strong robustness.

  7. Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy.

    PubMed

    Huang, Xiaoshuai; Fan, Junchao; Li, Liuju; Liu, Haosen; Wu, Runlong; Wu, Yi; Wei, Lisi; Mao, Heng; Lal, Amit; Xi, Peng; Tang, Liqiang; Zhang, Yunfeng; Liu, Yanmei; Tan, Shan; Chen, Liangyi

    2018-06-01

    To increase the temporal resolution and maximal imaging time of super-resolution (SR) microscopy, we have developed a deconvolution algorithm for structured illumination microscopy based on Hessian matrixes (Hessian-SIM). It uses the continuity of biological structures in multiple dimensions as a priori knowledge to guide image reconstruction and attains artifact-minimized SR images with less than 10% of the photon dose used by conventional SIM while substantially outperforming current algorithms at low signal intensities. Hessian-SIM enables rapid imaging of moving vesicles or loops in the endoplasmic reticulum without motion artifacts and with a spatiotemporal resolution of 88 nm and 188 Hz. Its high sensitivity allows the use of sub-millisecond excitation pulses followed by dark recovery times to reduce photobleaching of fluorescent proteins, enabling hour-long time-lapse SR imaging of actin filaments in live cells. Finally, we observed the structural dynamics of mitochondrial cristae and structures that, to our knowledge, have not been observed previously, such as enlarged fusion pores during vesicle exocytosis.

  8. Automated identification of retinal vessels using a multiscale directional contrast quantification (MDCQ) strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Yi; Zhang, Xinyuan; Wang, Ningli, E-mail: wningli@vip.163.com, E-mail: puj@upmc.edu

    2014-09-15

    Purpose: A novel algorithm is presented to automatically identify the retinal vessels depicted in color fundus photographs. Methods: The proposed algorithm quantifies the contrast of each pixel in retinal images at multiple scales and fuses the resulting contrast images in a progressive manner by leveraging their spatial difference and continuity. The multiscale strategy deals with the variety of retinal vessels in width, intensity, resolution, and orientation; the progressive fusion combines consecutive images while avoiding a sudden fusion of image noise and/or artifacts in space. To quantitatively assess the performance of the algorithm, we tested it on three publicly available databases, namely, DRIVE, STARE, and HRF. The agreement between the computer results and the manual delineation in these databases was quantified by computing their overlap in both area and length (centerline). The measures include sensitivity, specificity, and accuracy. Results: For the DRIVE database, the sensitivities in identifying vessels in area and length were around 90% and 70%, respectively, the accuracy in pixel classification was around 99%, and the precisions in terms of both area and length were around 94%. For the STARE database, the sensitivities in identifying vessels were around 90% in area and 70% in length, and the accuracy in pixel classification was around 97%. For the HRF database, the sensitivities in identifying vessels were around 92% in area and 83% in length for the healthy subgroup, around 92% in area and 75% in length for the glaucomatous subgroup, and around 91% in area and 73% in length for the diabetic retinopathy subgroup. For all three subgroups, the accuracy was around 98%. Conclusions: The experimental results demonstrate that the developed algorithm is capable of identifying retinal vessels depicted in color fundus photographs in a relatively reliable manner.

  9. A testbed for architecture and fidelity trade studies in the Bayesian decision-level fusion of ATR products

    NASA Astrophysics Data System (ADS)

    Erickson, Kyle J.; Ross, Timothy D.

    2007-04-01

    Decision-level fusion is an appealing extension to automatic/assisted target recognition (ATR) as it is a low-bandwidth technique bolstered by a strong theoretical foundation that requires no modification of the source algorithms. Despite the relative simplicity of decision-level fusion, there are many options for fusion application and fusion algorithm specifications. This paper describes a tool that allows trade studies and optimizations across these many options, by feeding an actual fusion algorithm via models of the system environment. Models and fusion algorithms can be specified and then exercised many times, with accumulated results used to compute performance metrics such as probability of correct identification. Performance differences between the best of the contributing sources and the fused result constitute examples of "gain." The tool, constructed as part of the Fusion for Identifying Targets Experiment (FITE) within the Air Force Research Laboratory (AFRL) Sensors Directorate ATR Thrust, finds its main use in examining the relationships among conditions affecting the target, prior information, fusion algorithm complexity, and fusion gain. ATR as an unsolved problem provides the main challenges to fusion in its high cost and relative scarcity of training data, its variability in application, the inability to produce truly random samples, and its sensitivity to context. This paper summarizes the mathematics underlying decision-level fusion in the ATR domain and describes a MATLAB-based architecture for exploring the trade space thus defined. Specific dimensions within this trade space are delineated, providing the raw material necessary to define experiments suitable for multi-look and multi-sensor ATR systems.

  10. Evaluating total inorganic nitrogen in coastal waters through fusion of multi-temporal RADARSAT-2 and optical imagery using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale

    2014-12-01

    Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research explored an approach to estimate the nutrient fertility of coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March of 2009 and May and August of 2010, a sequence of multi-temporal in situ data, CCD images from China's HJ-1 satellite, and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI), and HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the spatiotemporal distribution of TIN. TIN values decreased from coastal waters to open water areas, and TIN values in the northeast were higher than those in the southwest of the study area. The maximum TIN values occurred in May. Additionally, the accuracy of TIN estimation was significantly improved when the SAR and optical data were used in combination rather than either data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.
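
    The fusion step above trains a random forest on the combined SAR and optical predictors. A scikit-learn sketch with synthetic data (the four predictor columns mimic the reflectance, NDSI, HV, and VH inputs, and the TIN response is an invented linear-plus-noise surrogate, not real measurements):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 200
# Columns: single-band reflectance, NDSI, HV, VH (all synthetic)
X = rng.random((n, 4))
# Hypothetical TIN response depending on all four inputs, plus noise
y = 2 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + 0.3 * X[:, 3] \
    + 0.05 * rng.standard_normal(n)

# Fit on 150 samples, predict the held-out 50
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[:150], y[:150])
pred = rf.predict(X[150:])
print(pred.shape, round(rf.score(X[150:], y[150:]), 2))
```

    The same pattern applies unchanged when the predictor matrix mixes radar backscatter with optical reflectance: the forest handles the heterogeneous inputs without any manual rescaling.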

  11. SU-E-J-97: Quality Assurance of Deformable Image Registration Algorithms: How Realistic Should Phantoms Be?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenz, D; Stathakis, S; Kirby, N

    Purpose: Deformable image registration (DIR) has widespread uses in radiotherapy for applications such as dose accumulation studies, multi-modality image fusion, and organ segmentation. The quality assurance (QA) of such algorithms, however, remains largely unimplemented. This work aims to determine how detailed a physical phantom needs to be to accurately perform QA of a DIR algorithm. Methods: Virtual prostate and head-and-neck phantoms, made from patient images, were used for this study. Both sets consist of an undeformed and deformed image pair. The images were processed to create additional image pairs with one through five homogeneous tissue levels using Otsu’s method. Realistic noise was then added to each image. The DIR algorithms from MIM and Velocity (Deformable Multipass) were applied to the original phantom images and the processed ones. The resulting deformations were then compared to the known warping. A higher number of tissue levels creates more contrast in an image and enables DIR algorithms to produce more accurate results. For this reason, error (distance between predicted and known deformation) is utilized as a metric to evaluate how many levels are required for a phantom to be a realistic patient proxy. Results: For the prostate image pairs, the mean error decreased from 1–2 tissue levels and remained constant for 3+ levels. The mean error reduction was 39% and 26% for Velocity and MIM respectively. For head and neck, mean error fell similarly through 2 levels and flattened with total reduction of 16% and 49% for Velocity and MIM. For Velocity, 3+ levels produced comparable accuracy as the actual patient images, whereas MIM showed further accuracy improvement. Conclusion: The number of tissue levels needed to produce an accurate patient proxy depends on the algorithm. For Velocity, three levels were enough, whereas five was still insufficient for MIM.

  12. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through the fusion of an inertial sensor and a vision sensor. Owing to the low sampling rates of web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of algorithms developed for calibrating the two relative rotations of the system using only one reference image. Next, the stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we show that ego-motion tracking can be greatly enhanced using the proposed error correction model.
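
    The Allan variance used above to identify inertial noise terms averages the signal over clusters of m samples and halves the mean squared difference of adjacent cluster means. A minimal sketch on synthetic white noise, for which the Allan variance falls roughly as 1/m:

```python
import numpy as np

def allan_variance(x, m):
    """Allan variance of signal x for cluster size m (in samples)."""
    n = len(x) // m
    means = x[:n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
white = rng.standard_normal(10000)                 # white-noise stand-in
av_10 = allan_variance(white, 10)
av_100 = allan_variance(white, 100)
print(av_10 / av_100)  # roughly 10 for white noise
```

    Plotting the Allan variance against m on a log-log scale gives the characteristic slopes used to separate noise types (e.g. slope −1 for white noise, as the ratio above suggests).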

  13. Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes

    PubMed Central

    Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro

    2016-01-01

    In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a change detection problem. For this purpose two sets of SPOT5-PAN images were used, from which Change Detection Indices (CDIs) are calculated. For minimizing radiometric differences, a methodology based on zonal “invariant features” is suggested. The choice of one or another CDI for a change detection process is a subjective task, as each CDI is more or less sensitive to certain types of changes. Likewise, this idea can be employed to create and improve a “change map”, which can be accomplished by means of the CDIs’ informational content. For this purpose, information metrics such as Shannon entropy and “specific information” were used to weight the change and no-change categories contained in a given CDI and were thus introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdfs) that best fit the involved categories were also estimated. By contrast, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work confirms the capabilities of the probabilistic information fusion procedure under these circumstances. PMID:27706048
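
    The Shannon entropy weighting of the change/no-change categories can be illustrated directly. The two probability vectors below are hypothetical posteriors derived from a CDI, not values from the paper:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

# Hypothetical {change, no-change} posteriors from two different CDIs
p_uninformative = np.array([0.5, 0.5])    # CDI cannot separate the classes
p_confident = np.array([0.95, 0.05])      # CDI strongly favours "change"
print(shannon_entropy(p_uninformative))   # 1.0 bit (maximal uncertainty)
print(shannon_entropy(p_confident))       # ~0.29 bit (low uncertainty)
```

    A CDI whose categories have lower entropy carries more information about the change map and can accordingly be given more weight in the fusion.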

  14. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
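
    Pan-sharpening comes in several variants, and the abstract does not specify which was used, so the sketch below uses a simple Brovey-style ratio injection: a low-resolution “chemical” map is modulated by the ratio of a sharper intensity image to its local mean, transferring high-frequency detail without changing large-scale chemistry:

```python
import numpy as np

def ratio_pansharpen(low_res_chem, high_res_intensity, eps=1e-8):
    """Sharpen a chemical map with a co-registered high-resolution
    intensity image by injecting its high-frequency ratio (Brovey-style)."""
    h, w = high_res_intensity.shape
    # local mean via a 3x3 box filter, edges handled by replicate padding
    pad = np.pad(high_res_intensity, 1, mode='edge')
    local = sum(pad[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    return low_res_chem * (high_res_intensity / (local + eps))

chem = np.full((6, 6), 10.0)              # flat low-res chemistry
sem = np.ones((6, 6)); sem[:, 3:] = 2.0   # a sharp edge in the SEM image
sharp = ratio_pansharpen(chem, sem)
print(sharp[0, 0], sharp[0, 3], sharp[0, 5])
```

    Far from the edge the ratio is 1 and the chemistry is untouched; along the SEM edge the ratio exceeds 1 and the fused map picks up the sharp transition.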

  15. A review of data fusion techniques.

    PubMed

    Castanedo, Federico

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.

  16. Local-global classifier fusion for screening chest radiographs

    NASA Astrophysics Data System (ADS)

    Ding, Meng; Antani, Sameer; Jaeger, Stefan; Xue, Zhiyun; Candemir, Sema; Kohli, Marc; Thoma, George

    2017-03-01

    Tuberculosis (TB) is a severe comorbidity of HIV and chest x-ray (CXR) analysis is a necessary step in screening for the infective disease. Automatic analysis of digital CXR images for detecting pulmonary abnormalities is critical for population screening, especially in medical resource constrained developing regions. In this article, we describe steps that improve previously reported performance of NLM's CXR screening algorithms and help advance the state of the art in the field. We propose a local-global classifier fusion method where two complementary classification systems are combined. The local classifier focuses on subtle and partial presentation of the disease leveraging information in radiology reports that roughly indicates locations of the abnormalities. In addition, the global classifier models the dominant spatial structure in the gestalt image using GIST descriptor for the semantic differentiation. Finally, the two complementary classifiers are combined using linear fusion, where the weight of each decision is calculated by the confidence probabilities from the two classifiers. We evaluated our method on three datasets in terms of the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity and accuracy. The evaluation demonstrates the superiority of our proposed local-global fusion method over any single classifier.
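
    The confidence-weighted linear fusion step can be sketched as follows. The per-image abnormality probabilities are hypothetical, and the distance of each probability from 0.5 stands in for the classifier's confidence (the paper's exact confidence measure may differ):

```python
import numpy as np

def linear_fuse(p_local, p_global, eps=1e-12):
    """Combine two classifiers' abnormality probabilities, weighting
    each decision by its confidence (distance from the 0.5 boundary)."""
    w1 = np.abs(p_local - 0.5)
    w2 = np.abs(p_global - 0.5)
    return (w1 * p_local + w2 * p_global) / (w1 + w2 + eps)

p_local = np.array([0.9, 0.55, 0.2])    # subtle/partial presentation
p_global = np.array([0.6, 0.45, 0.1])   # gestalt (GIST) classifier
print(linear_fuse(p_local, p_global))
```

    When one classifier is confident and the other is near 0.5, the fused score follows the confident one; when both are equivocal, the result stays near the boundary.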

  17. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002
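
    Likelihood-ratio-based score fusion sums per-feature log-likelihood ratios and decides by the sign of the total. The sketch below assumes, for illustration only, Gaussian class-conditional score distributions with made-up means and variances:

```python
import numpy as np

def llr_fuse(scores, means0, stds0, means1, stds1):
    """Sum per-feature Gaussian log-likelihood ratios log p(s|1)/p(s|0).
    A positive total favours class 1."""
    def logpdf(x, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2)
    return np.sum(logpdf(scores, means1, stds1)
                  - logpdf(scores, means0, stds0))

# Two hypothetical feature scores; class 1 centred at 1.0, class 0 at 0.0
s = np.array([0.9, 0.8])
total = llr_fuse(s, means0=np.zeros(2), stds0=np.ones(2),
                 means1=np.ones(2), stds1=np.ones(2))
print(total)  # positive: the scores sit near the class-1 means
```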

  18. Orbital navigation, docking and obstacle avoidance as a form of three dimensional model-based image understanding

    NASA Technical Reports Server (NTRS)

    Beyer, J.; Jacobus, C.; Mitchell, B.

    1987-01-01

    Range imagery from a laser scanner can provide sufficient information for docking and obstacle avoidance procedures to be performed automatically. Three-dimensional model-based computer vision algorithms in development can perform these tasks even with targets which may not be cooperative (that is, objects without special targets or markers to provide unambiguous location points). Roll, pitch and yaw of the vehicle can be taken into account as image scanning takes place, so that these can be corrected when the image is converted from egocentric to world coordinates. Other attributes of the sensor, such as the registered reflectance and texture channels, provide additional data sources for algorithm robustness. Temporal fusion of sensor images can take place in the world coordinate domain, allowing for the building of complex maps in three-dimensional space.
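
    Converting scanner returns from egocentric to world coordinates with roll/pitch/yaw correction amounts to applying a rotation matrix plus the vehicle position. The Z-Y-X (yaw-pitch-roll) Euler convention used here is an assumption for illustration:

```python
import numpy as np

def rpy_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (Z-Y-X convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def ego_to_world(points, roll, pitch, yaw, position):
    """Transform (N, 3) egocentric points into world coordinates."""
    return points @ rpy_matrix(roll, pitch, yaw).T + position

pt = np.array([[1.0, 0.0, 0.0]])
# a 90-degree yaw rotates the sensor's x-axis onto the world y-axis
print(ego_to_world(pt, 0.0, 0.0, np.pi / 2, np.zeros(3)))
```

    Successive scans transformed this way land in a common frame, which is what makes the temporal fusion and map building described above possible.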

  19. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua; Bai, Wenjia

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). 
In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. Conclusions: The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
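
    Majority voting, the simplest of the label fusion strategies compared above, assigns each voxel the label chosen by most of the propagated atlas labelings. A per-voxel numpy sketch on a toy 1-D "volume":

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas labelings by per-voxel majority vote.
    label_maps: (n_atlases, ...) integer label array."""
    stack = np.asarray(label_maps)
    n_labels = stack.max() + 1
    # count the votes each label receives at every voxel
    counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)

# three atlases voting on a three-voxel toy volume
votes = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 1, 2]])
print(majority_vote(votes))  # [0 1 2]
```

    Joint label fusion and locally weighted voting refine this by weighting each atlas per voxel, which is why they outperform the unweighted vote in the study above.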

  20. Automated target classification in high resolution dual frequency sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2007-04-01

    An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of 2 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed. The Box-Cox transformation consists of raising the features to a to-be-determined power. Third, a repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms summing, baseline single-stage Volterra and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated, by showing that the algorithm yields similar performance with the training and test sets.
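
    The Box-Cox step raises features to a to-be-determined power. A standard way to determine that power (used here for illustration; not necessarily the authors' procedure) is to maximize the Gaussian profile log-likelihood of the transformed data:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transform of positive data x."""
    if abs(lam) < 1e-8:
        return np.log(x)                 # the lambda -> 0 limit
    return (x**lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Gaussian profile log-likelihood of the transformed data."""
    y = boxcox(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * np.sum(np.log(x))

rng = np.random.default_rng(0)
x = np.exp(rng.standard_normal(2000))    # log-normal "feature scores"
lams = np.linspace(-1.0, 1.0, 201)
best = lams[np.argmax([boxcox_loglik(x, l) for l in lams])]
print(best)  # near 0: the log transform best normalises log-normal data
```

    After the transform, the features are closer to Gaussian, which is the working assumption behind the LLRT fusion stage.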

  1. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision Laboratory and Electron Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
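
    A ROC curve of the kind used to characterize these results is traced by sweeping a threshold over sorted detector scores and accumulating detection and false-alarm rates. The scores and labels below are toy values, not data from the paper:

```python
import numpy as np

def roc_curve(scores, labels):
    """ROC points (false-alarm rate, detection rate) obtained by
    sweeping a decision threshold down the sorted scores."""
    order = np.argsort(-scores)          # most confident detections first
    labels = labels[order]
    tp = np.cumsum(labels)               # true detections so far
    fp = np.cumsum(1 - labels)           # false alarms so far
    return fp / fp[-1], tp / tp[-1]

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
labels = np.array([1, 1, 0, 1, 0, 0])    # 1 = mine, 0 = clutter
pfa, pd = roc_curve(scores, labels)
print(pd)  # detection rate rises as true targets are passed
```

    Statistical significance then comes down to sample size: each ROC point is a binomial proportion, so its confidence band narrows only as the number of targets and clutter objects grows.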

  2. Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS

    NASA Astrophysics Data System (ADS)

    Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan

    2018-03-01

    Because a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, which in turn imposes new requirements on data processing. This paper presents a new dust concentration measurement system which contains a MEMS ultrasonic sensor and a MEMS capacitance sensor, and presents a new data fusion algorithm for this multi-mode dust concentration measurement system. After analyzing the relation between the data from the composite measurement methods, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model for dust concentration measurement. Test results show that the data fusion algorithm enables rapid and accurate concentration detection.
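
    A scalar Kalman filter fusing two sensor channels with different noise variances can be sketched as follows. The random-walk state model, the noise levels, and the sequential two-update structure are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def kalman_fuse(z1, z2, r1, r2, q=1e-4):
    """Scalar Kalman filter fusing two measurement streams.
    z1, z2: measurements; r1, r2: their noise variances; q: process noise."""
    x, p = z1[0], r1                      # initialise from the first reading
    out = []
    for a, b in zip(z1, z2):
        p += q                            # predict (random-walk state)
        for z, r in ((a, r1), (b, r2)):   # sequential measurement updates
            k = p / (p + r)               # Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
true = 5.0                                # hypothetical dust concentration
z1 = true + 0.5 * rng.standard_normal(200)   # noisier ultrasonic channel
z2 = true + 0.2 * rng.standard_normal(200)   # cleaner capacitance channel
est = kalman_fuse(z1, z2, r1=0.25, r2=0.04)
print(abs(est[-1] - true))                # small residual error
```

    The two sequential updates per step are equivalent to a single vector-valued update, and the steady-state error falls below either sensor's individual noise level.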

  3. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on the Harris corner. First, the Harris operator, combined with a low-pass smoothing filter constructed from spline functions and a circular window search, is applied to detect image corners; this yields better localisation performance and effectively avoids corner clustering. Second, correlation-based feature registration is used to find registration pairs, and false registrations are removed with random sample consensus (RANSAC). Finally, a weighted triangulation method combined with an interpolation function is used for image fusion. Experiments show that this method can effectively remove stitching ghosts and improve the accuracy of image mosaicking.
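
    The Harris response at the heart of the corner-detection step can be sketched in a few lines of numpy. Note this is only the core operator: the paper's spline-based low-pass filter and circular window search are replaced here by a plain 3x3 box window, and the test image is synthetic.

```python
import numpy as np

# Harris corner response R = det(M) - k*trace(M)^2, where M is the
# locally smoothed structure tensor of image gradients.

def box3(a):
    """3x3 box sum via zero padding (stand-in for the paper's filter)."""
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2

# Synthetic image: a bright square. Its corners should score higher
# than points along its edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[5, 5] > R[5, 10])   # corner response exceeds edge response
```

On this square, the corner pixel has two strong gradient directions (positive determinant), while the edge pixel has only one, so its response is negative.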

  4. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which benefits applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. After various image processing steps, the algorithm applies a winner-take-all method to fuse disparities computed in different directions and obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
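
    The winner-take-all fusion step can be sketched independently of the GPU implementation: each camera pair contributes a matching-cost volume, the volumes are aggregated, and the cheapest disparity wins per pixel. The array shapes and random costs below are illustrative, not the paper's data.

```python
import numpy as np

# Winner-take-all fusion over per-pair matching costs. In a trinocular
# rig each camera pair yields a cost volume cost[d, y, x]; summing the
# volumes and taking argmin over d picks the disparity most consistent
# across pairs.

def wta_fusion(cost_volumes):
    """Sum cost volumes from each stereo pair, take per-pixel argmin."""
    total = np.sum(cost_volumes, axis=0)   # aggregate over camera pairs
    return np.argmin(total, axis=0)        # winner-take-all over disparities

rng = np.random.default_rng(0)
pairs, disparities, h, w = 2, 8, 4, 4
costs = rng.random((pairs, disparities, h, w))
costs[:, 3] -= 2.0                         # make disparity 3 cheapest everywhere
depth = wta_fusion(costs)
print(np.all(depth == 3))
```

Because each pixel's decision is independent, this argmin maps naturally onto one GPU thread per pixel, which is what makes the method attractive for real-time GPGPU use.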

  5. The new approach for infrared target tracking based on the particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Hang; Han, Hong-xia

    2011-08-01

    Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video surveillance, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can handle nonlinear and non-Gaussian problems and is therefore widely used. Various proposal densities in the particle filter algorithm keep it valid when target occlusion occurs or when tracking must recover from failure, but capturing changes in the state space requires enough particles for adequate sampling, and this number grows exponentially with the state dimension, leading to a greatly increased computational burden. In this paper, the particle filter algorithm and mean shift are combined, addressing the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum against complex backgrounds. From the two perspectives of adaptive multiple-information fusion and combination with the particle filter framework, we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple-information fusion. After analyzing the infrared characteristics of the target, the algorithm first extracts the target's gray-level and edge features and guides both with the target's motion information, yielding motion-guided grayscale features and motion-guided edge features. A new adaptive fusion mechanism is then proposed that adaptively integrates these two new features into the mean shift tracking framework. Finally, we design an automatic target-model updating strategy to further improve tracking performance. Experimental results show that this algorithm compensates for the excessive computation of the particle filter and effectively overcomes the tendency of mean shift to fall into local extrema rather than the global maximum. Because it fuses gray-level and target motion information, the approach also suppresses background interference, ultimately improving the stability and real-time performance of target tracking.
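
    The particle-filter loop (predict, weight, resample) whose cost the abstract discusses can be sketched in a toy 1-D setting; the particle count, noise levels, and motion model below are illustrative, not the paper's.

```python
import numpy as np

# Toy particle filter tracking a 1-D target position from noisy
# measurements: predict with a random-walk motion model, weight by a
# Gaussian measurement likelihood, then resample by weight.

rng = np.random.default_rng(42)
n, true_pos = 500, 5.0
particles = rng.uniform(0.0, 10.0, n)        # initial belief over position

for _ in range(20):
    z = true_pos + rng.normal(0.0, 0.5)      # noisy observation
    particles += rng.normal(0.0, 0.2, n)     # predict: random-walk motion
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)  # Gaussian likelihood
    w /= w.sum()
    # resample: draw particles in proportion to their weights
    particles = particles[rng.choice(n, size=n, p=w)]

estimate = particles.mean()
print(round(estimate, 2))
```

The cost the abstract highlights is visible here: every iteration touches all n particles, and the n needed for adequate coverage grows exponentially with the state dimension, which motivates the hybrid mean shift design.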

  6. Recent Advances in Registration, Integration and Fusion of Remotely Sensed Data: Redundant Representations and Frames

    NASA Technical Reports Server (NTRS)

    Czaja, Wojciech; Le Moigne-Stewart, Jacqueline

    2014-01-01

    In recent years, sophisticated mathematical techniques have been successfully applied to the field of remote sensing to produce significant advances in applications such as registration, integration and fusion of remotely sensed data. Registration, integration and fusion of multiple source imagery are the most important issues when dealing with Earth Science remote sensing data where information from multiple sensors, exhibiting various resolutions, must be integrated. Issues ranging from different sensor geometries, different spectral responses, differing illumination conditions, different seasons, and various amounts of noise need to be dealt with when designing an image registration, integration or fusion method. This tutorial will first define the problems and challenges associated with these applications and then will review some mathematical techniques that have been successfully utilized to solve them. In particular, we will cover topics on geometric multiscale representations, redundant representations and fusion frames, graph operators, diffusion wavelets, as well as spatial-spectral and operator-based data fusion. All the algorithms will be illustrated using remotely sensed data, with an emphasis on current and operational instruments.

  7. A Review of Data Fusion Techniques

    PubMed Central

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion. PMID:24288502

  8. Membrane Topology Mapping of the Na+-Pumping NADH: Quinone Oxidoreductase from Vibrio cholerae by PhoA- Green Fluorescent Protein Fusion Analysis▿

    PubMed Central

    Duffy, Ellen B.; Barquera, Blanca

    2006-01-01

    The membrane topologies of the six subunits of Na+-translocating NADH:quinone oxidoreductase (Na+-NQR) from Vibrio cholerae were determined by a combination of topology prediction algorithms and the construction of C-terminal fusions. Fusion expression vectors contained either bacterial alkaline phosphatase (phoA) or green fluorescent protein (gfp) genes as reporters of periplasmic and cytoplasmic localization, respectively. A majority of the topology prediction algorithms did not predict any transmembrane helices for NqrA. A lack of PhoA activity when fused to the C terminus of NqrA and the observed fluorescence of the green fluorescent protein C-terminal fusion confirm that this subunit is localized to the cytoplasmic side of the membrane. Analysis of four PhoA fusions for NqrB indicates that this subunit has nine transmembrane helices and that residue T236, the binding site for flavin mononucleotide (FMN), resides in the cytoplasm. Three fusions confirm that the topology of NqrC consists of two transmembrane helices with the FMN binding site at residue T225 on the cytoplasmic side. Fusion analysis of NqrD and NqrE showed almost mirror image topologies, each consisting of six transmembrane helices; the results for NqrD and NqrE are consistent with the topologies of Escherichia coli homologs YdgQ and YdgL, respectively. The NADH, flavin adenine dinucleotide, and Fe-S center binding sites of NqrF were localized to the cytoplasm. The determination of the topologies of the subunits of Na+-NQR provides valuable insights into the location of cofactors and identifies targets for mutagenesis to characterize this enzyme in more detail. The finding that all the redox cofactors are localized to the cytoplasmic side of the membrane is discussed. PMID:17041063

  9. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details from high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military operations, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
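
    The generic transform-domain fusion rule underlying both the wavelet and contourlet approaches can be sketched compactly: average the low-frequency (approximation) band and keep the larger-magnitude coefficient in each high-frequency (detail) band. For self-containment a one-level Haar wavelet stands in for the contourlet transform here; this illustrates the fusion rule, not the paper's transform.

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform: approximation + 3 detail bands."""
    a = a.astype(float)
    p, q = a[0::2, 0::2], a[0::2, 1::2]
    r, s = a[1::2, 0::2], a[1::2, 1::2]
    return (p + q + r + s) / 4, (p + q - r - s) / 4, \
           (p - q + r - s) / 4, (p - q - r + s) / 4

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img_a, img_b):
    A, B = haar2(img_a), haar2(img_b)
    ll = (A[0] + B[0]) / 2                 # average approximation band
    details = [np.where(np.abs(a) >= np.abs(b), a, b)  # max-abs details
               for a, b in zip(A[1:], B[1:])]
    return ihaar2(ll, *details)

x = np.arange(16.0).reshape(4, 4)
print(np.allclose(fuse(x, x), x))   # fusing an image with itself returns it
```

The max-abs rule is what preserves the sharpest edges from either input; contourlets improve on wavelets mainly by representing curved edges with fewer, larger coefficients, so the same rule keeps more of them.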

  10. The role of imaging based prostate biopsy morphology in a data fusion paradigm for transducing prognostic predictions

    NASA Astrophysics Data System (ADS)

    Khan, Faisal M.; Kulikowski, Casimir A.

    2016-03-01

    A major focus area for precision medicine is in managing the treatment of newly diagnosed prostate cancer patients. For patients with a positive biopsy, clinicians aim to develop an individualized treatment plan based on a mechanistic understanding of the disease factors unique to each patient. Recently, there has been a movement towards a multi-modal view of the cancer through the fusion of quantitative information from multiple sources, imaging and otherwise. Simultaneously, there have been significant advances in machine learning methods for medical prognostics which integrate a multitude of predictive factors to develop an individualized risk assessment and prognosis for patients. An emerging area of research is in semi-supervised approaches which transduce the appropriate survival time for censored patients. In this work, we apply a novel semi-supervised approach for support vector regression to predict the prognosis for newly diagnosed prostate cancer patients. We integrate clinical characteristics of a patient's disease with imaging derived metrics for biomarker expression as well as glandular and nuclear morphology. In particular, our goal was to explore the performance of nuclear and glandular architecture within the transduction algorithm and assess their predictive power when compared with the Gleason score manually assigned by a pathologist. Our analysis in a multi-institutional cohort of 1027 patients indicates that not only do glandular and morphometric characteristics improve the predictive power of the semi-supervised transduction algorithm; they perform better when the pathological Gleason is absent. This work represents one of the first assessments of quantitative prostate biopsy architecture versus the Gleason grade in the context of a data fusion paradigm which leverages a semi-supervised approach for risk prognosis.

  11. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  12. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    PubMed

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

    Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move that fuses sensors and Wi-Fi on smartphones. The main innovations are an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment carried out for verification in a typical indoor environment shows an average positioning error on the move of 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals and improve the accuracy and stability of indoor continuous positioning on the move.
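
    The idea of process-level fusion of PDR with trusted Wi-Fi fixes can be sketched as follows: dead reckoning propagates the position stride by stride, and each Wi-Fi fix at a trusted point pulls the track back with a trust weight. The step model, trust value, and fix positions below are illustrative assumptions, not the TCPF algorithm itself.

```python
import math

# PDR propagation with Wi-Fi corrections at trusted points.
# steps: (stride_length_m, heading_rad) per stride.
# wifi_fixes: {step_index: (x, y)} Wi-Fi position fixes at trusted points.

def fuse_track(start, steps, wifi_fixes, trust=0.3):
    x, y = start
    track = [start]
    for i, (length, heading) in enumerate(steps):
        x += length * math.cos(heading)       # PDR prediction
        y += length * math.sin(heading)
        if i in wifi_fixes:                   # trusted Wi-Fi point: blend
            wx, wy = wifi_fixes[i]
            x, y = x + trust * (wx - x), y + trust * (wy - y)
        track.append((x, y))
    return track

steps = [(0.7, 0.0)] * 10                     # ten 0.7 m strides heading east
track = fuse_track((0.0, 0.0), steps, {4: (3.0, 0.4), 9: (7.0, 0.0)})
print(tuple(round(c, 3) for c in track[-1]))
```

The trust weight plays the role of the trust state in the paper: a higher value for fixes deemed reliable snaps the track to Wi-Fi, while a low value lets PDR dominate, bounding the drift each source contributes.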

  13. Research on the Automatic Fusion Strategy of Fixed Value Boundary Based on the Weak Coupling Condition of Grid Partition

    NASA Astrophysics Data System (ADS)

    Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.

    2018-03-01

    With the continuous strengthening of power grids, the network structure is becoming more and more complicated. Open, regional data modeling is used to complete the calculation of protection settings based on the local region. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the various regions into an integrated fault computing platform that can perform high-accuracy, multi-mode transient stability analysis covering the whole network, handle the consequences of non-single and cascading faults, and build the "first line of defense" of the power grid. The boundary fusion algorithm in this paper is an automatic fusion algorithm based on accurate boundary coupling of networked grid partitions. It takes the actual operation mode as its qualification and completes the boundary coupling of the various weakly coupled partitions in open-loop mode, improving fusion efficiency, truly reflecting the grid's transient stability level, and effectively solving the problems of excessive data volume, difficult partition fusion, and failed fusion due to mutually exclusive conditions. In this paper, the basic principle of the fusion process is introduced first, and the method of boundary fusion customization is then described through scene description. Finally, an example illustrates how the algorithm implements boundary fusion after grid partitioning and verifies its accuracy and efficiency.

  14. A new FOD recognition algorithm based on multi-source information fusion and experiment analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Xiao, Gang

    2011-08-01

    Foreign Object Debris (FOD) is any substance, debris, or article alien to an aircraft or system that could potentially cause serious damage when it appears on an airport runway. Given the airport's complex environment, quick and precise detection of FOD targets on the runway is an important safeguard for aircraft safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on inherent features of FOD is proposed in this paper. First, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar; according to these coordinates, the IR camera then takes target images and background images. Second, the runway's edges, which are straight lines, are extracted from the IR image using the Hough transform, and the potential target region, that is, the runway region, is segmented from the whole image. Third, background subtraction is used to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used for target classification. Experimental results show that this algorithm can effectively reduce computational complexity, satisfy real-time requirements, and achieve high detection and recognition probability.

  15. Multimodal medical information retrieval with unsupervised rank fusion.

    PubMed

    Mourão, André; Martins, Flávio; Magalhães, João

    2015-01-01

    Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Blob-level active-passive data fusion for Benthic classification

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady

    2012-06-01

    We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs with high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved better benthic classification accuracies than those produced with either a single sensor or a pixel-level fusion strategy. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.

  17. Analysis of decision fusion algorithms in handling uncertainties for integrated health monitoring systems

    NASA Astrophysics Data System (ADS)

    Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza

    2012-06-01

    It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. Decision-level fusion is arguably just as beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of decision-fusion methodologies is a crucial step toward successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods, and their performance is tested on decisions generated from synthetic data and from experimental data. A modeling methodology for generating synthetic decisions, the cloud model, is also presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms. For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported in this paper.
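
    Of the candidate methods named above, Dempster's rule of combination is the most compact to sketch. Two "decision makers" assign mass to the hypotheses healthy and damaged and to ignorance (the full frame); the mass values and labels below are illustrative, not the paper's data.

```python
# Dempster's rule of combination for two mass functions over a
# two-hypothesis frame of discernment. 'either' denotes the full frame
# (ignorance). Mass values are illustrative.

def dempster(m1, m2, frame=('healthy', 'damaged')):
    full = frozenset(frame)
    def sets(m):  # map labels to subsets of the frame
        return {frozenset([k]) if k != 'either' else full: v
                for k, v in m.items()}
    s1, s2 = sets(m1), sets(m2)
    combined, conflict = {}, 0.0
    for a, va in s1.items():
        for b, vb in s2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb          # mass on contradictory pairs
    # normalize by (1 - conflict), the defining step of Dempster's rule
    return {tuple(sorted(k)): v / (1.0 - conflict)
            for k, v in combined.items()}

m1 = {'healthy': 0.6, 'damaged': 0.1, 'either': 0.3}
m2 = {'healthy': 0.5, 'damaged': 0.2, 'either': 0.3}
fused = dempster(m1, m2)
print(round(fused[('healthy',)], 3))
```

Note how agreement between the two sources concentrates mass on 'healthy' beyond either input's belief, while the conflict term (here 0.17) is discarded by normalization; the behavior of that normalization under high conflict is one of the characteristics such comparisons examine.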

  18. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWiFS (1000m).

  19. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    PubMed

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. The aim is to extract the relevant laws from complex constitution phenomena and ultimately build a constitution classification system. Traditional identification methods, such as questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the algorithm proposed in this paper can achieve an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.

  20. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    PubMed Central

    Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing

    2017-01-01

    Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed in this paper for the global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. To demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is superior to the other algorithms. The proposed algorithm can also be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers achieve higher accuracy and better stability to some extent. PMID:29085425

  1. The role of image registration in brain mapping

    PubMed Central

    Toga, A.W.; Thompson, P.M.

    2008-01-01

    Image registration is a key step in a great variety of biomedical imaging applications. It provides the ability to geometrically align one dataset with another, and is a prerequisite for all imaging applications that compare datasets across subjects, imaging modalities, or across time. Registration algorithms also enable the pooling and comparison of experimental findings across laboratories, the construction of population-based brain atlases, and the creation of systems to detect group patterns in structural and functional imaging data. We review the major types of registration approaches used in brain imaging today. We focus on their conceptual basis, the underlying mathematics, and their strengths and weaknesses in different contexts. We describe the major goals of registration, including data fusion, quantification of change, automated image segmentation and labeling, shape measurement, and pathology detection. We indicate that registration algorithms have great potential when used in conjunction with a digital brain atlas, which acts as a reference system in which brain images can be compared for statistical analysis. The resulting armory of registration approaches is fundamental to medical image analysis, and in a brain mapping context provides a means to elucidate clinical, demographic, or functional trends in the anatomy or physiology of the brain. PMID:19890483

  2. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

    Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar, and laser radar) are mounted on DLR's research helicopter FHS (flying helicopter simulator) to gather different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into one single comprehensive description of the outside situation. While both TV and IR cameras deliver images with frame rates of 25 Hz or 30 Hz, ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, combined with algorithms that fuse the extracted features with ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and ladar data.

  3. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
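
    The RANSAC step that rejects false SIFT matches can be sketched with a deliberately simplified motion model: a pure translation, which needs only one match per hypothesis (a real mosaic would fit a homography from four). The point coordinates, iteration count, and inlier tolerance are illustrative.

```python
import numpy as np

# RANSAC for a pure-translation model: repeatedly hypothesize a shift
# from one randomly chosen match and keep the shift that explains the
# most matches within a pixel tolerance.

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best_shift, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        shift = dst[i] - src[i]
        inliers = np.sum(np.linalg.norm(src + shift - dst, axis=1) < tol)
        if inliers > best_inliers:
            best_shift, best_inliers = shift, inliers
    return best_shift, best_inliers

src = np.array([[0.0, 0.0], [10, 5], [3, 8], [7, 2], [1, 9]])
dst = src + np.array([4.0, -2.0])             # true shift between images
dst[2] = [50.0, 50.0]                         # one false match (outlier)
shift, inliers = ransac_translation(src, dst)
print(shift, inliers)
```

A hypothesis drawn from the outlier explains only itself, so consensus voting recovers the true shift; the same consensus logic carries over when the model is a full homography.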

  4. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    In a front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. At the same time, however, the difference between the inter-frame images increases, which increases the difficulty of image matching, decreases matching accuracy, and may ultimately lead to failure in solving the depth field. A usual practice is to use a tracking-and-matching method to improve the matching accuracy between images, but this approach is prone to matching drift between images with large intervals, resulting in cumulative matching error, so the accuracy of the solved depth field remains very low. In this paper, we propose a depth field fusion algorithm based on the optimal baseline length. First, we analyze the quantitative relationship between the accuracy of the depth field calculation and the inter-frame baseline length, and find the optimal baseline length through extensive experiments. Second, we introduce the inverse depth filtering technique of sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SfM algorithm in time and space complexity. The optimal baseline obtained from these experiments plays a guiding role in calculating the depth field in front-view monocular vision.

  5. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809

  6. Fusion of MODIS and Landsat-8 Surface Temperature Images: A New Approach

    PubMed Central

    Hazaymeh, Khaled; Hassan, Quazi K.

    2015-01-01

    Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing temporal resolution of Landsat-8 land surface temperature (LST) images by fusing LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS); and implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consisted of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) utilizing the above-mentioned relationship as a function of a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. It revealed that strong linear relationships (i.e., r2, slopes, and intercepts were in the ranges 0.93–0.94, 0.94–0.99, and 2.97–20.07, respectively) existed between the two consecutive MODIS LST images. We evaluated the synthetic LST images qualitatively and found high visual agreement with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images and found strong agreement with the actual Landsat-8 LST images. For example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84–0.90, 0.061–0.080, and 0.003–0.004, respectively. PMID:25730279
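
    The two STI-FM components reduce to a per-pair linear fit and its reuse at the finer resolution. A hedged sketch with synthetic arrays (the array sizes, values, and the least-squares fit via `numpy.polyfit` are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sti_fm_predict(modis_t1, modis_t2, landsat_t1):
    """(i) Fit a linear relationship between co-located MODIS LST images at
    time 1 and time 2; (ii) apply that slope/intercept to the Landsat-8 LST
    at time 1 to synthesize a Landsat-like LST image at time 2."""
    slope, intercept = np.polyfit(modis_t1.ravel(), modis_t2.ravel(), 1)
    return slope * landsat_t1 + intercept

rng = np.random.default_rng(0)
m1 = rng.uniform(290.0, 320.0, size=(8, 8))    # coarse MODIS LST at t1 (K)
m2 = 0.97 * m1 + 10.0                          # t2 follows a linear trend
l1 = rng.uniform(290.0, 320.0, size=(32, 32))  # finer Landsat-8 LST at t1
l2_pred = sti_fm_predict(m1, m2, l1)           # synthetic Landsat LST at t2
```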

  7. Fusion of MODIS and landsat-8 surface temperature images: a new approach.

    PubMed

    Hazaymeh, Khaled; Hassan, Quazi K

    2015-01-01

    Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing temporal resolution of Landsat-8 land surface temperature (LST) images by fusing LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS); and implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consisted of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) utilizing the above-mentioned relationship as a function of a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. It revealed that strong linear relationships (i.e., r2, slopes, and intercepts were in the ranges 0.93-0.94, 0.94-0.99, and 2.97-20.07, respectively) existed between the two consecutive MODIS LST images. We evaluated the synthetic LST images qualitatively and found high visual agreement with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images and found strong agreement with the actual Landsat-8 LST images. For example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84-0.90, 0.061-0.080, and 0.003-0.004, respectively.

  8. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    PubMed

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  9. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  10. Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances

    PubMed Central

    Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.

    2017-01-01

    As the largest ellipsoid (LE) data fusion algorithm can only be applied to two-sensor systems, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into multisensor systems with unknown cross-covariances, and three parallel fusion structures based on different estimate pairing methods are presented and analyzed. In order to assess the influence of fusion structure on fusion performance, two fusion performance assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates a fuser’s actual fused accuracy and its sensitivity to the sensor order, as well as its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve these properties, but also achieve better consistency and computational efficiency. The presented multisensor LE fusers are generally more accurate than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
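
    The covariance intersection (CI) fuser used here as the comparison baseline has a compact closed form that handles unknown cross-covariances. A sketch for two estimates (a standard CI formulation; the example numbers are arbitrary):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """Covariance intersection of two estimates with unknown cross-covariance:
    P^-1 = w*P1^-1 + (1-w)*P2^-1,  x = P*(w*P1^-1 x1 + (1-w)*P2^-1 x2).
    The fused estimate is consistent for any weight w in [0, 1]."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
    return x, P

# Two local estimates that are accurate along different axes.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.4]), np.diag([4.0, 1.0])
xf, Pf = covariance_intersection(x1, P1, x2, P2, w=0.5)
```

    In practice w is chosen by minimizing, e.g., the trace or determinant of the fused covariance.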

  11. Fusion of PET and MRI for Hybrid Imaging

    NASA Astrophysics Data System (ADS)

    Cho, Zang-Hee; Son, Young-Don; Kim, Young-Bo; Yoo, Seung-Schik

    Recently, the development of fusion PET-MRI systems has been actively studied to meet the increasing demand for integrated molecular and anatomical imaging. MRI can provide detailed anatomical information on the brain, such as the locations of gray and white matter, blood vessels, and axonal tracts, with high resolution, while PET can measure molecular and genetic information, such as glucose metabolism, neurotransmitter-neuroreceptor binding and affinity, protein-protein interactions, and gene trafficking among biological tissues. State-of-the-art MRI systems, such as 7.0 T whole-body MRI, can now visualize super-fine structures including neuronal bundles in the pons, fine blood vessels (such as lenticulostriate arteries) without invasive contrast agents, in vivo hippocampal substructures, and the substantia nigra with excellent image contrast. High-resolution PET, known as the High-Resolution Research Tomograph (HRRT), is a brain-dedicated system capable of imaging minute changes of chemicals, such as neurotransmitters and receptors, with high spatial resolution and sensitivity. The synergistic power of the two, i.e., the ultra-high-resolution anatomical information offered by a 7.0 T MRI system combined with the high-sensitivity molecular information offered by HRRT-PET, will significantly elevate the level of our current understanding of the human brain, one of the most delicate, complex, and mysterious biological organs. This chapter introduces MRI, PET, and the PET-MRI fusion system, and discusses its algorithms in detail.

  12. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, followed by combination of the individual segmentations by label fusion. We have compared the Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
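
    Of the compared label-fusion rules, Majority Voting is the simplest and makes the idea concrete: each registered atlas votes one label per voxel. A sketch on tiny toy label maps (STAPLE, SBA, and SIMPLE instead weight or select the atlases rather than treating them equally):

```python
import numpy as np

def majority_vote(label_maps):
    """Majority-voting label fusion: the fused label at each voxel is the
    most frequent label across all registered atlases."""
    stacked = np.stack(label_maps)            # (n_atlases, ...) integer labels
    n_labels = stacked.max() + 1
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

atlases = [np.array([[0, 1], [1, 2]]),
           np.array([[0, 1], [2, 2]]),
           np.array([[0, 0], [1, 2]])]
fused = majority_vote(atlases)                # -> [[0, 1], [1, 2]]
```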

  13. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to ensure that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field-of-view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvements in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  14. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones

    PubMed Central

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-01-01

    Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move that fuses sensors and Wi-Fi on smartphones. The main innovations include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. A verification experiment was carried out in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447
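
    The abstract does not give TCPF's equations, but the process-level idea of correcting dead reckoning only at trusted Wi-Fi points can be sketched in one dimension (the function, the blend weight `alpha`, and the boolean trust flags are all illustrative assumptions, not the paper's trust-chain logic):

```python
def fuse_track(pdr_steps, wifi_fixes, trust, alpha=0.7):
    """Dead-reckon each PDR step; whenever the concurrent Wi-Fi fix is
    flagged as trusted, pull the estimate toward it. Untrusted fixes are
    ignored so unstable Wi-Fi signals cannot corrupt the track."""
    x = 0.0
    track = []
    for step, fix, ok in zip(pdr_steps, wifi_fixes, trust):
        x += step                           # PDR propagation
        if ok:                              # trusted-point correction
            x = alpha * x + (1 - alpha) * fix
        track.append(x)
    return track

# Three 1 m steps; only the last Wi-Fi fix is deemed trustworthy.
track = fuse_track([1.0, 1.0, 1.0], [0.0, 0.0, 3.5], [False, False, True])
```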

  15. SU-E-J-97: Evaluation of Multi-Modality (CT/MR/PET) Image Registration Accuracy in Radiotherapy Planning.

    PubMed

    Sethi, A; Rusu, I; Surucu, M; Halama, J

    2012-06-01

    Evaluate the accuracy of multi-modality image registration in the radiotherapy planning process. A water-filled anthropomorphic head phantom containing eight 'donut-shaped' fiducial markers (3 internal + 5 external) was selected for this study. Seven image sets (3 CTs, 3 MRs, and PET) of the phantom were acquired and fused in a commercial treatment planning system. First, a narrow-slice (0.75 mm) baseline CT scan was acquired (CT1). Subsequently, the phantom was re-scanned with a coarse slice width of 1.5 mm (CT2) and after subjecting the phantom to rotation/displacement (CT3). Next, the phantom was scanned in a 1.5 Tesla MR scanner and three MR image sets (axial T1, axial T2, coronal T1) were acquired at 2 mm slice width. Finally, the phantom and the centers of the fiducials were doped with 18F and a PET scan was performed with 2 mm cubic voxels. All image scans (CT/MR/PET) were fused to the baseline (CT1) data using an automated mutual-information-based fusion algorithm. The difference between centroids of fiducial markers in the various image modalities was used to assess image registration accuracy. CT/CT image registration was superior to CT/MR and CT/PET: the average CT/CT fusion error was found to be 0.64 ± 0.14 mm. Corresponding values for CT/MR and CT/PET fusion were 1.33 ± 0.71 mm and 1.11 ± 0.37 mm. Internal markers near the center of the phantom fused better than external markers placed on the phantom surface. This was particularly true for CT/MR and CT/PET. The inferior quality of external marker fusion indicates possible distortion effects toward the edges of the MR image. Peripheral targets in the PET scan may be subject to parallax error caused by the depth of interaction of photons in the detectors. The current widespread use of multimodality imaging in radiotherapy planning calls for periodic quality assurance of the image registration process. Such studies may help improve safety and accuracy in treatment planning. © 2012 American Association of Physicists in Medicine.
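
    The registration-accuracy metric itself, the distance between paired fiducial centroids in the reference and fused scans, is simple to state in code. A sketch with made-up coordinates in mm:

```python
import numpy as np

def registration_error(centroids_ref, centroids_fused):
    """Per-marker Euclidean distance between corresponding fiducial
    centroids; the mean over markers is the reported fusion error."""
    d = np.linalg.norm(np.asarray(centroids_ref, dtype=float)
                       - np.asarray(centroids_fused, dtype=float), axis=1)
    return d.mean(), d

ref   = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]      # baseline CT1 centroids (mm)
moved = [[0.3, 0.4, 0.0], [10.0, 0.0, 1.0]]      # same markers after fusion
mean_err, per_marker = registration_error(ref, moved)   # per-marker 0.5, 1.0 mm
```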

  16. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    DOE PAGES

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; ...

    2016-06-01

    A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking feature movement through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
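
    The second of the three steps, grouping feature cells into extended features, is essentially connected-component labeling; tracking then matches components that overlap between frames. A self-contained sketch using 4-connected flood fill on a toy mask (illustrative only, not the authors' parallel implementation):

```python
def group_feature_cells(mask):
    """Group locally identified feature cells (truthy entries of a 2-D mask)
    into extended features via 4-connected flood fill. Returns the number
    of features and a same-shaped label grid (0 = background)."""
    rows, cols = len(mask), len(mask[0])
    label = [[0] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not label[r][c]:
                blobs += 1
                stack = [(r, c)]
                label[r][c] = blobs
                while stack:                      # iterative flood fill
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni][nj] and not label[ni][nj]):
                            label[ni][nj] = blobs
                            stack.append((ni, nj))
    return blobs, label

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
n_blobs, labels = group_feature_cells(mask)       # three separate features
```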

  17. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
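
    The Allan variance used above to identify the inertial sensor's stochastic noise terms can be computed from non-overlapping cluster means. A minimal sketch (the white-noise input is synthetic; real IMU logs would replace it):

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance at cluster size m:
    AVAR(m) = 0.5 * mean of squared differences of successive cluster means.
    Plotting AVAR against m separates white noise, bias instability, etc."""
    n = len(x) // m
    means = x[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
white = rng.standard_normal(100_000)      # stand-in for a gyro rate log
# For pure white noise, AVAR falls roughly as 1/m (slope -1 on a log-log plot).
a1, a10 = allan_variance(white, 1), allan_variance(white, 10)
```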

  18. Evaluation of the impact of metal artifacts in CT-based attenuation correction of positron emission tomography scans

    NASA Astrophysics Data System (ADS)

    Wu, Jay; Shih, Cheng-Ting; Chang, Shu-Jun; Huang, Tzung-Chi; Chen, Chuan-Lin; Wu, Tung Hsin

    2011-08-01

    The quantitative ability of PET/CT allows the widespread use in clinical research and cancer staging. However, metal artifacts induced by high-density metal objects degrade the quality of CT images. These artifacts also propagate to the corresponding PET image and cause a false increase of 18F-FDG uptake near the metal implants when the CT-based attenuation correction (AC) is performed. In this study, we applied a model-based metal artifact reduction (MAR) algorithm to reduce the dark and bright streaks in the CT image and compared the differences between PET images with the general CT-based AC (G-AC) and the MAR-corrected-CT AC (MAR-AC). Results showed that the MAR algorithm effectively reduced the metal artifacts in the CT images of the ACR flangeless phantom and two clinical cases. The MAR-AC also removed the false-positive hot spot near the metal implants of the PET images. We conclude that the MAR-AC could be applied in clinical practice to improve the quantitative accuracy of PET images. Additionally, further use of PET/CT fusion images with metal artifact correction could be more valuable for diagnosis.

  19. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

    Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.

  20. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed: a structure information map (SM) and an energy information map (EM), as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and the EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246

  1. A multi-focus image fusion method via region mosaicking on Laplacian pyramids

    PubMed Central

    Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong

    2018-01-01

    In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images captured by a microscope. First, the Sum-Modified-Laplacian is applied to measure the focus of the multi-focus images. Then the density-based region growing algorithm is utilized to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid to supervise region mosaicking on a Laplacian pyramid. The region-level pyramid keeps more of the original information than the pixel level. Experimental results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and reduces the color distortion of the fused images on two datasets. PMID:29771912
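
    The Sum-Modified-Laplacian focus measure used in the first step has a short closed form. A sketch with step size 1 and a whole-image sum (the paper may use a windowed sum and different parameters; the synthetic "sharp" and "blurred" images are illustrative):

```python
import numpy as np

def sum_modified_laplacian(img):
    """Sum-Modified-Laplacian focus measure: at each interior pixel,
    ML = |2I - I_left - I_right| + |2I - I_up - I_down|; summing ML over
    a region scores its sharpness (higher = better focused)."""
    c = img[1:-1, 1:-1]
    ml = (np.abs(2 * c - img[1:-1, :-2] - img[1:-1, 2:]) +
          np.abs(2 * c - img[:-2, 1:-1] - img[2:, 1:-1]))
    return ml.sum()

rng = np.random.default_rng(2)
sharp = rng.uniform(size=(64, 64))
# Crude defocus: 2x2 box blur suppresses the high frequencies SML responds to.
blurred = (sharp[:-1, :-1] + sharp[1:, :-1] + sharp[:-1, 1:] + sharp[1:, 1:]) / 4
in_focus = sum_modified_laplacian(sharp) > sum_modified_laplacian(blurred)
```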

  2. Systematic identification and analysis of frequent gene fusion events in metabolic pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henry, Christopher S.; Lerma-Ortiz, Claudia; Gerdes, Svetlana Y.

    Here, gene fusions are the most powerful type of in silico-derived functional associations. However, many fusion compilations were made when <100 genomes were available, and algorithms for identifying fusions need updating to handle the current avalanche of sequenced genomes. The availability of a large fusion dataset would help probe functional associations and enable systematic analysis of where and why fusion events occur. As a result, here we present a systematic analysis of fusions in prokaryotes. We manually generated two training sets: (i) 121 fusions in the model organism Escherichia coli; (ii) 131 fusions found in B vitamin metabolism. These sets were used to develop a fusion prediction algorithm that captured the training set fusions with only 7% false negatives and 50% false positives, a substantial improvement over existing approaches. This algorithm was then applied to identify 3.8 million potential fusions across 11,473 genomes. The results of the analysis are available in a searchable database. A functional analysis identified 3,000 reactions associated with frequent fusion events and revealed areas of metabolism where fusions are particularly prevalent. In conclusion, customary definitions of fusions were shown to be ambiguous, and a stricter one was proposed. Exploring the genes participating in fusion events showed that they most commonly encode transporters, regulators, and metabolic enzymes. The major rationales for fusions between metabolic genes appear to be overcoming pathway bottlenecks, avoiding toxicity, controlling competing pathways, and facilitating expression and assembly of protein complexes. Finally, our fusion dataset provides powerful clues to decipher the biological activities of domains of unknown function.

  3. Systematic identification and analysis of frequent gene fusion events in metabolic pathways

    DOE PAGES

    Henry, Christopher S.; Lerma-Ortiz, Claudia; Gerdes, Svetlana Y.; ...

    2016-06-24

    Here, gene fusions are the most powerful type of in silico-derived functional associations. However, many fusion compilations were made when <100 genomes were available, and algorithms for identifying fusions need updating to handle the current avalanche of sequenced genomes. The availability of a large fusion dataset would help probe functional associations and enable systematic analysis of where and why fusion events occur. As a result, here we present a systematic analysis of fusions in prokaryotes. We manually generated two training sets: (i) 121 fusions in the model organism Escherichia coli; (ii) 131 fusions found in B vitamin metabolism. These sets were used to develop a fusion prediction algorithm that captured the training set fusions with only 7% false negatives and 50% false positives, a substantial improvement over existing approaches. This algorithm was then applied to identify 3.8 million potential fusions across 11,473 genomes. The results of the analysis are available in a searchable database. A functional analysis identified 3,000 reactions associated with frequent fusion events and revealed areas of metabolism where fusions are particularly prevalent. In conclusion, customary definitions of fusions were shown to be ambiguous, and a stricter one was proposed. Exploring the genes participating in fusion events showed that they most commonly encode transporters, regulators, and metabolic enzymes. The major rationales for fusions between metabolic genes appear to be overcoming pathway bottlenecks, avoiding toxicity, controlling competing pathways, and facilitating expression and assembly of protein complexes. Finally, our fusion dataset provides powerful clues to decipher the biological activities of domains of unknown function.

  4. Image computing techniques to extrapolate data for dust tracking in case of an experimental accident simulation in a nuclear fusion plant.

    PubMed

    Camplani, M; Malizia, A; Gelfusa, M; Barbato, F; Antonelli, L; Poggi, L A; Ciparisse, J F; Salgado, L; Richetta, M; Gaudio, P

    2016-01-01

    In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices; one of the main issues is that these particles can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, that travels transversely to the flow field direction. In the STARDUST facility, the dust moves in the flow and causes variations of the refractive index that can be detected using a CCD camera. The STARDUST fast-camera setup allows dust particles moving in the vessel to be detected and tracked, yielding information about the velocity field of the mobilized dust. In particular, the acquired images are processed so that, in each frame, the moving dust particles are detected by applying a background subtraction technique based on the mixture-of-Gaussians algorithm. The obtained foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles along the experiment. For each particle, a Kalman filter-based tracker is applied; the particle dynamics are described using position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particles' velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.
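
    The per-particle tracker described above is a Kalman filter whose state stacks position, velocity, and acceleration. A one-dimensional sketch of one predict/update cycle (the noise levels `q` and `r` and the measurement sequence are illustrative assumptions; the paper tracks 2-D image positions):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.25):
    """One predict/update cycle for a constant-acceleration model with
    state [position, velocity, acceleration]; only position is measured."""
    F = np.array([[1.0, dt, 0.5 * dt * dt],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])
    x = F @ x                              # predict state
    P = F @ P @ F.T + q * np.eye(3)        # predict covariance
    S = H @ P @ H.T + r                    # innovation covariance
    K = P @ H.T / S                        # Kalman gain
    x = x + (K * (z - H @ x)).ravel()      # update with position measurement
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros(3), np.eye(3)
for z in (1.0, 2.1, 2.9, 4.2):             # noisy positions drifting ~1 px/frame
    x, P = kalman_step(x, P, z)
```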

  5. Image computing techniques to extrapolate data for dust tracking in case of an experimental accident simulation in a nuclear fusion plant

    NASA Astrophysics Data System (ADS)

    Camplani, M.; Malizia, A.; Gelfusa, M.; Barbato, F.; Antonelli, L.; Poggi, L. A.; Ciparisse, J. F.; Salgado, L.; Richetta, M.; Gaudio, P.

    2016-01-01

    In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices; one of the main issues is that these particles can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, that travels transversely to the flow field direction. In the STARDUST facility, the dust moves in the flow and causes variations of the refractive index that can be detected using a CCD camera. The STARDUST fast-camera setup allows dust particles moving in the vessel to be detected and tracked, yielding information about the velocity field of the mobilized dust. In particular, the acquired images are processed so that, in each frame, the moving dust particles are detected by applying a background subtraction technique based on the mixture-of-Gaussians algorithm. The obtained foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles along the experiment. For each particle, a Kalman filter-based tracker is applied; the particle dynamics are described using position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particles' velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.

  6. Approach to explosive hazard detection using sensor fusion and multiple kernel learning with downward-looking GPR and EMI sensor data

    NASA Astrophysics Data System (ADS)

    Pinar, Anthony; Masarik, Matthew; Havens, Timothy C.; Burns, Joseph; Thelen, Brian; Becker, John

    2015-05-01

    This paper explores the effectiveness of an anomaly detection algorithm for downward-looking ground penetrating radar (GPR) and electromagnetic inductance (EMI) data. Threat detection with GPR is challenged by high responses to non-target/clutter objects, leading to a large number of false alarms (FAs), and since the responses of target and clutter signatures are so similar, classifier design is not trivial. We suggest a method based on a Run Packing (RP) algorithm to fuse GPR and EMI data into a composite confidence map to improve detection as measured by the area-under-ROC (NAUC) metric. We examine the value of a multiple kernel learning (MKL) support vector machine (SVM) classifier using image features such as histogram of oriented gradients (HOG), local binary patterns (LBP), and local statistics. Experimental results on government furnished data show that use of our proposed fusion and classification methods improves the NAUC when compared with the results from individual sensors and a single kernel SVM classifier.

  7. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

  8. Automatic updating and 3D modeling of airport information from high resolution images using GIS and LIDAR data

    NASA Astrophysics Data System (ADS)

    Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng

    2007-11-01

    As one of the most important geo-spatial objects and military establishments, an airport is always a key target in transportation and military affairs. Automatic recognition and extraction of airports from remote sensing images is therefore important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating, and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we developed a prototype system; the results show that our method achieves good results.
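
The Otsu step above selects the threshold that maximizes between-class variance of the image histogram. A minimal NumPy sketch of the standard (unmodified) algorithm, for reference:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold maximizing between-class variance (standard Otsu)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)            # class-0 probability up to each bin
    mu = np.cumsum(p * centers)  # cumulative mean
    mu_t = mu[-1]
    # Between-class variance; guard against empty classes at the extremes.
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]
```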

  9. Image registration for a UV-Visible dual-band imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective way for early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots under all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithmic details of UV image preprocessing and affine transformation model establishment, together with experiments verifying their feasibility. The denoising algorithm is based on a correlation operation between raw UV images and a continuous mask, and the transformation model is established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. The average position displacement errors between the corona discharge and the equipment fault, at distances in the 2.5-20 m range, are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resulting protocol is expected not only to improve the efficiency and accuracy of such imaging systems in locating corona discharge spots, but also to provide a more general reference for the calibration of various dual-band imaging systems in practice.
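
An affine transformation model such as the one above can be estimated from corner-feature correspondences by linear least squares. A minimal sketch (the statistical filtering of correspondences described in the abstract is omitted):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine (6 parameters) mapping src points to dst points.
    src, dst: (N, 2) arrays with N >= 3 non-collinear correspondences."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)          # [x1', y1', x2', y2', ...]
    A[0::2, 0:2] = src           # x' = a*x + b*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src           # y' = c*x + d*y + ty
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)  # [[a, b, tx], [c, d, ty]]

def apply_affine(M, pts):
    return pts @ M[:, :2].T + M[:, 2]
```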

  10. Integrated Fusion, Performance Prediction, and Sensor Management for Automatic Target Exploitation

    DTIC Science & Technology

    2007-05-30

    with large region of attraction about the true minimum. The physical optics models provide features for high confidence identification of stationary...the detection test are used to estimate 3D object scattering; multiple images can be noncoherently combined to reconstruct a more complete object...Proc. SPIE Algorithms for Synthetic Aper- ture Radar Imagery XIII, The International Society for Optical Engineering, April 2006. [40] K. Varshney, M. C

  11. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes

    PubMed Central

    Sampaio, Renato Coral; Vargas, José A. R.

    2018-01-01

    The arc welding process is widely used in industry, but its automatic control is limited by the difficulty of measuring the weld bead geometry and closing the control loop on the arc, which is subject to adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the conventional Gas Metal Arc Welding (GMAW) process with a constant-voltage power source, which allows weld bead geometry estimation with open-loop control. Dynamic models of depth and width estimators for the weld bead are implemented based on the fusion of thermographic data, welding current, and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract features of the infrared image, a laser profilometer implemented to measure the bead dimensions, and an image processing algorithm that measures depth from a longitudinal cut in the weld bead. The estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train, and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments. PMID:29570698

  12. Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data

    PubMed Central

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-01-01

    Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using the disaggregated mean coarse reflectance, by testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with high correlation coefficients (R) ranging from 0.646 to 0.986 between the synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97. PMID:26393607
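
The linear-regression step for removing sensor differences can be sketched as an ordinary least-squares fit between overlapping coarse and fine reflectances, then applied to new disaggregated coarse values. The function name and calling convention here are illustrative, not from the paper:

```python
import numpy as np

def sensor_adjust(coarse_mean, fine, new_coarse):
    """Fit fine = a * coarse_mean + b on overlapping samples, then map a new
    disaggregated coarse reflectance into the fine sensor's radiometry."""
    a, b = np.polyfit(coarse_mean, fine, deg=1)  # slope, intercept
    return a * new_coarse + b
```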

  13. Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data.

    PubMed

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-09-18

    Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences in sensor systems using the disaggregated mean coarse reflectance, by testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with high correlation coefficients (R) ranging from 0.646 to 0.986 between the synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97.

  14. Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes.

    PubMed

    Bestard, Guillermo Alvarez; Sampaio, Renato Coral; Vargas, José A R; Alfaro, Sadek C Absi

    2018-03-23

    The arc welding process is widely used in industry, but its automatic control is limited by the difficulty of measuring the weld bead geometry and closing the control loop on the arc, which is subject to adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the conventional Gas Metal Arc Welding (GMAW) process with a constant-voltage power source, which allows weld bead geometry estimation with open-loop control. Dynamic models of depth and width estimators for the weld bead are implemented based on the fusion of thermographic data, welding current, and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract features of the infrared image, a laser profilometer implemented to measure the bead dimensions, and an image processing algorithm that measures depth from a longitudinal cut in the weld bead. The estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train, and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments.

  15. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, KinectFusion, performs well in environments with complex shapes, but it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted from both the color and depth data captured by the Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method were conducted in a corridor scene. The experimental results demonstrate that, after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open-access datasets further validated our improvements.

  16. The continuum fusion theory of signal detection applied to a bi-modal fusion problem

    NASA Astrophysics Data System (ADS)

    Schaum, A.

    2011-05-01

    A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
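
For reference, the generalized likelihood ratio test that the abstract contrasts with replaces the unknown parameters by their maximum-likelihood estimates under each hypothesis; a standard statement (notation mine) is:

```latex
\Lambda(\mathbf{x}) \;=\;
\frac{\displaystyle\max_{\theta \in \Theta_1} p(\mathbf{x}\mid\theta, H_1)}
     {\displaystyle\max_{\theta \in \Theta_0} p(\mathbf{x}\mid\theta, H_0)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta
```

Continuum fusion instead generates families of detectors by fusing decisions across the parameter continuum, which, per the abstract, can remain solvable in closed form when this maximization is intractable.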

  17. [Research Progress of Multi-Model Medical Image Fusion at Feature Level].

    PubMed

    Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun

    2016-04-01

    Medical image fusion realizes the integration of the advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications in medical image fusion of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods. Lastly, we indicate present problems and future research directions for multi-model medical image fusion.

  18. Enhanced image capture through fusion

    NASA Technical Reports Server (NTRS)

    Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.

    1993-01-01

    Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
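
The general fusion method in this line of work combines images via pattern-selective pyramid decomposition. A one-level sketch of the idea (select the larger-magnitude detail coefficient, average the low-pass base); the 3-tap binomial blur is a simple stand-in for the actual pyramid kernels:

```python
import numpy as np

def blur(img):
    # Separable 3-tap binomial filter with edge padding (a small stand-in
    # for a Gaussian-pyramid kernel).
    p = np.pad(img, ((1, 1), (0, 0)), mode='edge')
    v = (p[:-2] + 2 * p[1:-1] + p[2:]) / 4.0
    p = np.pad(v, ((0, 0), (1, 1)), mode='edge')
    return (p[:, :-2] + 2 * p[:, 1:-1] + p[:, 2:]) / 4.0

def fuse_two(a, b):
    """One-level Laplacian-style fusion: keep the detail coefficient with the
    larger magnitude, average the low-pass bases."""
    la, lb = blur(a), blur(b)      # low-pass (base) images
    da, db = a - la, b - lb        # detail (Laplacian) layers
    detail = np.where(np.abs(da) >= np.abs(db), da, db)
    return (la + lb) / 2.0 + detail
```

Fusing an image with itself returns the image, a useful sanity check for any such scheme.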

  19. Correction of nonuniform attenuation and image fusion in SPECT imaging by means of separate X-ray CT.

    PubMed

    Kashiwagi, Toru; Yutani, Kenji; Fukuchi, Minoru; Naruse, Hitoshi; Iwasaki, Tadaaki; Yokozuka, Koichi; Inoue, Shinichi; Kondo, Shoji

    2002-06-01

    Improvements in image quality and quantitation measurement, and the addition of detailed anatomical structures, are important topics for single-photon emission tomography (SPECT). The goal of this study was to develop a practical system enabling both nonuniform attenuation correction and image fusion of SPECT images by means of high-performance X-ray computed tomography (CT). A SPECT system and a helical X-ray CT system were placed next to each other and linked with Ethernet. To avoid positional differences between the SPECT and X-ray CT studies, identical flat patient tables were used for both scans; body distortion was minimized with laser beams from the upper and lateral directions to detect the position of the skin surface. For the raw projection data of SPECT, a scatter correction was performed with the triple energy window method. Image fusion of the X-ray CT and SPECT images was performed automatically by auto-registration of fiducial markers attached to the skin surface. After registration of the X-ray CT and SPECT images, an X-ray CT-derived attenuation map was created with the calibration curve for 99mTc. The SPECT images were then reconstructed with scatter and attenuation correction by means of a maximum likelihood expectation maximization algorithm. This system was evaluated in torso and cylindrical phantoms and in 4 patients referred for myocardial SPECT imaging with Tc-99m tetrofosmin. In the torso phantom study, the SPECT and X-ray CT images overlapped exactly on the computer display. After scatter and attenuation correction, the artifactual activity reduction in the inferior wall of the myocardium improved. Conversely, the increased activity around the torso surface and the lungs was reduced. In the abdomen, the liver activity, which was originally uniform, was recovered after scatter and attenuation correction processing. The clinical study also showed good overlapping of cardiac and skin surface outlines on the fused SPECT and X-ray CT images.
The effectiveness of the scatter and attenuation correction process was similar to that observed in the phantom study. Because the total time required for computer processing was less than 10 minutes, this method of attenuation correction and image fusion for SPECT images is expected to become popular in clinical practice.
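
The triple energy window (TEW) scatter correction mentioned above estimates the scatter inside the photopeak window from counts in two narrow flanking windows, via a trapezoidal approximation. A sketch; the default window widths (in keV) are illustrative, not the study's settings:

```python
def tew_scatter(c_low, c_up, w_low, w_up, w_peak):
    """Triple-energy-window scatter estimate: counts in the two narrow
    flanking windows are scaled to the photopeak width (trapezoid rule)."""
    return (c_low / w_low + c_up / w_up) * w_peak / 2.0

def tew_correct(c_peak, c_low, c_up, w_low=3.0, w_up=3.0, w_peak=20.0):
    """Scatter-corrected photopeak counts, clamped at zero."""
    return max(c_peak - tew_scatter(c_low, c_up, w_low, w_up, w_peak), 0.0)
```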

  20. 2D to 3D fusion of echocardiography and cardiac CT for TAVR and TAVI image guidance.

    PubMed

    Khalil, Azira; Faisal, Amir; Lai, Khin Wee; Ng, Siew Cheok; Liew, Yih Miin

    2017-08-01

    This study proposed a registration framework to fuse 2D echocardiography images of the aortic valve with a preoperative cardiac CT volume. The registration facilitates the fusion of CT and echocardiography to aid the diagnosis of aortic valve diseases and provide surgical guidance during transcatheter aortic valve replacement and implantation. The image registration framework consists of two major steps: temporal synchronization and spatial registration. Temporal synchronization allows time stamping of the echocardiography time series to identify frames at a cardiac phase similar to that of the CT volume. Spatial registration is an intensity-based normalized mutual information method applied with a pattern search optimization algorithm to produce an interpolated cardiac CT image that matches the echocardiography image. Our proposed registration method has been applied to the short-axis "Mercedes Benz" sign view of the aortic valve and the long-axis parasternal view of echocardiography images from ten patients. The accuracy of our fully automated registration method was 0.81 ± 0.08 in Dice coefficient and 1.30 ± 0.13 mm in Hausdorff distance for the short-axis aortic valve view, and 0.79 ± 0.02 and 1.19 ± 0.11 mm, respectively, for the long-axis parasternal view. This accuracy is comparable to gold-standard manual registration by experts. There was no significant difference in aortic annulus diameter measurement between the automatically and manually registered CT images. Without the use of optical tracking, we have shown the applicability of this technique for effective fusion of echocardiography with a preoperative CT volume to potentially facilitate catheter-based surgery.
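
The normalized mutual information (NMI) similarity used for spatial registration can be estimated from a joint histogram. A minimal sketch using the common definition NMI = (H(A) + H(B)) / H(A, B), which equals 2 for identical images:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```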

  1. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, projected onto the color target image using structured illumination with red, green, and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and then transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green, and blue structured illumination, respectively, by using the inverse Haar wavelet transform. A color fusion algorithm is then applied to the three monochrome grayscale images to obtain the final color image. According to this imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. By using Haar wavelet reconstruction, this method reduces the sampling rate considerably. It provides color information without compromising the resolution of the final image.
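
A one-level 2D Haar decomposition and its exact inverse, the building block behind the reconstruction described above, can be sketched as:

```python
import numpy as np

def haar2_forward(img):
    """One level of the orthonormal 2D Haar transform (even-sized image)."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)   # rows: low-pass
    d = (img[0::2] - img[1::2]) / np.sqrt(2)   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    """Exact inverse of haar2_forward."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w))
    d = np.zeros((h, 2 * w))
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.zeros((2 * h, 2 * w))
    img[0::2] = (a + d) / np.sqrt(2)
    img[1::2] = (a - d) / np.sqrt(2)
    return img
```

The wavelet-tree prediction in the abstract amounts to sampling coarse-scale coefficients first and only pursuing finer-scale children whose parents exceed a threshold.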

  2. Real-time sensor validation and fusion for distributed autonomous sensors

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.

    2004-04-01

    Multi-sensor data fusion has found widespread application in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture consisting of four layers: the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates the distribution of intelligence to the sensor level and the sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms, and thus facilitates the selection of near-optimal algorithms for specific sensor fusion applications. In the version of the model presented in this paper, confidence-weighted averaging is employed to address the dynamic system state. The state is computed using an adaptive estimator and a dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
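
Confidence-weighted averaging of redundant readings is commonly realized as an inverse-variance weighted mean (the exact weighting used by the RTSVFF may differ); a minimal sketch:

```python
import numpy as np

def confidence_weighted_fusion(readings, variances):
    """Inverse-variance (confidence) weighted average of redundant sensor
    readings; the fused variance never exceeds the best single sensor's."""
    v = np.asarray(variances, dtype=float)
    r = np.asarray(readings, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)      # normalized confidence weights
    fused = np.sum(w * r)
    fused_var = 1.0 / np.sum(1.0 / v)
    return fused, fused_var
```

Two equally trusted sensors reading 10.2 and 9.8 fuse to 10.0 with half the variance of either.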

  3. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    PubMed

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues suspicious for cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system, to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance in the two clinical implementation modes, i.e., the user-initiated and continuous motion compensation methods, on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations with computation times for in-plane, out-of-plane, and roll motions of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding registration errors of 0.4 ± 0.3 mm, 0.2 ± 0.4 mm, and 0.8 ± 0.5°. The continuous method performed registration significantly faster (P < 0.05) than the user-initiated method, with observed computation times of 35 ± 8 ms, 43 ± 16 ms, and 27 ± 5 ms for in-plane, out-of-plane, and roll motions, respectively, and corresponding registration errors of 0.2 ± 0.3 mm, 0.7 ± 0.4 mm, and 0.8 ± 1.0°. The presented method encourages real-time implementation of motion compensation algorithms in prostate biopsy with clinically acceptable registration errors. Continuous motion compensation demonstrated registration accuracy with submillimeter and subdegree error while keeping computation times under 50 ms. An image registration technique approaching the frame rate of an ultrasound system offers a key advantage in that it can be smoothly integrated into the clinical workflow. In addition, this technique could be used for a variety of image-guided interventional procedures to treat and diagnose patients by improving targeting accuracy. © 2017 American Association of Physicists in Medicine.
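
The normalized cross-correlation similarity at the core of the registration above can be sketched as:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images/patches:
    +1 for identical (up to gain/offset), -1 for inverted content."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)
```

In a registration loop, an optimizer such as Powell's method would search the transform parameters that maximize this score between the fixed and moving images.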

  4. The method for froth floatation condition recognition based on adaptive feature weighted

    NASA Astrophysics Data System (ADS)

    Wang, Jieran; Zhang, Jun; Tian, Jinwen; Zhang, Daimeng; Liu, Xiaomao

    2018-03-01

    The fusion of foam characteristics can play a complementary role in expressing the content of a foam image, and the weighting of the foam characteristics is the key to making full use of the relationships between the different features. In this paper, an adaptive feature-weighted method for froth floatation condition recognition is proposed. Foam features without and with weights are both classified using a support vector machine (SVM). The classification accuracy and the optimal weighting under each ore grade are regarded as the result of the adaptive feature weighting algorithm, and the effectiveness of the adaptive weighted method is demonstrated.

  5. Small maritime target detection through false color fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Wu, Tirui

    2008-04-01

    We present an algorithm that produces a fused false color representation of a combined multiband IR and visual imaging system for maritime applications. Multispectral IR imaging techniques are increasingly deployed in maritime operations, to detect floating mines or to find small dinghies and swimmers during search and rescue operations. However, maritime backgrounds usually contain a large amount of clutter that severely hampers the detection of small targets. Our new algorithm deploys the correlation between the target signatures in two different IR frequency bands (3-5 and 8-12 μm) to construct a fused IR image with a reduced amount of clutter. The fused IR image is then combined with a visual image in a false color RGB representation for display to a human operator. The algorithm works as follows. First, both individual IR bands are filtered with a morphological opening top-hat transform to extract small details. Second, a common image is extracted from the two filtered IR bands, and assigned to the red channel of an RGB image. Regions of interest that appear in both IR bands remain in this common image, while most uncorrelated noise details are filtered out. Third, the visual band is assigned to the green channel and, after multiplication with a constant (typically 1.6) also to the blue channel. Fourth, the brightness and colors of this intermediate false color image are renormalized by adjusting its first order statistics to those of a representative reference scene. The result of these four steps is a fused color image, with naturalistic colors (bluish sky and grayish water), in which small targets are clearly visible.
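
The first three steps above can be sketched as follows. The top-hat uses a k-by-k square structuring element, and taking the pixelwise minimum of the two filtered IR bands as the "common image" is my assumption about the extraction step; the statistics-based color renormalization (step four) is omitted:

```python
import numpy as np

def _window_reduce(img, k, fn):
    # Apply fn (np.min for erosion, np.max for dilation) over k-by-k windows.
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    stack = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(k) for dx in range(k)])
    return fn(stack, axis=0)

def opening_tophat(img, k=3):
    """White top-hat: image minus its morphological opening (erosion then
    dilation); keeps small bright details such as point targets."""
    opened = _window_reduce(_window_reduce(img, k, np.min), k, np.max)
    return img - opened

def fuse_false_color(ir_a, ir_b, vis, k=3, blue_gain=1.6):
    # Details present in both IR bands survive; uncorrelated noise is suppressed.
    common = np.minimum(opening_tophat(ir_a, k), opening_tophat(ir_b, k))
    return np.stack([common,                         # R: common IR detail
                     vis,                            # G: visual band
                     np.clip(blue_gain * vis, 0, 1)  # B: boosted visual band
                     ], axis=-1)
```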

  6. Unsupervised, Robust Estimation-based Clustering for Multispectral Images

    NASA Technical Reports Server (NTRS)

    Netanyahu, Nathan S.

    1997-01-01

    To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, and at extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.

  7. High-power fused assemblies enabled by advances in fiber-processing technologies

    NASA Astrophysics Data System (ADS)

    Wiley, Robert; Clark, Brett

    2011-02-01

    The power handling capabilities of fiber lasers are limited by the technologies available to fabricate and assemble the key optical system components. Previous tools for the assembly, tapering, and fusion of fiber laser elements have had drawbacks with regard to temperature range, alignment capability, assembly flexibility and surface contamination. To provide expanded capabilities for fiber laser assembly, a wide-area electrical plasma heat source was used in conjunction with an optimized image analysis method and a flexible alignment system, integrated according to mechatronic principles. High-resolution imaging and vision-based measurement provided feedback to adjust assembly, fusion, and tapering process parameters. The system was used to perform assembly steps including dissimilar-fiber splicing, tapering, bundling, capillary bundling, and fusion of fibers to bulk optic devices up to several mm in diameter. A wide range of fiber types and diameters were tested, including extremely large diameters and photonic crystal fibers. The assemblies were evaluated for conformation to optical and mechanical design criteria, such as taper geometry and splice loss. The completed assemblies met the performance targets and exhibited reduced surface contamination compared to assemblies prepared on previously existing equipment. The imaging system and image analysis algorithms provided in situ fiber geometry measurement data that agreed well with external measurement. The ability to adjust operating parameters dynamically based on imaging was shown to provide substantial performance benefits, particularly in the tapering of fibers and bundles. The integrated design approach was shown to provide sufficient flexibility to perform all required operations with a minimum of reconfiguration.

  8. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, together with an efficient salient feature extraction method; this feature extraction is the primary contribution of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image that retains the sharpest regions. To obtain the initial fusion map, a mixed focus measure is composed by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance over previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
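    The mixed focus measure described above — intensity variance combined with gradient energy — can be sketched as follows. The function names, the per-block evaluation, and the `alpha` balance parameter are illustrative assumptions, not the authors' implementation:

    ```python
    # Minimal sketch of a mixed focus measure: local intensity variance plus
    # gradient energy over an image block. A sharply focused block scores
    # higher than a defocused (low-contrast) one.
    def variance(block):
        n = len(block) * len(block[0])
        mean = sum(sum(row) for row in block) / n
        return sum((v - mean) ** 2 for row in block for v in row) / n

    def gradient_energy(block):
        e = 0.0
        for i in range(len(block)):
            for j in range(len(block[0])):
                if i + 1 < len(block):
                    e += (block[i + 1][j] - block[i][j]) ** 2
                if j + 1 < len(block[0]):
                    e += (block[i][j + 1] - block[i][j]) ** 2
        return e

    def focus_measure(block, alpha=0.5):
        # alpha is an assumed weighting between the two terms.
        return alpha * variance(block) + (1 - alpha) * gradient_energy(block)

    sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]        # high-contrast patch
    blurred = [[120, 130, 120], [130, 125, 130], [120, 130, 120]]
    assert focus_measure(sharp) > focus_measure(blurred)
    ```

    Blocks winning this measure in either source image would then drive the fusion map.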

  9. Autofocus method for automated microscopy using embedded GPUs.

    PubMed

    Castillo-Secilla, J M; Saval-Calvo, M; Medina-Valdès, L; Cuenca-Asensi, S; Martínez-Álvarez, A; Sánchez, C; Cristóbal, G

    2017-03-01

    In this paper we present a method for autofocusing images of sputum smears taken from a microscope, which combines finding the optimal focus distance with an algorithm for extending the depth of field (EDoF). Our multifocus fusion method produces a single image in which all the relevant objects of the analyzed scene are well focused, independently of their distance to the sensor. This process is computationally expensive, which makes its automation unfeasible on traditional embedded processors. For this purpose, a low-cost optimized implementation is proposed using the limited-resource embedded GPU integrated on a cutting-edge NVIDIA system-on-chip. Extensive tests performed on different sputum smear image sets show the real-time capabilities of our implementation while maintaining the quality of the output image.

  10. Deep features for efficient multi-biometric recognition with face and ear images

    NASA Astrophysics Data System (ADS)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear images, and propose exploiting deep features extracted from Convolutional Neural Networks (CNNs) to provide more powerful discriminative features and a more robust representation for both modalities. First, deep features for face and ear images are extracted with VGG-M Net. Second, the extracted deep features are fused using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.

  11. Three-dimensional digital mapping of the optic nerve head cupping in glaucoma

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Ramirez, Manuel; Morales, Jose

    1992-08-01

    Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks are performed, including feature extraction, registration by cepstral analysis, and correction for intensity variations. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching required to obtain the translational differences at every step uses cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the diagnostic criteria for glaucoma.
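    The "correlation-like scanning in the spatial domain" step can be sketched as an exhaustive search for the integer translation that maximizes the correlation between a reference patch and a shifted target patch. The function names, the circular shift, and the search range are illustrative assumptions, not the paper's exact procedure:

    ```python
    # Minimal sketch of spatial-domain correlation scanning for a
    # translational disparity between two small image patches.
    def correlation(a, b):
        return sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

    def shifted(img, dy, dx):
        # Circularly shift a 2-D list (an assumption for boundary handling).
        h, w = len(img), len(img[0])
        return [[img[(i + dy) % h][(j + dx) % w] for j in range(w)] for i in range(h)]

    def best_shift(ref, tgt, max_shift=2):
        best, best_score = (0, 0), float("-inf")
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                score = correlation(ref, shifted(tgt, dy, dx))
                if score > best_score:
                    best, best_score = (dy, dx), score
        return best

    ref = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    tgt = shifted(ref, 1, 1)            # target displaced relative to ref
    assert best_shift(ref, tgt) == (-1, -1)
    ```

    In the paper this search is run coarse-to-fine, with cepstral analysis supplying the coarser translation estimates.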

  12. Recognizing human activities using appearance metric feature and kinematics feature

    NASA Astrophysics Data System (ADS)

    Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye

    2017-05-01

    The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered. A system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension-reduction technique called bidirectional 2-D principal component analysis, balancing classification accuracy against time consumption. Finally, a cascaded classifier based on a nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
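    The kinematics feature above — centroid instantaneous velocity of the detected blob — can be sketched directly from the binary masks. The function names and the unit frame interval `dt` are illustrative assumptions:

    ```python
    # Minimal sketch: centroid of a binary human blob, and its instantaneous
    # velocity between two consecutive frames of the binary image sequence.
    def centroid(mask):
        pts = [(i, j) for i, row in enumerate(mask) for j, v in enumerate(row) if v]
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def instantaneous_velocity(prev_mask, curr_mask, dt=1.0):
        (y0, x0), (y1, x1) = centroid(prev_mask), centroid(curr_mask)
        return ((y1 - y0) / dt, (x1 - x0) / dt)

    frame0 = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]   # blob at left
    frame1 = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]]   # blob moved right
    assert instantaneous_velocity(frame0, frame1) == (0.0, 1.0)
    ```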

  13. Fusion of Low-Cost Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2007-01-01

    an Integrated GPS/MEMS Inertial Navigation Package. In Proceedings of ION GNSS 2004, pp. 825–832, September 2004. [3] R. G. Brown and P. Y. Hwang ...tracking, with no a priori knowledge is provided in [13]. An on-line (Extended Kalman Filter-based) method for calculating a trajectory by tracking...transformation, effectively constraining the resulting correspondence search space. The algorithm was incorporated into an extended Kalman filter and

  14. An effective image classification method with the fusion of invariant feature and a new color descriptor

    NASA Astrophysics Data System (ADS)

    Mansourian, Leila; Taufik Abdullah, Muhamad; Nurliyana Abdullah, Lili; Azman, Azreen; Mustaffa, Mas Rina

    2017-02-01

    Pyramid Histogram of Words (PHOW) combined Bag of Visual Words (BoVW) with spatial pyramid matching (SPM) in order to add location information to extracted features. However, PHOW variants are extracted from various color spaces without capturing color information individually; they therefore discard color, an important characteristic of any image that is motivated by human vision. This article concatenates a PHOW Multi-Scale Dense Scale Invariant Feature Transform (MSDSIFT) histogram with a proposed color histogram to improve the performance of existing image classification algorithms. Performance evaluation on several datasets shows that the new approach outperforms existing state-of-the-art methods.

  15. TU-E-BRB-00: Deformable Image Registration: Is It Right for Your Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning, motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for clinical DIR use. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning Objectives: Compare/contrast commercial DIR software and QA options; overview clinical DIR workflow for retreatment; understand uncertainties introduced by DIR; review TG-132 proposed recommendations.

  16. Automatic tissue segmentation of head and neck MR images for hyperthermia treatment planning

    NASA Astrophysics Data System (ADS)

    Fortunati, Valerio; Verhaart, René F.; Niessen, Wiro J.; Veenland, Jifke F.; Paulides, Margarethus M.; van Walsum, Theo

    2015-08-01

    A hyperthermia treatment requires accurate, patient-specific treatment planning. This planning is based on 3D anatomical models which are generally derived from computed tomography. Because of its superior soft tissue contrast, magnetic resonance imaging (MRI) information can be introduced to improve the quality of these 3D patient models and therefore the treatment planning itself. We therefore present an automatic atlas-based segmentation algorithm for MR images of the head and neck. Our method combines multiatlas local weighting fusion with intensity modelling. The accuracy of the method was evaluated using a leave-one-out cross-validation experiment over a set of 11 patients for which manual delineations were available. The accuracy of the proposed method was high both in terms of the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff surface distance (HSD), with median DSC higher than 0.8 for all tissues except sclera. For all tissues except the spine tissues, the accuracy approached the interobserver agreement/variability in terms of both DSC and HSD. The positive effect of adding the intensity modelling to the multiatlas fusion decreased when a more accurate atlas fusion method was used. Using the proposed approach we improved the performance of the approach previously presented for H&N hyperthermia treatment planning, making the method suitable for clinical application.
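    The multiatlas local weighting fusion mentioned above can be sketched as follows: each atlas votes for a label at a voxel, weighted by the local intensity similarity between the atlas image and the target image. The Gaussian weighting, the `sigma` parameter, and the function names are illustrative assumptions, not the authors' exact model:

    ```python
    # Minimal sketch of locally weighted multiatlas label fusion at one voxel.
    import math

    def local_weight(target_val, atlas_val, sigma=10.0):
        # Atlases that look locally similar to the target get larger weights.
        return math.exp(-((target_val - atlas_val) ** 2) / (2 * sigma ** 2))

    def fuse_labels(target_val, atlas_vals, atlas_labels):
        votes = {}
        for val, lab in zip(atlas_vals, atlas_labels):
            votes[lab] = votes.get(lab, 0.0) + local_weight(target_val, val)
        return max(votes, key=votes.get)

    # Two atlases match the target intensity and vote "muscle"; an outlier
    # atlas votes "bone" but carries almost no weight.
    assert fuse_labels(100, [102, 98, 200], ["muscle", "muscle", "bone"]) == "muscle"
    ```

    In practice the similarity is computed over a patch around the voxel rather than a single intensity, but the weighting principle is the same.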

  17. An epidemic model for biological data fusion in ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Chang, K. C.; Kotari, Vikas

    2009-05-01

    Bioterrorism can be a refined and catastrophic way of attacking a nation. Countering it requires the development of a complete architecture designed specifically for this purpose, which includes but is not limited to sensing/detection, tracking and fusion, communication, and other components. In this paper we focus on one such architecture and evaluate its performance. Various sensors for this specific purpose have been studied. The accent has been on the use of distributed systems such as ad hoc networks and on the application of epidemic data fusion algorithms to better manage bio-threat data. The emphasis has been on understanding the performance characteristics of these algorithms under diversified real-time scenarios, implemented through extensive Java-based simulations. Through comparative studies on communication and fusion, the performance of the channel filter algorithm for biological sensor data fusion is validated.

  18. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations.

    PubMed

    Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L

    1997-04-01

    This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing and minimal user input, and easily implements either affine, i.e., linear, or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT, and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
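    The registration criterion being maximized can be sketched as mutual information computed from the joint intensity histogram of two equally sized gray-scale images. The bin count, intensity range, and function names below are illustrative assumptions; the paper iterates transformation parameters to maximize this quantity:

    ```python
    # Minimal sketch of mutual information from a joint histogram.
    import math

    def mutual_information(img_a, img_b, bins=4, max_val=256):
        joint, n = {}, 0
        for row_a, row_b in zip(img_a, img_b):
            for a, b in zip(row_a, row_b):
                key = (a * bins // max_val, b * bins // max_val)
                joint[key] = joint.get(key, 0) + 1
                n += 1
        pa, pb = {}, {}
        for (i, j), c in joint.items():
            pa[i] = pa.get(i, 0) + c / n
            pb[j] = pb.get(j, 0) + c / n
        return sum((c / n) * math.log((c / n) / (pa[i] * pb[j]))
                   for (i, j), c in joint.items())

    gradient = [[0, 64, 128, 192]] * 4    # structured image
    flat = [[0, 0, 0, 0]] * 4             # uninformative image
    # A structured image shares high MI with itself and none with a constant image.
    assert mutual_information(gradient, gradient) > mutual_information(gradient, flat)
    ```

    During registration, the better two volumes are aligned, the more predictable one image's intensities are from the other's, and the larger this sum becomes.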

  19. Multiscale Medical Image Fusion in Wavelet Domain

    PubMed Central

    Khare, Ashish

    2013-01-01

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
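    The maximum selection rule used above for the wavelet-domain fusion can be sketched compactly: at each coefficient position, keep the coefficient with the larger absolute value, so the stronger detail from either modality survives. Applying it per subband and per decomposition level is an assumption consistent with the abstract; the function name is illustrative:

    ```python
    # Minimal sketch of the maximum selection rule on one wavelet subband.
    def max_selection(coeffs_a, coeffs_b):
        return [[a if abs(a) >= abs(b) else b for a, b in zip(ra, rb)]
                for ra, rb in zip(coeffs_a, coeffs_b)]

    ct_detail = [[5, 0], [0, -7]]     # strong edges in one modality
    mri_detail = [[-2, 9], [3, 1]]    # strong edges in the other
    assert max_selection(ct_detail, mri_detail) == [[5, 9], [3, -7]]
    ```

    The fused subbands are then inverse-transformed to produce the fused image; running the rule at every scale from minimum to maximum level is what gives the multiscale flexibility the paper describes.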

  20. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    NASA Astrophysics Data System (ADS)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, inclusive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or out-dated information on critical attributes as well as missing or erroneous attribute values. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images are presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The errors associated with the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation.
The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more economical way. A terrain-dependent search algorithm is formulated to facilitate the search for correspondences in a quasi-stereo pair of images. The calculated heights for sample buildings using the cross-sensor data fusion algorithm show an average coefficient of variation of 1.03%. In order to infer structural type and occupancy type, i.e. engineering attributes, of buildings from the spatial and geometric attributes of 3-D models, a statistical data analysis framework is formulated. Applications of "Classification Trees" and "Multinomial Logistic Models" in modeling the marginal probabilities of class membership of engineering attributes are investigated. Adaptive statistical models that incorporate different spatial and geometric attributes of buildings while inferring the engineering attributes are developed in this dissertation. The inferred engineering attributes, in conjunction with the spatial and geometric attributes derived from the imagery, can be used to augment regional building inventories and therefore enhance the results of catastrophe models. In the last part of the dissertation, a set of empirically derived motion-damage relationships is developed, based on the correlation of observed building performance with measured ground-motion parameters from the 1994 Northridge and 1999 Chi-Chi Taiwan earthquakes. Fragility functions in the form of cumulative lognormal distributions and damage probability matrices for several classes of buildings (wood, steel and concrete), as well as a number of ground-motion intensity measures, are developed and compared to currently used motion-damage relationships.

  1. Present status and trends of image fusion

    NASA Astrophysics Data System (ADS)

    Xiang, Dachao; Fu, Sheng; Cai, Yiheng

    2009-10-01

    Image fusion extracts information from multiple images that is more accurate and reliable than that obtainable from any single image: since different images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present it is widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper presents the contents and research methods of image fusion, surveys its current status in China and abroad, and analyzes its development trends.

  2. A Multistage Approach for Image Registration.

    PubMed

    Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi

    2016-09-01

    Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor which couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage process, which allows the graph-based descriptor to be utilized in many scenarios and thus applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.

  3. One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2016-10-01

    The outcome of a detailed assessment of various strategies for atlas-based whole-body bone segmentation from magnetic resonance imaging (MRI) was exploited to select the optimal parameters and setting, with the aim of proposing a novel one-registration multi-atlas (ORMA) pseudo-CT generation approach. The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N), while for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them to the target image. The performance characteristics of the proposed method were evaluated and compared with conventional atlas-based attenuation map generation strategies (direct registration of the entire atlas images followed by voxel-wise weighting (VWW) and arithmetic averaging atlas fusion). To this end, four different positron emission tomography (PET) attenuation maps were generated via arithmetic averaging and VWW scheme using both direct registration and ORMA approaches as well as the 3-class attenuation map obtained from the Philips Ingenuity TF PET/MRI scanner commonly used in the clinical setting. The evaluation was performed based on the accuracy of extracted whole-body bones by the different attenuation maps and by quantitative analysis of resulting PET images compared to CT-based attenuation-corrected PET images serving as reference. The comparison of validation metrics regarding the accuracy of extracted bone using the different techniques demonstrated the superiority of the VWW atlas fusion algorithm achieving a Dice similarity measure of 0.82 ± 0.04 compared to arithmetic averaging atlas fusion (0.60 ± 0.02), which uses conventional direct registration. Application of the ORMA approach modestly compromised the accuracy, yielding a Dice similarity measure of 0.76 ± 0.05 for ORMA-VWW and 0.55 ± 0.03 for ORMA-averaging. 
The results of quantitative PET analysis followed the same trend with less significant differences in terms of SUV bias, whereas massive improvements were observed compared to PET images corrected for attenuation using the 3-class attenuation map. The maximum absolute bias achieved by the VWW and VWW-ORMA methods was 6.4 ± 5.5 in the lung and 7.9 ± 4.8 in bone, respectively. The proposed algorithm is capable of generating decent attenuation maps. The quantitative analysis revealed a good correlation between PET images corrected for attenuation using the proposed pseudo-CT generation approach and the corresponding CT images. The computational time is reduced by a factor of 1/N at the expense of a modest decrease in quantitative accuracy, thus allowing us to achieve a reasonable compromise between computing time and quantitative performance.
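    The central ORMA idea — one online registration plus pre-computed atlas-to-reference transforms — can be illustrated with transform composition. Homogeneous 2-D affine matrices are used here for brevity (the paper works with full deformable registrations), and all names are illustrative assumptions:

    ```python
    # Minimal sketch: mapping each atlas to the target by composing its
    # pre-computed atlas-to-reference transform with the single online
    # reference-to-target transform, instead of N online registrations.
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def translation(tx, ty):
        return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

    atlas_to_ref = translation(2, 0)     # pre-computed offline, one per atlas
    ref_to_target = translation(0, 3)    # the single online registration
    atlas_to_target = matmul(ref_to_target, atlas_to_ref)
    assert atlas_to_target == translation(2, 3)
    ```

    Because only `ref_to_target` is computed at query time, the online cost stays constant in the number of atlases, which is the source of the 1/N speed-up described above.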

  4. Radiotherapy treatment planning: benefits of CT-MR image registration and fusion in tumor volume delineation.

    PubMed

    Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija

    2013-08-01

    The development of imaging techniques, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), has made a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and also minimizes geographical misses. Mutual information is obtained by registration and fusion of images, achieved manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and compare delineation obtained by CT versus CT-MRI image fusion. The image fusion software (XIO CMS 4.50.0) was applied to delineate 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated on CT alone and on CT+MRI images consecutively, and image fusion was obtained. Image fusion showed that a CTV delineated on a CT image study set is mainly inadequate for treatment planning in comparison with a CTV delineated on a CT-MRI fused image study set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV and local disease control.

  5. [Possibilities of sonographic image fusion: Current developments].

    PubMed

    Jung, E M; Clevert, D-A

    2015-11-01

    For diagnostic and interventional procedures ultrasound (US) image fusion can be used as a complementary imaging technique. Image fusion has the advantage of real time imaging and can be combined with other cross-sectional imaging techniques. With the introduction of US contrast agents sonography and image fusion have gained more importance in the detection and characterization of liver lesions. Fusion of US images with computed tomography (CT) or magnetic resonance imaging (MRI) facilitates the diagnostics and postinterventional therapy control. In addition to the primary application of image fusion in the diagnosis and treatment of liver lesions, there are more useful indications for contrast-enhanced US (CEUS) in routine clinical diagnostic procedures, such as intraoperative US (IOUS), vascular imaging and diagnostics of other organs, such as the kidneys and prostate gland.

  6. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images onto the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph's metadata, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image, applying a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the potential of the proposed algorithm framework for 3D urban map updating and enhancement.

  7. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

    PubMed Central

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-01-01

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. It also gives a fusion result superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137
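    The noise-suppression step above can be sketched with a soft-shrinkage function applied to fused high-frequency coefficients: small (likely noisy) values are zeroed and the rest are shrunk toward zero. The threshold is an assumed tuning parameter, and the paper's exact shrinkage function may differ:

    ```python
    # Minimal sketch of soft shrinkage on one high-frequency coefficient.
    def soft_shrink(coeff, threshold):
        magnitude = max(abs(coeff) - threshold, 0.0)
        return magnitude if coeff >= 0 else -magnitude

    assert soft_shrink(0.4, 1.0) == 0.0     # small coefficient: treated as noise
    assert soft_shrink(3.0, 1.0) == 2.0     # large coefficient: kept, shrunk
    assert soft_shrink(-3.0, 1.0) == -2.0   # sign is preserved
    ```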

  8. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.

    PubMed

    Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao

    2017-05-15

    This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. It also gives a fusion result superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems.

  9. Ultrahigh field magnetic resonance and colour Doppler real-time fusion imaging of the orbit--a hybrid tool for assessment of choroidal melanoma.

    PubMed

    Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver

    2014-05-01

    A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.

  10. Open set recognition of aircraft in aerial imagery using synthetic template models

    NASA Astrophysics Data System (ADS)

    Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert

    2017-05-01

    Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
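    The open-set decision itself reduces to rejecting inputs whose best class score falls below a calibrated threshold; a minimal sketch (function name, class labels, and threshold value are hypothetical):

```python
import numpy as np

def open_set_label(scores, class_names, threshold=0.0):
    """Open-set decision: take the best-scoring class, but reject as
    'unknown' when even the best score falls below the threshold."""
    best = int(np.argmax(scores))
    return class_names[best] if scores[best] >= threshold else "unknown"
```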

  11. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, and a decision map is then obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878

  12. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, and a decision map is then obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
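    The Sum-Modified-Laplacian focus measure underlying the high-frequency rule can be sketched as follows, using a simplified reading of the standard SML definition (interior pixels only, step size 1, window equal to the whole patch):

```python
import numpy as np

def sum_modified_laplacian(img):
    """Sum over the patch of the per-pixel modified Laplacian:
    |2I - I_left - I_right| + |2I - I_up - I_down|."""
    c = img[1:-1, 1:-1]
    ml = (np.abs(2 * c - img[:-2, 1:-1] - img[2:, 1:-1])
          + np.abs(2 * c - img[1:-1, :-2] - img[1:-1, 2:]))
    return ml.sum()
```

    A focused region gives a larger SML than a defocused one, which is what the fusion rule exploits when choosing between coefficients.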

  13. Assessing the Performance of Sensor Fusion Methods: Application to Magnetic-Inertial-Based Human Body Tracking

    PubMed Central

    Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2016-01-01

    Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor’s uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process. PMID:26821027

  14. Assessing the Performance of Sensor Fusion Methods: Application to Magnetic-Inertial-Based Human Body Tracking.

    PubMed

    Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2016-01-26

    Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process.
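    The kind of Kalman-based fusion benchmarked above builds on a predict/update cycle; a minimal scalar sketch, fusing an integrated gyro rate with an absolute orientation measurement (the noise variances q and r are illustrative, not the paper's tuning):

```python
def fuse_orientation(angle, p, gyro_rate, dt, meas, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter:
    integrate the gyro rate (prediction), then correct with an
    absolute orientation measurement (update)."""
    # predict: dead-reckon with the gyro, inflate the variance
    angle = angle + gyro_rate * dt
    p = p + q
    # update: blend in the measurement by the Kalman gain
    k = p / (p + r)
    angle = angle + k * (meas - angle)
    p = (1 - k) * p
    return angle, p
```

    Larger gyro noise q shifts the gain toward the absolute measurement, which is consistent with the paper's finding that gyroscope uncertainty dominates orientation accuracy.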

  15. Objective quality assessment for multiexposure multifocus image fusion.

    PubMed

    Hassen, Rania; Wang, Zhou; Salama, Magdy M A

    2015-09-01

    There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.
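    Of the three factors, contrast preservation might be approximated globally as follows; this is a crude proxy (the proposed index operates on local statistics), with the function name and normalization being assumptions:

```python
import numpy as np

def contrast_preservation(fused, inputs):
    """Crude global proxy for contrast preservation: compare the
    fused image's standard deviation with the best input's."""
    best = max(np.std(im) for im in inputs)
    return float(np.std(fused) / best) if best > 0 else 1.0
```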

  16. Cell segmentation in time-lapse fluorescence microscopy with temporally varying sub-cellular fusion protein patterns.

    PubMed

    Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M

    2009-01-01

    Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and the abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N⁴) to O(N²) for an image with N² pixels, making it practical and scalable for high-throughput microscopy imaging studies.

  17. Simultaneous skull-stripping and lateral ventricle segmentation via fast multi-atlas likelihood fusion

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoying; Kutten, Kwame; Ceritoglu, Can; Mori, Susumu; Miller, Michael I.

    2015-03-01

    In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels - the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term "fast MALF". The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.
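    Fusing per-atlas segmentations, as the pipeline above does via multi-atlas likelihood fusion, can be illustrated with a simple per-voxel majority vote; this is a simplified stand-in for MALF's likelihood fusion, not its actual estimator:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse integer label maps from several atlases by per-voxel
    majority vote over the candidate labels."""
    stack = np.stack(label_maps)                  # (n_atlases, ...) int labels
    n_labels = stack.max() + 1
    counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)                  # winning label per voxel
```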

  18. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low- light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low- light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. 
This paper discusses specific aspects of the DIRSIG radiance prediction for low- light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.

  19. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    PubMed

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines the ACM, multiple atlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, the ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and the other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
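    The volumetric overlap error reported above is the Jaccard-based measure used in the MICCAI 2007 challenge and is easily computed from binary masks:

```python
import numpy as np

def volume_overlap_error(seg, ref):
    """Volumetric overlap error in percent:
    100 * (1 - |A ∩ B| / |A ∪ B|); 0 for a perfect segmentation."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return 100.0 * (1.0 - inter / union) if union else 0.0
```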

  20. [Three-dimensional data fusion method for tooth crown and root based on curvature continuity algorithm].

    PubMed

    Zhao, Y J; Liu, Y; Sun, Y C; Wang, Y

    2017-08-18

    To explore a three-dimensional (3D) data fusion and integration method for optically scanned tooth crowns and cone beam CT (CBCT)-reconstructed tooth roots that yields a natural transition in the 3D profile. One case of mild dental crowding with full dentition was chosen from the orthodontics clinic. The CBCT data were acquired to reconstruct the dental model with tooth roots in Mimics 17.0 medical imaging software, and an optical impression was taken to obtain a dentition model with a high-precision physiological contour of the crowns using a Smart Optics dental scanner. The two models were registered in 3D based on the shared shape of the crowns in Geomagic Studio 2012 reverse engineering software. The model coordinate system was established by defining the occlusal plane. The crown-gingiva boundary was extracted from the optical scanning model manually, and the crown-root boundary was then generated by offsetting and projecting the crown-gingiva boundary onto the root model. After trimming the crown and root models, the 3D fusion model with a physiological-contour crown and natural root was finally formed by a curvature-continuity filling algorithm. In this study, 10 patients with mild dental crowding from the oral clinic were followed up with this method to obtain 3D crown-root fusion models, and 10 highly qualified doctors were invited to evaluate these fusion models subjectively. This study, based on commercial software platforms, preliminarily realized a 3D data fusion and integration method for optically scanned tooth crowns and CBCT tooth roots with a curvature-continuous shape transition. The 10 patients' 3D crown-root fusion models were constructed successfully by the method, and the average score of the doctors' subjective evaluation for these 10 models was 8.6 points (scale 0-10), which meant that all the fusion models could basically meet the needs of the oral clinic; it also showed that the method in our study is feasible and efficient for orthodontic research and clinical practice. The method of this study for 3D crown and root data fusion can obtain an integrated tooth or dental model closer to the natural shape. CBCT model calibration may further improve the precision of the fusion model. The suitability of this method for severe dental crowding and micromaxillary deformity needs further research.

  1. A New Approach to Image Fusion Based on Cokriging

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.

    2005-01-01

    We consider the image fusion problem involving remotely sensed data. We introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is first studied using PCA and wavelet-based fusion. We then propose utilizing a geostatistics-based interpolation method called cokriging as a new approach to image fusion.
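    The PCA fusion baseline mentioned above can be sketched in a few lines: project the multispectral bands onto their principal components, substitute the statistics-matched panchromatic band for the first component, and project back. This is a generic sketch of PCA pan-sharpening, not the authors' exact pipeline:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA pan-sharpening sketch. ms: (bands, h, w) multispectral cube,
    pan: (h, w) co-registered panchromatic band on the same grid."""
    b, h, w = ms.shape
    x = ms.reshape(b, -1).astype(float)
    mu = x.mean(axis=1, keepdims=True)
    xc = x - mu
    cov = xc @ xc.T / xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    vecs = vecs[:, np.argsort(vals)[::-1]]     # reorder so PC1 comes first
    if vecs[:, 0].sum() < 0:                   # fix eigenvector sign ambiguity
        vecs[:, 0] = -vecs[:, 0]
    pcs = vecs.T @ xc
    # match the pan band to PC1's statistics, then substitute it
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    return (vecs @ pcs + mu).reshape(b, h, w)
```

    In practice the MS cube would first be resampled to the PAN grid; cokriging, by contrast, replaces this fixed linear substitution with a geostatistical interpolation driven by the data's spatial covariance.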

  2. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  3. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
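    The per-pixel gradient selection at the heart of such schemes can be sketched as a maximum-magnitude rule; this is a simplification (the paper combines gradient maps across a multiresolution pyramid with dedicated filters):

```python
import numpy as np

def fuse_gradients(grads_a, grads_b):
    """Combine two gradient maps by keeping, at each pixel, the gradient
    vector with the larger magnitude. Each input is (2, h, w): dy, dx."""
    mag_a = np.hypot(grads_a[0], grads_a[1])
    mag_b = np.hypot(grads_b[0], grads_b[1])
    pick_a = mag_a >= mag_b
    return np.where(pick_a[None], grads_a, grads_b)
```

    Reconstructing an image from the fused gradient field (the pyramid collapse step in the paper) is the part this sketch omits.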

  4. Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM

    NASA Astrophysics Data System (ADS)

    Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang

    2016-03-01

    A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity components of the multi-spectral image and the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams. The SSIM index is used to evaluate the high-frequency information of the spectrum diagrams so as to assign the weights in the fusion process adaptively. After the new spectrum diagram is achieved according to the fusion rule, the final fusion image can be obtained by the inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high-quality fusion images.

  5. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail were compensated. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
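    The confidence-driven decision fusion can be sketched as a simple rule: trust the CNN where its top-class probability is high, otherwise fall back to the MLP. The threshold value and the fallback order are illustrative assumptions, not the published rule set:

```python
import numpy as np

def decision_fusion(cnn_probs, mlp_probs, conf_threshold=0.8):
    """Rule-based decision fusion sketch: per sample, pick the CNN's
    class where its confidence clears the threshold, else the MLP's."""
    cnn_conf = cnn_probs.max(axis=-1)
    use_cnn = cnn_conf >= conf_threshold
    return np.where(use_cnn,
                    cnn_probs.argmax(axis=-1),
                    mlp_probs.argmax(axis=-1))
```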

  6. Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET.

    PubMed

    Noonan, P J; Howard, J; Hallett, W A; Gunn, R N

    2015-11-21

    Medical imaging systems such as those used in positron emission tomography (PET) are capable of spatial resolutions that enable the imaging of small, functionally important brain structures. However, the quality of data from PET brain studies is often limited by subject motion during acquisition. This is particularly challenging for patients with neurological disorders or with dynamic research studies that can last 90 min or more. Restraining head movement during the scan does not eliminate motion entirely and can be unpleasant for the subject. Head motion can be detected and measured using a variety of techniques that either use the PET data itself or an external tracking system. Advances in computer vision arising from the video gaming industry could offer significant benefits when re-purposed for medical applications. A method for measuring rigid-body head motion using the Microsoft Kinect v2 is described, with results demonstrating ⩽0.5 mm spatial accuracy. Motion data is measured in real-time at 30 Hz using the KinectFusion algorithm. Non-rigid motion is detected using the residual alignment energy data of the KinectFusion algorithm, allowing unreliable motion to be discarded. Motion data is aligned to PET listmode data using pulse sequences injected into the PET/CT gantry, allowing for correction of rigid-body motion. Pilot data from a clinical dynamic PET/CT examination is shown.

  7. Repurposing the Microsoft Kinect for Windows v2 for external head motion tracking for brain PET

    NASA Astrophysics Data System (ADS)

    Noonan, P. J.; Howard, J.; Hallett, W. A.; Gunn, R. N.

    2015-11-01

    Medical imaging systems such as those used in positron emission tomography (PET) are capable of spatial resolutions that enable the imaging of small, functionally important brain structures. However, the quality of data from PET brain studies is often limited by subject motion during acquisition. This is particularly challenging for patients with neurological disorders or with dynamic research studies that can last 90 min or more. Restraining head movement during the scan does not eliminate motion entirely and can be unpleasant for the subject. Head motion can be detected and measured using a variety of techniques that either use the PET data itself or an external tracking system. Advances in computer vision arising from the video gaming industry could offer significant benefits when re-purposed for medical applications. A method for measuring rigid-body head motion using the Microsoft Kinect v2 is described, with results demonstrating ⩽0.5 mm spatial accuracy. Motion data is measured in real-time at 30 Hz using the KinectFusion algorithm. Non-rigid motion is detected using the residual alignment energy data of the KinectFusion algorithm, allowing unreliable motion to be discarded. Motion data is aligned to PET listmode data using pulse sequences injected into the PET/CT gantry, allowing for correction of rigid-body motion. Pilot data from a clinical dynamic PET/CT examination is shown.
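    Discarding unreliable motion samples by thresholding the residual alignment energy, as described above, might look like the following sketch (the threshold value and data layout are assumptions; KinectFusion only supplies the energy per frame):

```python
import numpy as np

def gate_motion(poses, residual_energy, max_energy):
    """Keep only the motion samples whose residual alignment energy
    is at or below the threshold; the rest are treated as unreliable."""
    keep = np.asarray(residual_energy) <= max_energy
    return [p for p, ok in zip(poses, keep) if ok]
```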

  8. Correlative Microscopy Combining Secondary Ion Mass Spectrometry and Electron Microscopy: Comparison of Intensity-Hue-Saturation and Laplacian Pyramid Methods for Image Fusion.

    PubMed

    Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana

    2017-10-17

    Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
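    Laplacian pyramid fusion can be sketched with naive 2x2 average pooling in place of proper Gaussian filtering; this is a generic illustration of the technique (the paper applies it to SIMS/EM image pairs with real filters), and it assumes image dimensions divisible by 2 at every level:

```python
import numpy as np

def _down(img):
    # 2x2 average pooling as a crude smooth-and-subsample
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    # nearest-neighbour upsampling back to the finer grid
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = _down(img)
        pyr.append(img - _up(low))   # band-pass detail at this level
        img = low
    pyr.append(img)                  # residual low-pass
    return pyr

def fuse_laplacian(a, b, levels=2):
    """Fuse two images: keep the larger-magnitude detail coefficient at
    each level, average the low-pass residuals, then collapse."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        out = _up(out) + detail
    return out
```

    Because detail selection happens per level, high-resolution EM structure can survive the merge instead of being washed out by a single hue/intensity substitution, which is the failure mode the paper reports for IHS fusion.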

  9. Generation, recognition, and consistent fusion of partial boundary representations from range images

    NASA Astrophysics Data System (ADS)

    Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang

    1994-10-01

    This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching, or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted on a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D Cartesian coordinates. Boundary representations (Breps) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching, and fusing Breps are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2½D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Breps corresponding to different range sensors or viewpoints can be merged into a consistent, complete, and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.

  10. Atmospheric electricity/meteorology analysis

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard; Buechler, Dennis

    1993-01-01

    This activity focuses on Lightning Imaging Sensor (LIS)/Lightning Mapper Sensor (LMS) algorithm development and applied research. Specifically, we are exploring the relationships between (1) global and regional lightning activity and rainfall, and (2) storm electrical development, physics, and the role of the environment. U.S. composite radar-rainfall maps and ground strike lightning maps are used to understand lightning-rainfall relationships at the regional scale. These observations are then compared to SSM/I brightness temperatures to simulate LIS/TRMM multi-sensor algorithm data sets. These data sets are supplied to the WETNET project archive. WSR-88D (NEXRAD) data are also used as they become available. The results of this study allow us to examine the information content from lightning imaging sensors in low-earth and geostationary orbits. Analysis of tropical and U.S. data sets continues. A neural network/sensor fusion algorithm is being refined for objectively associating lightning and rainfall with their parent storm systems. Total lightning data from interferometers are being used in conjunction with data from the national lightning network. A 6-year lightning/rainfall climatology has been assembled for LIS sampling studies.

  11. Cargo identification algorithms facilitating unmanned/unattended inspection at high throughput portals

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2007-10-01

    A simple model is presented of a possible inspection regimen applied to each leg of a cargo container's journey between its point of origin and destination. Several candidate modalities are proposed to be used at multiple remote locations to act as a pre-screen inspection as the target approaches a perimeter and as the primary inspection modality at the portal. Information from multiple data sets is fused to optimize the costs and performance of a network of such inspection systems. A series of image processing algorithms are presented that automatically process X-ray images of containerized cargo. The goal of this processing is to locate the container in a real-time stream of traffic traversing a portal without impeding the flow of commerce. Such processing may facilitate the inclusion of unmanned/unattended inspection systems in such a network. Several samples of the processing applied to data collected from deployed systems are included. Simulated data from a notional cargo inspection system with multiple sensor modalities and advanced data fusion algorithms are also included to show the potential increased detection and throughput performance of such a configuration.

  12. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is hindered by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments carried out on a palmprint database show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.
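    The final residual-comparison step described in this record can be illustrated with a simplified sketch: each class contributes a dictionary, the test sample is reconstructed from each dictionary, and the class with the smallest reconstruction residual wins. Plain least squares stands in here for the paper's subspace orthogonal matching pursuit, and the dictionaries and test vector are synthetic.

    ```python
    import numpy as np

    def classify_by_residual(dicts, y):
        """Assign y to the class whose dictionary reconstructs it with the
        smallest least-squares residual (a simplified stand-in for the
        grouping-sparse OMP step)."""
        best, best_res = None, np.inf
        for label, D in dicts.items():
            coef, *_ = np.linalg.lstsq(D, y, rcond=None)
            res = np.linalg.norm(y - D @ coef)
            if res < best_res:
                best, best_res = label, res
        return best

    rng = np.random.default_rng(1)
    # Two classes, each spanned by a small set of feature vectors (atoms).
    dicts = {0: rng.normal(size=(20, 4)), 1: rng.normal(size=(20, 4))}
    y = dicts[1] @ np.array([1.0, -0.5, 0.3, 0.2])  # lies in class 1's span
    pred = classify_by_residual(dicts, y)
    ```

    Since `y` lies exactly in the span of class 1's dictionary, its residual there is essentially zero, so class 1 is selected.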

  13. Segmentation of neuronal structures using SARSA (λ)-based boundary amendment with reinforced gradient-descent curve shape fitting.

    PubMed

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation, so conventional approaches to identifying neuron structures from EM images are not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to detect structure boundaries; we finally assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements for structure segmentation, were reduced to very low values. Comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging features related to brain diseases.
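    The per-level boundary-detection step described above (LoG filtering of the pyramid levels) can be sketched at a single scale. This is an illustrative simplification: the Gaussian pyramid, cross-level fusion, and SARSA(λ) amendment are omitted, and the toy "EM image" (a bright square) is an assumption.

    ```python
    import numpy as np

    def log_kernel(sigma, size):
        """Discrete Laplacian-of-Gaussian kernel, normalized to zero sum."""
        ax = np.arange(size) - size // 2
        X, Y = np.meshgrid(ax, ax)
        r2 = X**2 + Y**2
        k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
        return k - k.mean()   # zero-sum: flat regions respond with 0

    def convolve2d(img, k):
        """Naive 2D filtering with edge padding (kernel is symmetric)."""
        kh, kw = k.shape
        pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for i in range(kh):
            for j in range(kw):
                out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
        return out

    # A toy "EM image": a bright square structure on a dark background.
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0
    response = convolve2d(img, log_kernel(sigma=1.0, size=7))
    boundary = np.abs(response) > 0.1 * np.abs(response).max()
    ```

    The zero-sum kernel guarantees that flat interior and background regions produce (near-)zero response, so only the structure boundary survives the threshold.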

  14. Segmentation of Neuronal Structures Using SARSA (λ)-Based Boundary Amendment with Reinforced Gradient-Descent Curve Shape Fitting

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong

    2014-01-01

    The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation, so conventional approaches to identifying neuron structures from EM images are not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to detect structure boundaries; we finally assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA (λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements for structure segmentation, were reduced to very low values. Comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging features related to brain diseases. PMID:24625699

  15. Comparative evaluation of three-dimensional Gd-EOB-DTPA-enhanced MR fusion imaging with CT fusion imaging in the assessment of treatment effect of radiofrequency ablation of hepatocellular carcinoma.

    PubMed

    Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Hori, Masatoshi; Fukuda, Kazuto; Sawai, Yoshiyuki; Kogita, Sachiyo; Fujita, Norihiko; Takehara, Tetsuo; Murakami, Takamichi

    2015-01-01

    To assess the feasibility of fusion of pre- and post-ablation gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-MRI) for evaluating the effects of radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC), compared with similarly fused CT images. This retrospective study included 67 patients with 92 HCCs treated with RFA. Fusion images of pre- and post-RFA dynamic CT, and pre- and post-RFA Gd-EOB-DTPA-MRI, were created using a rigid registration method. The minimal ablative margin measured on fusion imaging was categorized into three groups: (1) tumor protruding outside the ablation zone boundary, (2) ablative margin 0-<5.0 mm beyond the tumor boundary, and (3) ablative margin ≥5.0 mm beyond the tumor boundary. The categorization of minimal ablative margins was compared between CT and MR fusion images. In 57 (62.0%) HCCs, treatment evaluation was possible both on CT and MR fusion images, and the overall agreement between them for the categorization of minimal ablative margin was good (κ coefficient = 0.676, P < 0.01). MR fusion imaging enabled treatment evaluation in a significantly larger number of HCCs than CT fusion imaging (86/92 [93.5%] vs. 62/92 [67.4%], P < 0.05). Fusion of pre- and post-ablation Gd-EOB-DTPA-MRI is feasible for treatment evaluation after RFA. It may enable accurate treatment evaluation in cases where CT fusion imaging is not helpful.

  16. Image-fusion of MR spectroscopic images for treatment planning of gliomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Jenghwa; Thakur, Sunitha; Perera, Gerard

    2006-01-15

    ¹H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8×8×10 mm³ to 12×12×10 mm³. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the poorer-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that of the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs.
The image-fusion protocol was tested on 12 (six high-grade and six low-grade) glioma patients. The average agreement of the MRSI volume position on the screen-dumped MRSI images and the merged MRSI images was 0.29 mm with a standard deviation of 0.07 mm. Of all the voxels with Cho/Cr grade one or above, the distribution of Cho/Cr grades was found to correlate with the glioma grade from pathologic findings and is consistent with literature results indicating Cho/Cr elevation as a marker for malignancy. In conclusion, an image-fusion protocol was developed that successfully incorporates MRSI information into the IMRT treatment plan for glioma.

  17. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technology for capturing panoramic (360 degree) three-dimensional information in a real environment has many applications in fields such as virtual and complex reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired using a regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  18. Smart photonic networks and computer security for image data

    NASA Astrophysics Data System (ADS)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower-level network design and the higher-level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.

  19. Vehicle logo recognition using multi-level fusion model

    NASA Astrophysics Data System (ADS)

    Ming, Wei; Xiao, Jianli

    2018-04-01

    Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm with a hierarchical framework consisting of two fusion levels. At the first level, a feature fusion model maps the original features to a higher-dimensional feature space, in which the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to improve the accuracy and robustness of the recognition results. Extensive experiments demonstrate that the proposed algorithm achieves high recognition accuracy and works robustly.
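    A second-level weighted voting strategy of the kind this record describes can be sketched in a few lines: each base recognizer's predicted label receives that recognizer's weight, and the label with the highest total wins. The logo labels and weights below are hypothetical, not taken from the paper.

    ```python
    from collections import Counter

    def weighted_vote(predictions, weights):
        """Combine classifier outputs by weighted voting: each classifier's
        predicted label accumulates its weight; the highest total wins."""
        totals = Counter()
        for label, w in zip(predictions, weights):
            totals[label] += w
        return totals.most_common(1)[0][0]

    # Three hypothetical recognizers; the two more reliable ones agree.
    preds = ["toyota", "honda", "toyota"]
    weights = [0.9, 0.7, 0.8]
    winner = weighted_vote(preds, weights)
    ```

    Here "toyota" accumulates 1.7 against "honda"'s 0.7, so the majority of reliable recognizers prevails even though they disagree with one base classifier.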

  20. Evaluation of Effective Parameters on Quality of Magnetic Resonance Imaging-computed Tomography Image Fusion in Head and Neck Tumors for Application in Treatment Planning.

    PubMed

    Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza

    2017-01-01

    In radiation therapy, computed tomography (CT) simulation is used in treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work identifies the practical issues in combining CT and MRI images in real clinical cases and evaluates the effect of various factors on image fusion quality. In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated: the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. According to the results, the dominant factor in image fusion quality was the difference in slice gap between CT and MRI images (cor = 0.86, P < 0.005), and the second was the angle between CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient position during MRI imaging should be chosen to be consistent with the CT images of the patient in terms of location and angle.

  1. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for depth-of-field extension in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focus plane across the scene. Simply applying an image fusion method to the elemental images, which hold rich parallax information, does not work effectively because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
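    The block-based fusion step mentioned above can be sketched as follows: for each block, keep the block from whichever source image is locally sharpest. Gradient energy as the sharpness measure and the synthetic half-blurred checkerboard images are assumptions for illustration, not the authors' exact method.

    ```python
    import numpy as np

    def block_fusion(images, block=8):
        """Per-block multi-focus fusion: for each block, keep the block from
        the source image with the highest gradient energy (sharpness)."""
        h, w = images[0].shape
        out = np.zeros((h, w))
        for i in range(0, h, block):
            for j in range(0, w, block):
                best, best_sharp = None, -1.0
                for img in images:
                    tile = img[i:i + block, j:j + block].astype(float)
                    gy, gx = np.gradient(tile)
                    sharp = np.mean(gx**2 + gy**2)   # local gradient energy
                    if sharp > best_sharp:
                        best, best_sharp = tile, sharp
                out[i:i + block, j:j + block] = best
        return out

    # Two synthetic exposures: each is sharp (checkerboard texture) in one
    # half of the frame and defocused (flat grey) in the other half.
    texture = np.indices((32, 32)).sum(axis=0) % 2 * 1.0
    a = texture.copy(); a[:, 16:] = 0.5     # right half blurred away
    b = texture.copy(); b[:, :16] = 0.5     # left half blurred away
    fused = block_fusion([a, b], block=8)
    ```

    Every block is taken from the source where the texture survives, so the fused result recovers the fully sharp scene.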

  2. 3D second harmonic generation imaging tomography by multi-view excitation

    PubMed Central

    Campbell, Kirby R.; Wen, Bruce; Shelton, Emily M.; Swader, Robert; Cox, Benjamin L.; Eliceiri, Kevin; Campagnola, Paul J.

    2018-01-01

    Biological tissues have complex 3D collagen fiber architecture that cannot be fully visualized by conventional second harmonic generation (SHG) microscopy due to electric dipole considerations. We have developed a multi-view SHG imaging platform that successfully visualizes all orientations of collagen fibers. This is achieved by rotating tissues relative to the plane of incidence of the excitation laser, with the complete fibrillar structure then visualized following registration and reconstruction. We evaluated high-frequency and Gaussian-weighted fusion reconstruction algorithms, and found that the former performs better in terms of the resulting resolution. The new approach is a first step toward SHG tomography. PMID:29541654

  3. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.

  4. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
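    The two pixel-level baselines this record compares against, averaging and PCA-weighted fusion, can be sketched as follows. The random `visible`/`infrared` arrays are synthetic stand-ins for the two source bands; a common PCA-fusion baseline weights each source by the components of the leading eigenvector of the joint pixel covariance.

    ```python
    import numpy as np

    def average_fusion(a, b):
        """Pixel-wise mean of the two source bands."""
        return 0.5 * (a + b)

    def pca_fusion(a, b):
        """Weight each source by the first principal component of the joint
        pixel distribution (a common pixel-level fusion baseline)."""
        X = np.stack([a.ravel(), b.ravel()])
        vals, vecs = np.linalg.eigh(np.cov(X))
        w = np.abs(vecs[:, np.argmax(vals)])   # dominant eigenvector
        w = w / w.sum()                        # normalize to convex weights
        return w[0] * a + w[1] * b

    rng = np.random.default_rng(2)
    visible = rng.random((16, 16))
    infrared = rng.random((16, 16))
    avg = average_fusion(visible, infrared)
    pca = pca_fusion(visible, infrared)
    ```

    Because the PCA weights are non-negative and sum to one, both fusions are convex combinations: each fused pixel lies between the two source values.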

  5. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    Optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the greatest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database achieves 81.5% accuracy, while a higher accuracy of 99% is obtained on the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.

  6. TU-E-BRB-03: Overview of Proposed TG-132 Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K.

    2015-06-15

    Deformable image registration (DIR) is developing rapidly and is poised to substantially improve dose fusion accuracy for adaptive and retreatment planning and motion management, and PET fusion to enhance contour delineation for treatment planning. However, DIR dose warping accuracy is difficult to quantify in general, and particularly difficult to quantify on a patient-specific basis. As clinical DIR options become more widely available, there is an increased need to understand the implications of incorporating DIR into clinical workflow. Several groups have assessed DIR accuracy in clinically relevant scenarios, but no comprehensive review material is yet available. This session will also discuss aspects of the official report of AAPM Task Group 132 on the Use of Image Registration and Data Fusion Algorithms and Techniques in Radiotherapy Treatment Planning, which provides recommendations for clinical DIR use. We will summarize and compare various commercial DIR software options, outline successful clinical techniques, show specific examples with discussion of appropriate and inappropriate applications of DIR, discuss the clinical implications of DIR, provide an overview of current DIR error analysis research, review QA options and research phantom development, and present TG-132 recommendations. Learning Objectives: Compare/contrast commercial DIR software and QA options; overview clinical DIR workflow for retreatment; understand uncertainties introduced by DIR; review TG-132 proposed recommendations.

  7. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is somehow better than each of the inputs. In the case of multifocus fusion, the input images capture the same scene but differ in focus distance; the aim is to obtain an image that is sharp in all its areas. There are several different approaches and methods used to solve this problem, but it is a common question which one is best. This work investigates whether commonly used measures can serve as quality measures for evaluating fusion results.
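    Two measures commonly considered for no-reference evaluation of multifocus fusion results are histogram entropy (information content) and average gradient (sharpness). A minimal sketch, with synthetic stand-in images (the specific measures studied in the record are not listed in the abstract):

    ```python
    import numpy as np

    def entropy(img, bins=64):
        """Shannon entropy of the grey-level histogram (higher = more information)."""
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def average_gradient(img):
        """Mean gradient magnitude, often used as a sharpness proxy."""
        gy, gx = np.gradient(img.astype(float))
        return float(np.mean(np.sqrt(gx**2 + gy**2)))

    rng = np.random.default_rng(3)
    sharp = rng.random((32, 32))               # high-variation stand-in image
    blurred = np.full((32, 32), sharp.mean())  # completely defocused stand-in
    ```

    A good fusion result should score higher than its defocused inputs on both measures; a constant (fully defocused) image has zero entropy and zero average gradient.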

  8. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by image fusion systems. This review updates the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduce several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed; papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected, and duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find increasingly wide application in clinical practice.

  9. Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies

    DTIC Science & Technology

    2013-10-01

Award Number: W81XWH-12-1-0597. TITLE: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate...

  10. A visual analytic framework for data fusion in investigative intelligence

    NASA Astrophysics Data System (ADS)

    Cai, Guoray; Gross, Geoff; Llinas, James; Hall, David

    2014-05-01

Intelligence analysis depends on data fusion systems to provide capabilities for detecting and tracking important objects, events, and their relationships in connection with an analytical situation. However, automated data fusion technologies are not mature enough to offer reliable and trustworthy information for situation awareness. Given the trend of increasingly sophisticated data fusion algorithms and the loss of transparency in the data fusion process, analysts are left out of the data fusion cycle with little to no control over, or confidence in, the fusion outcome. Following the recent rethinking of data fusion as a human-centered process, this paper proposes a conceptual framework for an alternative data fusion architecture. The idea is inspired by recent advances in our understanding of human cognitive systems, the science of visual analytics, and the latest thinking about human-centered data fusion. Our conceptual framework is supported by an analysis of the limitations of existing fully automated data fusion systems, where the effectiveness of important algorithmic decisions depends on the availability of expert knowledge or knowledge of the analyst's mental state in an investigation. The success of this effort will result in next-generation data fusion systems that can be better trusted while maintaining high throughput.

  11. Evaluation of Effective Parameters on Quality of Magnetic Resonance Imaging-computed Tomography Image Fusion in Head and Neck Tumors for Application in Treatment Planning

    PubMed Central

    Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza

    2017-01-01

Background: In radiation therapy, computed tomography (CT) simulation is used for treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work identified the practical issues in combining CT and MRI images in real clinical cases and evaluated the effect of various factors on image fusion quality. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the feasibility and quality of image fusion was evaluated. These parameters included the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the dominant factor affecting the quality of image fusion was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005); the second factor was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient's position during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle. PMID:29387672
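The reported `cor` values (0.86 and 0.75) are sample Pearson correlation coefficients between an acquisition parameter and fusion quality. A minimal sketch of such a computation follows; the per-patient numbers below are illustrative placeholders, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-patient measurements: slice-gap difference vs. fusion quality (%).
gap_diff = [0.5, 1.0, 2.0, 3.0, 4.5, 5.0]
quality = [95, 90, 70, 55, 30, 22]
r = pearson_r(gap_diff, quality)  # strongly negative: larger gap difference, worse fusion
```

A larger |r| with quality (as for slice gap here) marks the parameter as the dominant factor.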

  12. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images, but also represents their irregular areas well. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

  13. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. 
Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721
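The confidence-weighted voting compared above can be sketched as follows. This is a minimal stand-in for the authors' fusion framework, not their implementation; the labels and per-slice confidences are hypothetical:

```python
from collections import defaultdict

def weighted_vote(labels, weights):
    """Fuse one pixel's labels from several raters by confidence-weighted voting.

    labels  -- list of integer labels, one per rater
    weights -- list of self-assessed confidences in [0, 1], one per rater
    """
    score = defaultdict(float)
    for lab, w in zip(labels, weights):
        score[lab] += w
    return max(score, key=score.get)

def fuse_slice(rater_slices, rater_conf):
    """Fuse a 2-D slice labeled by several raters, each with one per-slice confidence."""
    rows, cols = len(rater_slices[0]), len(rater_slices[0][0])
    return [[weighted_vote([s[r][c] for s in rater_slices], rater_conf)
             for c in range(cols)] for r in range(rows)]

# Three raters label a 2x2 slice; the confident rater (0.9) outweighs two unsure ones.
slices = [[[1, 0], [0, 0]], [[1, 0], [0, 0]], [[1, 1], [1, 1]]]
conf = [0.2, 0.2, 0.9]
fused = fuse_slice(slices, conf)
```

Simple majority voting is the special case where every weight is 1; the study's statistical fusion additionally estimates rater performance from the data rather than taking the self-assessment at face value.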

  14. State-of-the-Art Fusion-Finder Algorithms Sensitivity and Specificity

    PubMed Central

    Carrara, Matteo; Beccuti, Marco; Lazzarato, Fulvio; Cavallo, Federica; Cordero, Francesca; Donatelli, Susanna; Calogero, Raffaele A.

    2013-01-01

Background. Gene fusions arising from chromosomal translocations have been implicated in cancer. RNA-seq has the potential to discover such rearrangements generating functional proteins (chimeras/fusions). Recently, many methods for chimera detection have been published. However, the specificity and sensitivity of those tools have not been extensively investigated in a comparative way. Results. We tested eight fusion-detection tools (FusionHunter, FusionMap, FusionFinder, MapSplice, deFuse, Bellerophontes, ChimeraScan, and TopHat-fusion) to detect fusion events using synthetic and real datasets encompassing chimeras. A comparative analysis run only on synthetic data could generate misleading results, since we found no counterpart in the real dataset. Furthermore, most tools report a very high number of false positive chimeras. In particular, the most sensitive tool, ChimeraScan, reports a large number of false positives that we were able to significantly reduce by devising and applying two filters to remove fusions not supported by fusion junction-spanning reads or encompassing large intronic regions. Conclusions. The discordant results obtained using synthetic and real datasets suggest that synthetic datasets encompassing fusion events may not fully capture the complexity of an RNA-seq experiment. Moreover, fusion detection tools are still limited in sensitivity or specificity; thus, there is room for further improvement in fusion-finder algorithms. PMID:23555082
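Post-detection filters of the kind described (junction-spanning-read support, and exclusion of candidates that likely span a large intron rather than a true rearrangement) might look roughly like this. The field names and thresholds are illustrative assumptions, not ChimeraScan's actual output format:

```python
def filter_chimeras(candidates, min_spanning=2, max_intron=100_000):
    """Keep only fusion candidates with junction-spanning read support whose
    partners are not so close on the same chromosome that the event likely
    reflects a large intron / read-through rather than a rearrangement.

    Each candidate is a dict with hypothetical keys:
      'spanning_reads' -- reads crossing the fusion junction
      'same_chrom'     -- True if both partners lie on the same chromosome
      'distance'       -- genomic distance between partners (same chromosome)
    """
    kept = []
    for c in candidates:
        if c['spanning_reads'] < min_spanning:
            continue  # no junction-spanning evidence
        if c['same_chrom'] and c['distance'] < max_intron:
            continue  # likely intronic read-through, not a translocation
        kept.append(c)
    return kept

# Hypothetical candidate list: only the first survives both filters.
candidates = [
    {'spanning_reads': 5, 'same_chrom': False, 'distance': 0},
    {'spanning_reads': 0, 'same_chrom': False, 'distance': 0},
    {'spanning_reads': 8, 'same_chrom': True, 'distance': 50_000},
]
kept = filter_chimeras(candidates)
```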

  15. Computational Performance of Intel MIC, Sandy Bridge, and GPU Architectures: Implementation of a 1D c++/OpenMP Electrostatic Particle-In-Cell Code

    DTIC Science & Technology

    2014-05-01

    fusion, space and astrophysical plasmas, but still the general picture can be presented quite well with the fluid approach [6, 7]. The microscopic...purpose computing CPU for algorithms where processing of large blocks of data is done in parallel. The reason for that is the GPU’s highly effective...parallel structure. Most of the image and video processing computations involve heavy matrix and vector op- erations over large amounts of data and

  16. Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Bila, Z.; Reznicek, J.; Pavelka, K.

    2013-07-01

This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using both datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. First, we describe the process of data acquisition, then the processing of both datasets into a proper format for the subsequent fusion, and finally the fusion itself. The fusion process can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a suitable detector, which yields a transformation matrix that maps the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.

  17. Granule mobility, fusion frequency and insulin secretion are differentially affected by insulinotropic stimuli.

    PubMed

    Schumacher, Kirstin; Matz, Magnus; Brüning, Dennis; Baumann, Knut; Rustenbeck, Ingo

    2015-05-01

The pre-exocytotic behavior of insulin granules was studied against the background of the entirety of submembrane granules in MIN6 cells, and the characteristics were compared with the macroscopic secretion pattern and the cytosolic Ca(2+) concentration of MIN6 pseudo-islets at 22°C, 32°C and 37°C. The mobility of granules labeled by insulin-EGFP and the fusion events were assessed by TIRF microscopy utilizing an observer-independent algorithm. In the z-dimension, 40 mM K(+) or 30 mM glucose increased the granule turnover. The effect of high K(+) was quickly reversible. The increase by glucose was more sustained and modified the efficacy of a subsequent K(+) stimulus. The effect size of glucose increased with physiological temperature whereas that of high K(+) did not. The mobility in the x/y-dimension and the fusion rates were little affected by the stimuli, in contrast to secretion. Fusion and secretion, however, had the same temperature dependence. Granules that appeared and fused within one image sequence had significantly larger caging diameters than pre-existent granules that underwent fusion. These in turn had a different mobility than residence-matched non-fusing granules. In conclusion, delivery to the membrane, tethering, and fusion of granules are differently affected by insulinotropic stimuli. Fusion rates and secretion do not appear to be tightly coupled. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Automated Discovery of Long Intergenic RNAs Associated with Breast Cancer Progression

    DTIC Science & Technology

    2012-02-01

    manuscript in preparation), (2) development and publication of an algorithm for detecting gene fusions in RNA-Seq data [1], and (3) discovery of outlier long...subjected to de novo assembly algorithms to discover novel transcripts representing either unannotated genes or novel somatic mutations such as gene...fusions. To this end the P.I. developed and published a novel algorithm called ChimeraScan to facilitate the discovery and validation of gene

  19. Image registration for multi-exposed HDRI and motion deblurring

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok

    2009-02-01

In multi-exposure based image fusion, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in brightness and contain over-/under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not in a linear relationship; we cannot perfectly equalize or normalize the brightness of each image, which leads to unstable and inaccurate alignment results. To solve this problem, we applied a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we described the characteristics of multi-exposed input images from the registration viewpoint and also analyzed the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over a 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined by various experiments on real HDR and motion deblurring cases using a hand-held camera.
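Mutual information measures statistical dependence between the intensity distributions of two images, which is why it tolerates the nonlinear brightness differences described above. A minimal histogram-based estimate (a sketch, not the paper's registration pipeline) can be written as:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (in nats) between two equal-size gray images
    (lists of rows, pixel values 0-255), estimated from a joint histogram
    of quantized intensities."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    qa = [min(p * bins // 256, bins - 1) for p in flat_a]
    qb = [min(p * bins // 256, bins - 1) for p in flat_b]
    n = len(qa)
    joint = Counter(zip(qa, qb))
    pa, pb = Counter(qa), Counter(qb)
    mi = 0.0
    for (a, b), c in joint.items():
        pab = c / n
        mi += pab * math.log(pab * n * n / (pa[a] * pb[b]))
    return mi

# Identical images share all their information; a constant image shares none.
a = [[10, 200, 40, 90],
     [250, 30, 120, 180],
     [60, 140, 220, 20],
     [170, 80, 0, 240]]
```

In registration, one would search over candidate shifts and keep the one that maximizes this score.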

  20. Horror Image Recognition Based on Context-Aware Multi-Instance Learning.

    PubMed

    Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng

    2015-12-01

    Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.
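The propagation of contextual cues among adjacent regions via a random walk on a contextual graph could be sketched as follows. The restart formulation and parameters are assumptions for illustration; the abstract does not specify the authors' exact walk:

```python
def walk_context(adj, seed, alpha=0.85, iters=100):
    """Propagate per-region scores among adjacent image regions with a random
    walk with restart. adj[i][j] is the edge weight between regions i and j;
    seed[i] is region i's initial score (e.g., from a saliency model)."""
    n = len(adj)
    # Row-normalize the adjacency matrix into a transition matrix.
    trans = []
    for row in adj:
        s = sum(row)
        trans.append([w / s if s else 0.0 for w in row])
    p = seed[:]
    for _ in range(iters):
        # Mix walk diffusion (alpha) with restart to the seed scores (1 - alpha).
        p = [(1 - alpha) * seed[i]
             + alpha * sum(p[j] * trans[j][i] for j in range(n))
             for i in range(n)]
    return p

# Three regions in a chain; only region 0 is seeded, and its score
# diffuses to neighbors, decaying with graph distance.
scores = walk_context([[0, 1, 0], [1, 0, 1], [0, 1, 0]], [1.0, 0.0, 0.0])
```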

  1. Object detection system based on multimodel saliency maps

    NASA Astrophysics Data System (ADS)

    Guo, Ya'nan; Luo, Chongfan; Ma, Yide

    2017-03-01

Detection of visually salient image regions is extensively applied in computer vision and computer graphics, such as object detection, adaptive compression, and object recognition, but any single model has its limitations on various images. In our work, we therefore establish a method based on multimodel saliency maps to detect objects, which intelligently absorbs the merits of various individual saliency detection models to achieve promising results. The method can be roughly divided into three steps: in the first step, we propose a decision-making system to evaluate saliency maps obtained by seven competitive methods and select only the three most valuable saliency maps; in the second step, we introduce a heterogeneous PCNN algorithm to obtain three prime foregrounds, and a self-designed nonlinear fusion method is then proposed to merge these saliency maps; in the last step, the adaptive improved and simplified PCNN (SPCNN) model is used to detect the object. Our proposed method constitutes an object detection system for different occasions that requires no training and is simple and highly efficient. The proposed saliency fusion technique performs well over a broad range of images and broadens the applicability range by fusing different individual saliency models, making the proposed system a strong one. Moreover, the proposed adaptive improved SPCNN model stems from Eckhorn's neuron model, which is well suited to image segmentation because of its biological background, and all of its parameters adapt to the image information.
We extensively evaluate our algorithm on a classical salient object detection database, and the experimental results demonstrate that the aggregation of saliency maps outperforms the best individual saliency model in all cases, yielding the highest precision of 89.90%, a better recall rate of 98.20%, the greatest F-measure of 91.20%, and the lowest mean absolute error of 0.057; the value of the proposed saliency evaluation measure EHA reaches 215.287. We believe our method can be applied to diverse applications in the future.
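A generic nonlinear pixel-wise fusion of saliency maps can be sketched as below. This is a stand-in for the authors' self-designed rule, which the abstract does not specify; the contrast exponent `gamma` is an assumption:

```python
def fuse_saliency(maps, gamma=2.0):
    """Fuse several saliency maps (values in [0, 1]) pixel-wise with a simple
    nonlinear rule: a sigmoid-like contrast curve on the mean response, which
    boosts pixels that several models agree are salient and suppresses pixels
    that most models reject."""
    rows, cols = len(maps[0]), len(maps[0][0])
    fused = []
    for r in range(rows):
        row = []
        for c in range(cols):
            mean = sum(m[r][c] for m in maps) / len(maps)
            row.append(mean ** gamma / (mean ** gamma + (1 - mean) ** gamma))
        fused.append(row)
    return fused

# Three 1x2 maps: all models agree pixel 0 is salient and pixel 1 is not,
# so the nonlinear rule pushes the fused values toward 1 and 0 respectively.
maps = [[[0.9, 0.1]], [[0.8, 0.2]], [[1.0, 0.0]]]
fused = fuse_saliency(maps)
```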

  2. Free-breathing diffusion tensor imaging and tractography of the human heart in healthy volunteers using wavelet-based image fusion.

    PubMed

    Wei, Hongjiang; Viallon, Magalie; Delattre, Benedicte M A; Moulin, Kevin; Yang, Feng; Croisille, Pierre; Zhu, Yuemin

    2015-01-01

Free-breathing cardiac diffusion tensor imaging (DTI) is a promising but challenging technique for the study of fiber structures of the human heart in vivo. This work proposes a clinically compatible and robust technique to provide three-dimensional (3-D) fiber architecture properties of the human heart. To this end, 10 short-axis slices were acquired across the entire heart using a multiple shifted trigger delay (TD) strategy under free-breathing conditions. Interscan motion was first corrected automatically using a nonrigid registration method. Then, two post-processing schemes were optimized and compared: an algorithm based on principal component analysis (PCA) filtering and temporal maximum intensity projection (TMIP), and an algorithm that uses the wavelet-based image fusion (WIF) method. The two methods were applied to the registered diffusion-weighted (DW) images to cope with intrascan motion-induced signal loss. The tensor fields were finally calculated, from which fractional anisotropy (FA), mean diffusivity (MD), and 3-D fiber tracts were derived and compared. The comparison of the FA values (FA(PCATMIP) = 0.45±0.10, FA(WIF) = 0.42±0.05, P = 0.06) showed no significant difference, while the MD values (MD(PCATMIP) = 0.83±0.12×10(-3) mm(2)/s, MD(WIF) = 0.74±0.05×10(-3) mm(2)/s, P = 0.028) were significantly different. Improved helix angle variation through the myocardial wall, reflecting the rotational characteristic of cardiac fibers, was observed with WIF. This study demonstrates that the combination of multiple shifted TD acquisitions and dedicated post-processing makes it feasible to retrieve in vivo cardiac tractographies from free-breathing DTI acquisitions. Substantial improvements were observed using the WIF method instead of the previously published PCATMIP technique.

  3. Sensor fusion for synthetic vision

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Larimer, J.; Ahumada, A.

    1991-01-01

Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain database to facilitate visually guided flight and ground operations in low-visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images that differ in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.

  4. Ultrasound fusion image error correction using subject-specific liver motion model and automatic image registration.

    PubMed

    Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi

    2016-12-01

Ultrasound fusion imaging is an emerging tool and benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) Build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then using an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38 mm to 4.26±0.78 mm by using the motion model only. The fusion error further decreased to 0.63±0.53 mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58 mm to 6.12±2.90 mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33 mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the error correction dependency on the initial registration of ultrasound and MR images.
Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks.

    PubMed

    Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B

    2013-03-01

Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities, which would otherwise complicate the feature selection process and could introduce reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
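A much-simplified, one-level undecimated (à trous) fusion in 1-D illustrates the redundancy such schemes exploit: because the transform is not subsampled, approximation plus detail reconstructs the signal exactly, and detail coefficients can be fused with a max-absolute rule. The smoothing kernel and fusion rule below are generic textbook choices, not the paper's factorized filter banks:

```python
def atrous_level(signal, h=(0.25, 0.5, 0.25)):
    """One undecimated (a trous) decomposition level with a small smoothing
    kernel: returns (approximation, detail). Borders handled by clamping."""
    n = len(signal)
    approx = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(h):
            j = min(max(i + k - 1, 0), n - 1)
            acc += w * signal[j]
        approx.append(acc)
    detail = [s - a for s, a in zip(signal, approx)]
    return approx, detail

def fuse_uwt(sig_a, sig_b):
    """Fuse two signals: average the approximations, pick the max-|coefficient|
    details, then reconstruct by simple addition (the UWT is redundant, so
    approximation + detail = signal)."""
    aa, da = atrous_level(sig_a)
    ab, db = atrous_level(sig_b)
    approx = [(x + y) / 2 for x, y in zip(aa, ab)]
    detail = [x if abs(x) >= abs(y) else y for x, y in zip(da, db)]
    return [a + d for a, d in zip(approx, detail)]

# A sharp feature in one input survives fusion with a flat input.
fused = fuse_uwt([0.0, 0.0, 10.0, 0.0, 0.0], [2.0, 2.0, 2.0, 2.0, 2.0])
```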

  6. Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.

    2012-02-01

Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in the objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting, and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are variations in overall image intensity across scanners, the presence of a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 +/- 0.33 mm and 3.10 +/- 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided a centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
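The intensity insensitivity of normalized gradient fields can be illustrated in 1-D: gradients are normalized to unit magnitude (up to a regularization constant) before correlation, so a multiplicative intensity change leaves the similarity score essentially unchanged. This is a simplified sketch, not the paper's 3-D template-matching implementation:

```python
import math

def ngf(signal, eps=1e-3):
    """Normalized gradient field of a 1-D signal: each gradient value divided
    by its magnitude (regularized by eps), so only edge locations and
    directions remain, not edge strengths."""
    grad = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    return [g / math.sqrt(g * g + eps * eps) for g in grad]

def ngf_score(a, b):
    """Cross-correlation of normalized gradient fields; insensitive to
    multiplicative intensity changes between the two signals."""
    return sum(x * y for x, y in zip(ngf(a), ngf(b)))

# Tripling the intensity of one signal barely changes the score,
# while misaligning the edges destroys it.
a = [0, 0, 5, 5, 0, 0]
b = [3 * x for x in a]       # same structure, different intensity
shifted = [0, 5, 5, 0, 0, 0]  # same structure, misaligned
```

Localization then amounts to sliding a template and keeping the position with the highest score.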

  7. Enhancing radiological volumes with symbolic anatomy using image fusion and collaborative virtual reality.

    PubMed

    Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L

    2004-01-01

Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in data interpretation and treatment planning. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons, in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion, and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted in the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data.
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.

  8. Neural Network Based Sensory Fusion for Landmark Detection

    NASA Technical Reports Server (NTRS)

    Kumbla, Kishan -K.; Akbarzadeh, Mohammad R.

    1997-01-01

NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles that can navigate in an unstructured, unknown, and uncertain environment. Landmark-based navigation is a new area of research that differs from traditional goal-oriented navigation, in which a mobile robot starts from an initial point and reaches a destination in accordance with a pre-planned path. Landmark-based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current landmark-navigation algorithms, however, pose several constraints. First, they require large memories to store the images. Second, the task of comparing the images using traditional methods is computationally intensive, and consequently real-time implementation is difficult. The method proposed here consists of three stages. The first stage utilizes a heuristic-based algorithm to identify significant objects. The second stage utilizes a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the classification results of the neural networks for efficient and intelligent navigation.

  9. Image fusion based on millimeter-wave for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui

    2010-11-01

    This paper describes a novel multi-sensor image fusion technique for concealed weapon detection (CWD). Because clothing is largely transparent at millimeter-wave frequencies, a millimeter-wave radiometer can image and distinguish contraband concealed beneath clothes, such as guns, knives, and detonators. We therefore adopt passive millimeter-wave (PMMW) imaging for airport security. However, owing to the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter-wave image has low optical resolution, which cannot meet the needs of practical application. Therefore, the visible image (VI), which has higher resolution, is fused with the millimeter-wave image to enhance readability. Before fusion, a novel image pre-processing step specific to the fusion of millimeter-wave and visible images is applied, and the fusion itself uses multiresolution analysis (MRA) based on the wavelet transform (WT). Experimental results show that this method is advantageous for concealed weapon detection and has practical significance.
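    Wavelet-based MRA fusion of this kind can be sketched with a single-level Haar decomposition: average the low-frequency subbands of the two inputs, and keep the larger-magnitude high-frequency coefficients. This is a simplified illustration, not the authors' implementation:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar analysis: approximation plus (h, v, d) detail subbands."""
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def ihaar2d(a, details):
    """Exact inverse of haar2d."""
    h, v, d = details
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    """Average approximations, select details by absolute maximum."""
    a1, d1 = haar2d(img1)
    a2, d2 = haar2d(img2)
    a = (a1 + a2) / 2
    det = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2))
    return ihaar2d(a, det)
```

    In practice the PMMW image would first be co-registered and upsampled to the visible image's grid, and several decomposition levels would be used.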

  10. A method based on IHS cylindrical transform model for quality assessment of image fusion

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for assessing the quality of image fusion in remote sensing have become a research topic at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects. On one hand, most indexes lack theoretical support for comparing different fusion methods. On the other hand, there is no uniform preference among the quantitative assessment indexes when they are applied to estimate fusion effects: spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method unifying spatial and spectral feature assessment. Therefore, on the basis of an approximate general model of four traditional fusion methods, including Intensity-Hue-Saturation (IHS) triangle transform fusion, High-Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, this paper proposes a correlation coefficient assessment method based on the IHS cylindrical transform. Experiments show that this method not only evaluates spatial and spectral features under a uniform preference, but also yields comparisons between the fusion image sources and the fused images, and reveals differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and agrees better with subjective evaluation.
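    A correlation-coefficient assessment along these lines might look as follows; taking the band mean as the intensity axis is an assumption standing in for the IHS cylindrical transform, not the paper's exact formulation:

```python
import numpy as np

def corrcoef(a, b):
    """Pearson correlation between two images, computed on flattened pixels."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def assess(fused, ms, pan):
    """Hypothetical assessment: spectral quality is the per-band correlation
    of the fused image with the original multispectral bands; spatial quality
    is the correlation of the fused intensity (band mean, standing in for the
    I axis) with the high-resolution panchromatic image."""
    spectral = [corrcoef(fused[i], ms[i]) for i in range(fused.shape[0])]
    spatial = corrcoef(fused.mean(axis=0), pan)
    return spectral, spatial
```

    A fused product that preserves spectra perfectly scores 1.0 on every spectral index; injecting pan detail raises the spatial index at some spectral cost, which this scheme makes visible on one scale.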

  11. Neural-net-based image matching

    NASA Astrophysics Data System (ADS)

    Jerebko, Anna K.; Barabanov, Nikita E.; Luciv, Vadim R.; Allinson, Nigel M.

    2000-04-01

    The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, in different seasons, and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
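    Once corresponding reference points on matched curves are known, the image-to-image coordinate transformation can be estimated by least squares. A sketch for a 2D similarity transform (an assumed transform model; the paper performs its matching steps with custom neural networks):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (a, b, tx, ty) such that
    x' = a*x - b*y + tx and y' = b*x + a*y + ty, from matched points.
    src and dst are (N, 2) arrays of corresponding reference points."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1.0
    rhs = dst.reshape(-1)                    # interleaved x0, y0, x1, y1, ...
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params                            # a, b, tx, ty
```

    Here a = s*cos(theta) and b = s*sin(theta) encode scale and rotation jointly, so three or more matched points overdetermine the four parameters and average out localization noise.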

  12. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros

    Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve the visibility and targeting of liver metastases deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged sufficiently conspicuous on US was performed under CT/US fusion imaging guidance. Conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.

  13. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by different sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, this paper proposes an approach to image fusion based on a novel dictionary learning scheme. The nonlocal self-similarity property of the images is exploited not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated on different types of images and compared with a number of alternative image fusion techniques. The superior fused images obtained with the present approach demonstrate the efficacy of the NL_SK_SVD dictionary for sparse image representation.
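    The sparse-coding fusion step can be illustrated with plain orthogonal matching pursuit and a max-activity selection rule. This is a simplified stand-in for the paper's simultaneous OMP over the learned NL_SK_SVD dictionary:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: sparse-code x over dictionary D
    (unit-norm columns) using at most k atoms."""
    residual, support = x.astype(float), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

def fuse_patches(D, p1, p2, k=3):
    """Fuse two co-located patches: code each over the shared dictionary and
    keep the code with the larger activity (L1 norm). An assumed
    simplification of the paper's simultaneous-OMP fusion rule."""
    c1, c2 = omp(D, p1, k), omp(D, p2, k)
    c = c1 if np.abs(c1).sum() >= np.abs(c2).sum() else c2
    return D @ c
```

    A full pipeline would slide this over overlapping vectorized patches and average the reconstructions back into the image.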

  14. Multifocus image fusion using phase congruency

    NASA Astrophysics Data System (ADS)

    Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui

    2015-05-01

    We address the problem of fusing multifocus images based on phase congruency (PC). PC provides a sharpness feature of a natural image: the focus measure (FM) is identified as strong PC near a distinctive image feature, evaluated by the complex Gabor wavelet. PC is more robust against noise than other FMs. The fused image is obtained by a new fusion rule (FR), which selects the focused region from one of the input images. Experimental results show that the proposed fusion scheme matches the fusion performance of state-of-the-art methods in terms of visual quality and quantitative evaluation.
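    A block-wise choose-the-sharper-input fusion rule of this kind can be sketched as follows; the Laplacian-energy focus measure here is an assumed stand-in for the phase-congruency measure used in the paper:

```python
import numpy as np

def focus_measure(block):
    """Sum of squared discrete Laplacian responses, a simple sharpness proxy."""
    lap = (-4 * block[1:-1, 1:-1] + block[:-2, 1:-1] + block[2:, 1:-1]
           + block[1:-1, :-2] + block[1:-1, 2:])
    return float((lap ** 2).sum())

def fuse_multifocus(img1, img2, bs=8):
    """Copy each block from whichever input is sharper by the focus measure."""
    out = np.empty_like(img1)
    for i in range(0, img1.shape[0], bs):
        for j in range(0, img1.shape[1], bs):
            b1, b2 = img1[i:i+bs, j:j+bs], img2[i:i+bs, j:j+bs]
            out[i:i+bs, j:j+bs] = b1 if focus_measure(b1) >= focus_measure(b2) else b2
    return out
```

    Blocks with high-frequency texture (in focus) score high; defocused regions are smooth and score low, so each block is taken from the input where it is sharp.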

  15. Integration of retinal image sequences

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia

    1998-10-01

    In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network can be observed, along with the arterial and venous circulation, using a non-invasive technique. The study of the retinal vessels is very important both for the study of local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, such as hypertension, arteriosclerosis, and diabetes. The procedure can be divided into two steps: registration and fusion. First, we describe an automatic alignment algorithm for the registration of ocular fundus images. To enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performance of the alignment method has been estimated by simulating shifts between image pairs and by using a cross-validation approach. We then propose a temporal integration technique for image sequences so as to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kinds of additive Gaussian noise, as well as on real fundus images, are reported.
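    The registration-then-fusion procedure can be sketched with an exhaustive integer-shift cross-correlation search followed by temporal averaging; this is a minimal illustration, not the paper's fast cross-correlation algorithm or its oriented filter bank:

```python
import numpy as np

def estimate_shift(ref, img, max_shift=5):
    """Exhaustive integer-shift search maximizing cross-correlation."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = float((ref * shifted).sum())
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def integrate(frames):
    """Register every frame to the first one, then average to reduce noise."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        aligned.append(np.roll(np.roll(f, dy, axis=0), dx, axis=1))
    return np.mean(aligned, axis=0)
```

    Averaging N aligned frames suppresses zero-mean additive noise by roughly a factor of sqrt(N), which is what the before/after SNR estimate in the paper quantifies.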

  16. Robust perception algorithms for road and track autonomous following

    NASA Astrophysics Data System (ADS)

    Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge

    2004-09-01

    The French Military Robotic Study Program (introduced at Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes that can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based sub-system to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) improvement of image quality at the algorithm inputs, thanks to the selection of adapted video cameras and the development of a THALES-patented algorithm that removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast, and reduces reflection effects due to films of water; (2) selection and improvement of two complementary algorithms (one segment-oriented, the other region-based); (3) development of a fusion process between both algorithms, which feeds a road model in real time with the best available data. Each step has been developed so that the global perception process is reliable and safe: for example, the process continuously evaluates itself and outputs confidence criteria qualifying roadside detection. The paper presents the processes in detail, along with the results obtained from the military acceptance tests that were passed, which trigger the next step: autonomous track following (named AUT3).

  17. [Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions].

    PubMed

    Jung, E M; Clevert, D A

    2018-06-01

    Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted investigation of a modified tumor treatment. Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as CEUS. With the implementation of ultrasound contrast agents and image fusion, ultrasound has improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques. In addition, this method can also be used for interventional procedures. The success rate of fusion-guided biopsies or CEUS-guided tumor ablation lies between 80 and 100% in the literature. Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions. In addition to the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work. These include, for example, intraoperative and vascular applications as well as applications in other organ systems.

  18. Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.

    PubMed

    Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S

    2011-02-01

    This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H(1) hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.
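    The monotone-versus-random comparison at the heart of the DPMLR can be loosely illustrated with an isotonic (pool-adjacent-violators) fit: score a FIQM by how much of the variance in detection performance its best monotone predictor explains. This frequentist sketch is only a stand-in for the Bayesian DPMLR itself:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: best nondecreasing least-squares fit to y."""
    blocks = [[float(v), 1] for v in y]          # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violation: pool the blocks
            w = blocks[i][1] + blocks[i + 1][1]
            m = (blocks[i][0] * blocks[i][1]
                 + blocks[i + 1][0] * blocks[i + 1][1]) / w
            blocks[i] = [m, w]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    fit = []
    for m, w in blocks:
        fit.extend([m] * w)
    return np.array(fit)

def monotonicity_score(quality, detection):
    """Fraction of variance in detection performance explained by the best
    monotone function of the quality measure (1 = perfectly monotone,
    0 = no monotone relationship)."""
    order = np.argsort(quality)
    y = np.asarray(detection, dtype=float)[order]
    fit = pava(y)
    ss_res = ((y - fit) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0
```

    A FIQM for which human detection performance rises monotonically scores near 1; one unrelated to detection scores near 0, mirroring the H(1)-versus-null contrast the DPMLR formalizes.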

  19. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate information from different imaging modalities to produce a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. First, the GFP image is converted to the IHS model and its intensity component is obtained. Second, the CST is applied to the intensity component and the phase contrast image to obtain the low-frequency and high-frequency subbands. The high-frequency subbands are then merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and converting from IHS back to RGB. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
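    The two merging rules can be sketched directly; the windowed sum of squares below is an assumed simplification of the Haar wavelet-based energy (HWE) map:

```python
import numpy as np

def local_energy(band, win=3):
    """Windowed sum of squared coefficients, a simple local-energy map."""
    pad = win // 2
    sq = np.pad(band ** 2, pad, mode='edge')
    out = np.zeros_like(band)
    for dy in range(win):
        for dx in range(win):
            out += sq[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out

def merge_low(low1, low2):
    """Low-frequency rule: per pixel, keep the coefficient with the larger
    local energy (an assumed simplification of the paper's HWE rule)."""
    return np.where(local_energy(low1) >= local_energy(low2), low1, low2)

def merge_high(h1, h2):
    """High-frequency rule: absolute-maximum selection, as in the paper."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)
```

    The energy rule keeps the low-frequency content from whichever modality carries more local structure, instead of blurring both by averaging.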

  20. Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.

    PubMed

    Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei

    2006-02-01

    Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. It is therefore important to determine whether the HA of a new influenza virus, which could potentially cause a pandemic, is functional toward human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. The imaging is designed to detect fusion through the change in spectrum of the fluorescence-labeled virus. Using this technique, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the imaging allows highly sensitive detection of viral fusion.
