[Research Progress of Multi-Modal Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion integrates the complementary advantages of functional and anatomical images. This article reviews the research progress of multi-modal medical image fusion at the feature level. We first describe the principle of feature-level medical image fusion. We then analyze and summarize the application of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis, and other fusion methods to medical image fusion. Finally, we point out the current problems and future research directions of multi-modal medical image fusion.
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images obtained by fusing infrared and low-light-level images contain the information of both sources and help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; conversely, if target extraction is applied blindly, perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied within traditional color fusion methods, and the infrared and low-light-level color fusion images are produced based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the target detection rate but also retain rich natural information about the scene.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
In addition to traditional intensity information, polarization imaging provides multi-dimensional polarization information and thus improves the probability of target detection and recognition. Fusing polarization images of targets in turbid media helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, the corresponding linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion techniques were then applied: the acquired polarization images were processed with different polarization image fusion methods, several fusion methods with superior performance for turbid media are discussed, and the processing results and data tables are given. Pixel-level, feature-level, and decision-level fusion algorithms were used to fuse the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is obviously improved over any single image. Finally, the reasons for the increase in image contrast are analyzed.
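The DOLP images fused above are derived from linear-polarization intensity measurements taken at several polarizer angles. A minimal numpy sketch, assuming the standard four-angle Stokes formulation (which the abstract does not spell out):

```python
import numpy as np

def dolp(i0, i45, i90, i135, eps=1e-9):
    """Degree of linear polarization from intensities at four
    polarizer angles (0, 45, 90, 135 degrees).

    Stokes parameters: S0 = total intensity, S1 = I0 - I90,
    S2 = I45 - I135.  DOLP = sqrt(S1^2 + S2^2) / S0, in [0, 1].
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.clip(np.sqrt(s1**2 + s2**2) / (s0 + eps), 0.0, 1.0)

# Fully horizontally polarized light: all intensity passes at 0 degrees,
# half passes at 45 and 135 degrees, none at 90 degrees.
i0 = np.full((4, 4), 1.0)
i90 = np.zeros((4, 4))
i45 = np.full((4, 4), 0.5)
i135 = np.full((4, 4), 0.5)
```

A fully polarized scene gives DOLP near 1, an unpolarized one near 0, which is what makes DOLP maps useful inputs to the pixel-, feature-, and decision-level fusion stages described above.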
A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
Low-light-level and infrared color fusion technology has achieved great success in the field of night vision. It is designed to make hot targets pop out of the fused image in intense colors, to render background details with a color appearance close to nature, and to improve target discovery, detection, and identification. Low-light-level images are very noisy under low illumination, and existing color fusion methods are easily degraded by noise in the low-light-level channel: when that noise is large, the quality of the fused image decreases significantly, and targets in the infrared image may even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low-light-level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain good in low-light situations, which shows that this method can effectively improve the quality of low-light-level and infrared fused images under low illumination conditions.
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
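The averaging and PCA baselines used for comparison above are simple enough to sketch. This is a generic pixel-level formulation, not necessarily the authors' exact implementation:

```python
import numpy as np

def average_fusion(a, b):
    """Pixel-wise mean of two registered source images."""
    return 0.5 * (a + b)

def pca_fusion(a, b):
    """Weight each source by the leading eigenvector of the 2x2
    covariance matrix of the flattened source images -- the common
    pixel-level PCA fusion baseline."""
    x = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(x)
    w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc = np.abs(v[:, -1])        # leading principal component
    wa, wb = pc / pc.sum()
    return wa * a + wb * b
```

With these baselines, a source image with much higher variance dominates the PCA-fused result, while averaging always weights the sources equally.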
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is applied first; the weight maps at each scale are then obtained using saliency detection and filtering, with three different fusion rules at different scales: one each for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and easier to detect in the fusion result, while the detail information of the scene is fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception, thereby improving visual effects.
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or frequencies and produces a fused false-color image with higher information content than either original, in which objects are easier to recognize. It has three steps: first, obtain the fused gray-level image of the two originals; second, compute generalized high-boost filtering images between the fused gray-level image and each source image; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the originals while reducing noise. However, the fused gray-level image cannot contain all the detail information of the two sources, and details in a gray-level image are harder to discern than in a color image, so a color fused image is necessary. To create color variation and enhance details in the final fused image, we produce three generalized high-boost filtering images and display them through the red, green, and blue channels respectively, yielding the fused color image. The method is applied to two SAR images acquired over the San Francisco area (California, USA). The results show that the fused false-color image enhances the visibility of certain details, and its resolution equals that of the input images.
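The abstract does not give the exact form of its "generalized high-boost filtering images", so the sketch below is one plausible reading: a plain mean stands in for the hybrid averaging-and-selection fusion, and each high-boost image sharpens the fused gray image against a blurred source. The function names, the blur, and the parameter k are illustrative assumptions:

```python
import numpy as np

def box_blur(img, r=1):
    """Simple box low-pass filter via padded neighborhood averaging."""
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    n = (2 * r + 1) ** 2
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / n

def false_color_fusion(s1, s2, k=1.0):
    """Fused gray image (plain mean here, standing in for the paper's
    hybrid averaging-and-selection rule), then one high-boost image per
    source feeds the R and G channels; the fused gray feeds B.
    The exact high-boost form is an assumption."""
    f = 0.5 * (s1 + s2)
    r = f + k * (f - box_blur(s1))
    g = f + k * (f - box_blur(s2))
    b = f
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

Feeding the three derived images through separate color channels is what creates the color variation between regions that differ across the two sensors.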
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.
Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L
2005-05-01
This work deals with multisensor data fusion for landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of fusion of optical images with a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the proposed technique are compared with those of some recent techniques in the literature for the same image data.
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral remote sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is its most commonly employed processing methodology. In this paper three new HRS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by back propagation neural network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In this approach, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. Classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to its properties; three levels of information fusion (data-level, feature-level, and decision-level) are then applied to HRS image classification. Finally, since artificial neural networks perform well in RS image classification, the BPNN, the most commonly used neural network, is applied to HRS image classification.
Biometric image enhancement using decision rule based image fusion techniques
NASA Astrophysics Data System (ADS)
Sagayee, G. Mary Amirtha; Arumugam, S.
2010-02-01
Introducing biometrics into information systems may yield considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face and is the primary choice for most privacy-sensitive applications. For fingerprint applications, however, choosing the proper sensor is difficult. The proposed work addresses how image quality can be improved by introducing image fusion at the sensor level. The images produced by the decision-rule-based image fusion technique are evaluated and analyzed in terms of their entropy and root mean square error.
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
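The maximum selection rule used in the paper can be illustrated with a single-level Haar transform. The paper itself fuses across multiple wavelet scales; this one-level numpy sketch only shows the selection rule:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (even image dimensions assumed):
    approximation plus horizontal/vertical/diagonal detail subbands."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4
    h = (p00 + p01 - p10 - p11) / 4
    v = (p00 - p01 + p10 - p11) / 4
    d = (p00 - p01 - p10 + p11) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def wavelet_max_fusion(x, y):
    """Maximum selection rule: per subband and per coefficient, keep
    whichever source coefficient has the larger magnitude."""
    def pick(c1, c2):
        return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return ihaar2d(*(pick(c1, c2) for c1, c2 in zip(haar2d(x), haar2d(y))))
```

In the multiscale setting of the paper, the same per-coefficient selection is applied at every decomposition level before the inverse transform.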
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
A low-light-level (LLL) image contains rich environmental detail but is easily affected by the weather; in smoke, rain, cloud, or fog, much target information is lost. An infrared image, formed from the radiation emitted by objects themselves, can "actively" capture target information in the scene; however, its contrast and resolution are poor, it captures little target detail, and its imaging mode does not conform to human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor while retaining the advantages of both. We first present the hardware design of the fusion circuit. Then, by calculating the recognition probability for a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, while the person detection probability of the infrared image is obviously higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Image fusion therefore significantly increases the recognition probability and improves detection efficiency.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Many fusion methods have recently been developed based on different MSTs and have shown potential in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain. The NSCT is first employed to decompose each source image into a series of high-frequency subbands and one low-frequency subband. To improve fusion performance, we design two new activity measures for fusing the lowpass and highpass subbands, based on the fact that the human visual system (HVS) perceives image quality mainly through certain low-level features. Selection principles for the different subbands are then derived from the corresponding activity measures. Finally, the merged subbands are constructed according to these principles, and the fused image is produced by applying the inverse NSCT to them. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation.
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. An Altera Cyclone IV series FPGA is adopted as the core processor, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms, namely gray-scale weighted averaging, maximum selection, and minimum selection, are analyzed and compared. VHDL and a synchronous design method are used to produce a reliable RTL-level description, and Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
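The three tailor-made fusion rules compared on the FPGA are simple per-pixel operations. A software equivalent, for illustration only (the actual system implements these in VHDL), might look like:

```python
import numpy as np

def weighted_average_fusion(vis, ir, w=0.5):
    """Gray-scale weighted averaging of registered visible/IR frames."""
    return w * vis + (1.0 - w) * ir

def max_fusion(vis, ir):
    """Per-pixel maximum selection."""
    return np.maximum(vis, ir)

def min_fusion(vis, ir):
    """Per-pixel minimum selection."""
    return np.minimum(vis, ir)
```

Because each rule touches only one pixel pair at a time, all three map naturally onto a streaming RTL pipeline with no frame buffering beyond registration.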
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving image quality in low-light shooting is an important user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light-level images. A color-plus-mono dual camera consisting of two horizontally separated image sensors, which simultaneously captures a color and mono image pair of the same scene, could be useful for improving the quality of low-light-level images. However, incorrect image fusion between the color and mono image pair can also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. Using an experimental color-plus-mono camera system, we demonstrate that BJND-aware denoising and selective detail transfer help improve image quality in low-light shooting.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on a fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrate that texture feature extraction based on the fusion algorithm achieves better image recognition, and the classification accuracy based on this method is significantly improved. PMID:28640181
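A gray level co-occurrence matrix and a few Haralick-style statistics can be sketched directly in numpy. The displacement, quantization level, and feature set here are generic choices, not necessarily those of the paper:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8, symmetric=True, normed=True):
    """Gray level co-occurrence matrix for one displacement (dx, dy).
    img must already be quantized to integers in [0, levels)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    if symmetric:
        m += m.T
    if normed and m.sum() > 0:
        m /= m.sum()
    return m

def glcm_features(p):
    """Common Haralick-style statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, energy, homogeneity
```

In practice several displacements (e.g. four directions) are computed and their statistics combined, which is where a direction-dependent weight factor like the paper's direction measure can enter.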
BP fusion model for the detection of oil spills on the sea by remote sensing
NASA Astrophysics Data System (ADS)
Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin
2003-06-01
Oil spills are a very serious form of marine pollution in many countries. To detect and identify oil spills on the sea with remote sensors, researchers must analyze remote sensing images. Edge detection is an important image processing technology for oil spill detection, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements for edge detection of oil spill images on the sea, namely computation time and detection accuracy, we developed a fusion model that employs a BP neural network to fuse the detection results of simple operators. We selected a BP neural network as the fusion technique because the relation between the edge gray levels produced by simple operators and the image's true edge gray levels is nonlinear, and BP networks are well suited to nonlinear identification problems. We therefore trained a BP neural network on some oil spill images, applied the BP fusion model to edge detection in other oil spill images, and obtained good results. The detection results of several gradient operators and the Laplacian operator are also compared with those of the BP fusion model to analyze the fusion effect. Finally, the paper shows that the fusion model achieves higher accuracy and higher speed in edge detection of oil spill images.
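The "simple operators" whose outputs feed such a BP fusion net typically include gradient operators and the Laplacian. A hedged sketch of the per-pixel edge features they produce (the operator set is an assumption, and the BP network itself is omitted):

```python
import numpy as np

def xcorr2_same(img, k):
    """Naive 'same' 2-D cross-correlation with a 3x3 kernel,
    using edge padding."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def edge_feature_stack(img):
    """Per-pixel feature vector (Sobel gradient magnitude, |Laplacian|)
    that a small BP network could fuse into a single edge map."""
    gx = xcorr2_same(img, SOBEL_X)
    gy = xcorr2_same(img, SOBEL_Y)
    return np.stack([np.hypot(gx, gy),
                     np.abs(xcorr2_same(img, LAPLACIAN))], axis=-1)
```

The BP net then learns a nonlinear mapping from this feature vector to the true edge gray level, which is exactly the nonlinear identification problem the abstract describes.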
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair, whose significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may introduce reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, further improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet real-time image processing requirements. We developed a dual-channel infrared and visible image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an output unit. Image registration of the dual-channel images is realized by a combination of hardware and software. A false-color fusion algorithm in RGB color space produces an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to overcome the computational complexity of color transfer: the mapping between the standard lookup table and the improved color lookup table is simple and is computed only once for a fixed scene. The system thus achieves real-time fusion and natural colorization of infrared and visible images. Experimental results show that the color-transferred images appear natural to human eyes and highlight targets effectively with clear background details. Observers using this system can interpret the images better and faster, improving situational awareness and reducing target detection time.
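The statistics-based color transfer that such a lookup table accelerates is classically a per-channel match of the fused image's mean and standard deviation to those of the natural reference. A minimal sketch, assuming that classic formulation (the paper's lookup table precomputes an equivalent mapping):

```python
import numpy as np

def color_transfer(src_rgb, ref_rgb, eps=1e-9):
    """Match each channel's mean and standard deviation of the
    false-color fused image (src) to those of a natural daytime
    reference (ref).  Values are assumed to lie in [0, 1]."""
    out = np.empty_like(src_rgb, dtype=float)
    for c in range(3):
        s, r = src_rgb[..., c], ref_rgb[..., c]
        out[..., c] = (s - s.mean()) * (r.std() / (s.std() + eps)) + r.mean()
    return np.clip(out, 0.0, 1.0)
```

Because the transfer depends only on six scalars per image pair (a mean and a standard deviation per channel), it can indeed be folded into a fixed lookup table and recomputed only when the scene changes, which is what makes the DSP implementation real-time.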
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
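The self-adaptive regional variance rule applied to high-frequency components can be illustrated with a minimal numpy sketch: per pixel, keep the coefficient whose local neighborhood has the larger variance. The window size is an assumption, and this omits the paper's lifting-wavelet and matrix-completion machinery.

```python
import numpy as np

def local_variance(x, r=1):
    """Variance over a (2r+1) x (2r+1) window, edge-padded."""
    p = np.pad(x.astype(np.float64), r, mode="edge")
    H, W = x.shape
    stack = np.stack([p[i:i + H, j:j + W]
                      for i in range(2 * r + 1) for j in range(2 * r + 1)])
    return stack.var(axis=0)

def fuse_by_region_variance(a, b, r=1):
    """Per pixel, pick the coefficient with larger neighborhood variance."""
    mask = local_variance(a, r) >= local_variance(b, r)
    return np.where(mask, a, b)
```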
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires evaluating the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based measures, have been developed to accomplish these comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validating these metrics for different types of applications. To this end, human-perception-based validation methods have been developed, particularly dealing with the use of receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
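One of the no-reference measures mentioned, spatial frequency (SF), has a compact standard form: the root-mean-square of horizontal and vertical first differences.

```python
import numpy as np

def spatial_frequency(img):
    """No-reference sharpness metric: SF = sqrt(RF^2 + CF^2),
    the RMS of row and column pixel differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

A well-focused (high-detail) region scores higher than a defocused one, which is why SF is popular for multi-focus fusion assessment.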
Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-02-01
To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using 3D US devices has an accuracy level similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
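The projection-and-weighting step of such a projective particle filter might be sketched as follows: 3D particles are projected through each sensor's camera model, and the per-sensor Gaussian likelihoods are multiplied into a joint weight. The pinhole model, the noise model, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project(points, cam):
    """Pinhole projection: cam is a 3x4 matrix, points an (N, 3) array."""
    ph = np.hstack([points, np.ones((len(points), 1))])
    uvw = ph @ cam.T
    return uvw[:, :2] / uvw[:, 2:3]

def update_weights(particles, weights, cams, detections, sigma=2.0):
    """Multiply per-sensor Gaussian likelihoods of the projected particles
    (the joint likelihood across sensors), then renormalize."""
    w = weights.copy()
    for cam, z in zip(cams, detections):
        uv = project(particles, cam)
        d2 = np.sum((uv - z) ** 2, axis=1)
        w *= np.exp(-d2 / (2.0 * sigma ** 2))
    w_sum = w.sum()
    return w / w_sum if w_sum > 0 else np.full_like(w, 1.0 / len(w))
```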
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images through the combination of the discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method can achieve a good fusion effect, and it is more efficient than other conventional image fusion methods.
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with four existing measures, has been presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative qualities and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
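The translation-invariant à-trous decomposition used here has a compact classical form: repeated B3-spline smoothing with growing "holes", where each detail plane is the difference of successive smoothings. A sketch, without the random-forest fusion rule:

```python
import numpy as np

# B3-spline kernel used in the classical a-trous scheme
_B3 = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def _smooth(img, step):
    """Separable convolution with the B3 kernel, holes of size `step`
    (circular boundary handling via np.roll, for brevity)."""
    out = img.astype(np.float64)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, c in zip((-2, -1, 0, 1, 2), _B3):
            acc += c * np.roll(out, k * step, axis=axis)
        out = acc
    return out

def atrous_decompose(img, levels=2):
    """Translation-invariant a-trous decomposition: returns detail planes
    plus the final approximation; summing all planes reconstructs the image."""
    planes, current = [], img.astype(np.float64)
    for j in range(levels):
        smoothed = _smooth(current, 2 ** j)
        planes.append(current - smoothed)  # detail (wavelet) plane
        current = smoothed
    planes.append(current)                 # approximation plane
    return planes
```

Exact additive reconstruction is what makes coefficient-wise fusion rules (such as the paper's random-forest pixel selection) straightforward to invert.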
Multispectral image fusion for illumination-invariant palmprint recognition
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, so that the images are separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting conditions are unsatisfactory. PMID:28558064
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for non-linear signal decomposition and fusion. NSDFB provides direction filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image whose entropy is larger is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm can obtain state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information, and rich detail information, making them more suitable for human visual characteristics or machine perception.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion technology makes the video streams obtained by different image sensors complementary to each other, so as to obtain video that is rich in information and suitable for the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and their output does not suit the human visual system. Visible-light imaging alone can produce detailed, high-resolution images well suited to the visual system, but the visible image is easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational cost that occupy considerable memory and demand high clock rates; such algorithms are mostly implemented in software (e.g., C++ or C) and rarely on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform information fusion. The resulting fused image effectively improves information acquisition and increases the amount of information in the image.
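The gray-level weighted average rule implemented on the hardware platform is straightforward; a software sketch (the weight value is an assumption):

```python
import numpy as np

def weighted_average_fusion(vis, ir, w_vis=0.5):
    """Pixel-wise weighted average of visible and infrared frames,
    the kind of rule that maps cheaply onto FPGA arithmetic.
    w_vis is the weight given to the visible-light frame."""
    vis = vis.astype(np.float64)
    ir = ir.astype(np.float64)
    fused = w_vis * vis + (1.0 - w_vis) * ir
    return np.clip(fused, 0, 255).astype(np.uint8)
```

With a fixed-point weight, this reduces to a multiply-accumulate per pixel, which is why it suits real-time hardware implementation.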
Feature level fusion of hand and face biometrics
NASA Astrophysics Data System (ADS)
Ross, Arun A.; Govindarajan, Rohin
2005-03-01
Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in three different scenarios: (i) fusion of PCA and LDA coefficients of the face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
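Feature-level fusion by normalization and concatenation, as in the scenarios above, can be sketched generically (a z-score-and-concatenate scheme; the paper's exact normalization may differ):

```python
import numpy as np

def zscore(v, eps=1e-12):
    """Normalize a feature vector to zero mean, unit variance."""
    v = np.asarray(v, dtype=np.float64)
    return (v - v.mean()) / (v.std() + eps)

def fuse_features(feat_a, feat_b):
    """Feature-level fusion: normalize each modality's feature vector
    (e.g. PCA coefficients of face, hand-geometry features) and
    concatenate them into a single joint feature vector."""
    return np.concatenate([zscore(feat_a), zscore(feat_b)])
```

Normalizing before concatenation keeps one modality's larger numeric range from dominating the fused representation, which is the main practical pitfall of naive feature-level fusion.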
Multisensor data fusion for IED threat detection
NASA Astrophysics Data System (ADS)
Mees, Wim; Heremans, Roel
2012-10-01
In this paper we present the multi-sensor registration and fusion algorithms that were developed for a force protection research project in order to detect threats against military patrol vehicles. The fusion is performed at the object level, using a hierarchical evidence aggregation approach. The first level uses expert domain knowledge about the features that characterize the detected threats, implemented in the form of a fuzzy expert system. The next level consists of fusing intra-sensor and inter-sensor information, for which an ordered weighted averaging operator is used. Object-level fusion between candidate threats that are detected asynchronously on a moving vehicle, by sensors with different imaging geometries, requires an accurate sensor-to-world coordinate transformation; this image registration is also discussed in this paper.
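An ordered weighted averaging (OWA) operator, as used here for intra- and inter-sensor fusion, applies its weights to the *sorted* arguments rather than to fixed sources, so one weight vector can realize anything from max through mean to min. A minimal sketch:

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted averaging: weights apply to the scores sorted in
    descending order, not to particular sources. w=[1,0,...] gives max,
    uniform weights give the mean, w=[...,0,1] gives min."""
    scores = np.sort(np.asarray(scores, dtype=np.float64))[::-1]
    weights = np.asarray(weights, dtype=np.float64)
    assert np.isclose(weights.sum(), 1.0), "OWA weights must sum to 1"
    return float(scores @ weights)
```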
Guggenberger, R; Winklhofer, S; Osterhoff, G; Wanner, G A; Fortunati, M; Andreisek, G; Alkadhi, H; Stolzmann, P
2012-11-01
To evaluate optimal monoenergetic dual-energy computed tomography (DECT) settings for artefact reduction of posterior spinal fusion implants of various vendors and spine levels. Posterior spinal fusion implants of five vendors for the cervical, thoracic and lumbar spine were examined ex vivo with single-energy (SE) CT (120 kVp) and DECT (140/100 kVp). Extrapolated monoenergetic DECT images at 64, 69, 88, 105 keV and an individually adjusted monoenergy for optimised image quality (OPTkeV) were generated. Two independent radiologists assessed quantitative and qualitative image parameters for each device and spine level. Inter-reader agreements of quantitative and qualitative parameters were high (ICC = 0.81-1.00, κ = 0.54-0.77). HU values of spinal fusion implants were significantly different among vendors (P < 0.001), spine levels (P < 0.01) and among SECT, monoenergetic DECT of 64, 69, 88, 105 keV and OPTkeV (P < 0.01). Image quality was significantly (P < 0.001) different between datasets and improved with higher monoenergies of DECT compared with SECT (V = 0.58, P < 0.001). Artefacts decreased significantly (V = 0.51, P < 0.001) at higher monoenergies. OPTkeV values ranged from 123-141 keV. OPTkeV values according to vendor and spine level are presented herein. Monoenergetic DECT provides significantly better image quality and fewer metallic artefacts from implants than SECT. Use of individual keV values for vendor and spine level is recommended. • Artefacts pose problems for CT following posterior spinal fusion implants. • CT images are interpreted better with monoenergetic extrapolation using dual-energy (DE) CT. • DECT extrapolation improves image quality and reduces metallic artefacts over SECT. • There were considerable differences in monoenergy values among vendors and spine levels. • Use of individualised monoenergy values is indicated for different metallic hardware devices.
Face-iris multimodal biometric scheme based on feature level fusion
NASA Astrophysics Data System (ADS)
Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei
2015-11-01
Unlike score level fusion, feature level fusion demands that all the features extracted from unimodal traits have high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few studies investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensionality and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machines (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.
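A single real-valued 2D Gabor kernel of the kind such a filter bank is built from can be written directly (the parameter values are illustrative, not the paper's bank design):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope multiplied by
    a cosine carrier at orientation `theta` and wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotate coordinates into the filter's orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * xr / lam + psi)
```

A bank is then just this kernel instantiated over a grid of orientations and wavelengths, each convolved with the face or iris image to produce local texture responses.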
Desai, Atman; Pendharkar, Arjun V; Swienckowski, Jessica G; Ball, Perry A; Lollis, Scott; Simmons, Nathan E
2015-11-23
Construct failure is an uncommon but well-recognized complication following anterior cervical corpectomy and fusion (ACCF). In order to screen for these complications, many centers routinely image patients at outpatient visits following surgery. There remains, however, little data on the utility of such imaging. The electronic medical records of all patients undergoing anterior cervical corpectomy and fusion at Dartmouth-Hitchcock Medical Center between 2004 and 2009 were reviewed. All patients had routine cervical spine radiographs performed perioperatively. Follow-up visits up to two years postoperatively were analyzed. Sixty-five patients (mean age 52.2) underwent surgery during the time period. Eighteen patients were female. Forty patients had surgery performed for spondylosis, 20 for trauma, three for tumor, and two for infection. Forty-three patients underwent one-level corpectomy, 20 underwent two-level corpectomy, and two underwent three-level corpectomy, using an allograft, autograft, or both. Sixty-two of the fusions were instrumented using a plate and 13 had posterior augmentation. Fifty-seven patients had follow-up with imaging at four to 12 weeks following surgery, 54 with plain radiographs, two with CT scans, and one with an MRI scan. Unexpected findings were noted in six cases. One of those patients, found to have asymptomatic recurrent kyphosis following a two-level corpectomy, had repeat surgery because of those findings. Only one further patient was found to have abnormal imaging up to two years, and this patient required no further intervention. Routine imaging after ACCF can demonstrate asymptomatic occurrences of clinically significant instrument failure. In the 43 consecutive single-level ACCFs, however, routine imaging did not change management, even when an abnormality was discovered. This may suggest a limited role for routine imaging after ACCF, except in longer constructs involving multiple levels.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
A research on radiation calibration of high dynamic range based on the dual channel CMOS
NASA Astrophysics Data System (ADS)
Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua
2017-10-01
A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray level of the image through fusion of the high-gain and low-gain channel images of the same frame. In dual-channel image fusion, the radiation response coefficients of each pixel from the two channels in the same frame are used to calculate that pixel's gray level in the HDR image. Because these response coefficients play a crucial role in image fusion, an effective method to acquire them is needed. This article investigates radiation calibration for high dynamic range imaging based on a dual-channel CMOS sensor and designs an experiment to calibrate the radiation response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS sensor, verifying the correctness and feasibility of the proposed method.
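The fusion step described, combining high-gain and low-gain readouts through calibrated response coefficients, might look like the following simplified per-pixel rule. The saturation threshold and the reduction of the response coefficients to a single scalar gain ratio are assumptions for illustration.

```python
import numpy as np

def fuse_dual_gain(high_gain, low_gain, gain_ratio, sat_level=250):
    """Per-pixel HDR fusion of high-gain and low-gain readouts of the
    same frame: unsaturated high-gain pixels are kept (better SNR);
    saturated ones are replaced by the low-gain value scaled by the
    calibrated radiometric gain ratio, extending the gray-level range."""
    hg = high_gain.astype(np.float64)
    lg = low_gain.astype(np.float64)
    return np.where(hg < sat_level, hg, lg * gain_ratio)
```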
Research on Remote Sensing Image Classification Based on Feature Level Fusion
NASA Astrophysics Data System (ADS)
Yuan, L.; Zhu, G.
2018-04-01
Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassified and missed pixels, which keeps the final classification accuracy low. In this paper, we select Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature level fusion. We compare three feature level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and then select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (i.e. Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a comparison experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The fused image from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
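Both evaluation criteria used here, overall precision and the Kappa coefficient, are computed from the confusion matrix; a minimal sketch:

```python
import numpy as np

def kappa_coefficient(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(confusion, dtype=np.float64)
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement (overall accuracy)
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2  # agreement expected by chance
    return po, (po - pe) / (1.0 - pe)
```

Kappa discounts the agreement a random classifier would achieve, which is why it is preferred over raw accuracy when class proportions are unbalanced.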
Fast single image dehazing based on image fusion
NASA Astrophysics Data System (ADS)
Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian
2015-01-01
Images captured in foggy weather conditions often exhibit faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
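The dark-channel-prior transmission estimate in the first step has a standard closed form, t = 1 − ω · dark(I / A); a sketch with a small square window (the window size and ω are conventional choices, not necessarily the paper's):

```python
import numpy as np

def dark_channel(img, r=1):
    """Per-pixel minimum over the RGB channels and a (2r+1)^2 window."""
    mins = img.min(axis=2).astype(np.float64)
    p = np.pad(mins, r, mode="edge")
    H, W = mins.shape
    patches = np.stack([p[i:i + H, j:j + W]
                        for i in range(2 * r + 1) for j in range(2 * r + 1)])
    return patches.min(axis=0)

def estimate_transmission(img, atmosphere, omega=0.95, r=1):
    """Dark-channel-prior transmission: t = 1 - omega * dark(I / A),
    where A is the per-channel atmospheric light."""
    norm = img.astype(np.float64) / np.asarray(atmosphere, dtype=np.float64)
    return 1.0 - omega * dark_channel(norm, r)
```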
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
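A one-level weighted blend with smoothed, normalized weight maps captures the flavor of reducing MSF to a single scale (an illustrative simplification with a box filter, not the authors' SSF derivation):

```python
import numpy as np

def box_blur(x, r=2):
    """Separable box blur (circular boundaries via np.roll, for brevity)."""
    out = x.astype(np.float64)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k in range(-r, r + 1):
            acc += np.roll(out, k, axis=axis)
        out = acc / (2 * r + 1)
    return out

def single_scale_fusion(images, weights, r=2):
    """One-level approximation of weight-map fusion: smooth the weight
    maps to suppress seams, normalize them per pixel, and take the
    weighted sum of the inputs."""
    w = np.stack([box_blur(wi, r) for wi in weights])
    w /= w.sum(axis=0, keepdims=True) + 1e-12
    return sum(wi * img.astype(np.float64) for wi, img in zip(w, images))
```

Smoothing the weights before blending plays the role that coarse pyramid levels play in full MSF: it prevents abrupt weight transitions from appearing as visible seams.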
Combined use of iterative reconstruction and monochromatic imaging in spinal fusion CT images.
Wang, Fengdan; Zhang, Yan; Xue, Huadan; Han, Wei; Yang, Xianda; Jin, Zhengyu; Zwar, Richard
2017-01-01
Spinal fusion surgery is an important procedure for treating spinal diseases and computed tomography (CT) is a critical tool for postoperative evaluation. However, CT image quality is considerably impaired by metal artifacts and image noise. To explore whether metal artifacts and image noise can be reduced by combining two technologies, adaptive statistical iterative reconstruction (ASIR) and monochromatic imaging generated by gemstone spectral imaging (GSI) dual-energy CT. A total of 51 patients with 318 spinal pedicle screws were prospectively scanned by dual-energy CT using fast kV-switching GSI between 80 and 140 kVp. Monochromatic GSI images at 110 keV were reconstructed either without or with various levels of ASIR (30%, 50%, 70%, and 100%). The quality of five sets of images was objectively and subjectively assessed. With objective image quality assessment, metal artifacts decreased when increasing levels of ASIR were applied (P < 0.001). Moreover, adding ASIR to GSI also decreased image noise (P < 0.001) and improved the signal-to-noise ratio (P < 0.001). The subjective image quality analysis showed good inter-reader concordance, with intra-class correlation coefficients between 0.89 and 0.99. The visualization of peri-implant soft tissue was improved at higher ASIR levels (P < 0.001). Combined use of ASIR and GSI decreased image noise and improved image quality in post-spinal fusion CT scans. Optimal results were achieved with ASIR levels ≥70%. © The Foundation Acta Radiologica 2016.
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-05
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
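A hedged sketch of the kind of regional thermal-energy matching described above (our own block-based formulation, not the authors' exact method): each block of the VIS image is rescaled so that its mean fourth-power intensity, a stand-in for Stefan-Boltzmann radiant energy E ~ T^4, matches that of the corresponding IR block.

```python
import numpy as np

def stefan_boltzmann_modulate(vis, ir, block=8, eps=1e-6):
    """Sketch: scale each block of the VIS image so its mean fourth-power
    intensity matches the IR block's (E ~ T^4, Stefan-Boltzmann).
    Block size, eps, and the blockwise scheme are illustrative choices."""
    out = vis.astype(float)
    h, w = out.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            v = out[i:i + block, j:j + block]          # view into `out`
            t = ir[i:i + block, j:j + block].astype(float)
            energy = np.mean(t ** 4)                   # regional thermal energy
            gain = energy / (np.mean(v ** 4) + eps)
            v *= gain ** 0.25                          # match fourth-power means
    return out
```

The fused block then carries the IR region's thermal energy level while keeping the VIS block's internal structure (all pixels in a block are scaled by the same gain).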
Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P
2004-11-01
Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. 
High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. To highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and a detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.
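The decomposition idea can be illustrated with a minimal two-layer sketch (a box blur stands in for the paper's Gaussian low-pass filter; the gradient-minimization edge layer and the multi-scale recursion are omitted):

```python
import numpy as np

def box_blur(img, r=1):
    # separable-free box blur with edge padding; stand-in for the GLF
    p = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(k) for j in range(k)) / float(k * k)

def two_scale_decompose(img, r=1):
    """Sketch of one joint-decomposition step: a low-pass layer from a
    box blur and a detail layer as the residual, so that
    low + detail reconstructs the input exactly."""
    low = box_blur(img.astype(float), r)
    detail = img - low
    return low, detail
```

In the full MJDF this step is applied at several scales and the detail layers are recombined with edge layers before the shearing filter extracts directional coefficients.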
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information and the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes an activity-level judgement on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.
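A minimal sketch of an l1-norm maximum fusion rule over grouped sparse coefficients (the grouping here is a fixed illustrative map; in the paper it is discovered adaptively from the learned dictionary):

```python
import numpy as np

def l1_max_fuse(coef_a, coef_b, groups):
    """For each group of dictionary atoms, keep the sparse coefficients
    whose group l1-norm (activity level) is larger. `groups` maps each
    atom index to a group id; this fixed grouping is illustrative only."""
    fused = np.zeros_like(coef_a)
    for g in np.unique(groups):
        idx = groups == g
        if np.abs(coef_a[idx]).sum() >= np.abs(coef_b[idx]).sum():
            fused[idx] = coef_a[idx]
        else:
            fused[idx] = coef_b[idx]
    return fused
```

Selecting whole groups rather than individual coefficients is what lets the rule keep coherent structures (e.g. an edge represented by several correlated atoms) intact in the fused image.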
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be captured simultaneously by various sensors. However, in many scenarios no single sensor can give the complete picture. Image fusion is an important approach to this problem, producing a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)), mutual information, etc.
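The "filling zeros" construction of the multiscale support value filters follows the undecimated (a-trous) scheme; a sketch for a 1D filter (function name our own):

```python
import numpy as np

def atrous_dilate(kernel, level):
    """Insert 2**level - 1 zeros between kernel taps: the 'filling
    zeros' step that builds a coarser-scale filter from the basic
    support value filter without decimating the signal."""
    step = 2 ** level
    size = (len(kernel) - 1) * step + 1
    out = np.zeros(size)
    out[::step] = kernel
    return out
```

Because the signal is never downsampled, filtering with these dilated kernels yields an undecimated (shift-invariant) multiscale analysis, unlike the decimated Laplacian pyramid and DWT the paper compares against.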
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
Apostolou, N; Papazoglou, Th; Koutsouris, D
2006-01-01
Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms, implemented on the Matlab platform, for head images. We develop nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian, filter-subtract-decimate (FSD), contrast, gradient, and morphological pyramid methods, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the root mean square error (RMSE), mutual information (MI), standard deviation (STD), entropy (H), difference entropy (DH) and cross entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features and enhancement of common features. Finally, we make clinically useful suggestions.
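Three of the building blocks above (the average fusion method and the RMSE and entropy criteria) are straightforward; a Python/NumPy sketch, assuming 8-bit-range grayscale inputs:

```python
import numpy as np

def average_fuse(a, b):
    # simplest pixel-level fusion rule: per-pixel mean of the two inputs
    return (a.astype(float) + b.astype(float)) / 2.0

def rmse(ref, img):
    # root mean square error between a reference and a fused image
    diff = ref.astype(float) - img.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def entropy(img, bins=256):
    # Shannon entropy (bits) of the intensity histogram, range [0, 256)
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

The pyramid- and wavelet-based methods differ only in where the combination happens (transform coefficients instead of pixels); the evaluation criteria apply unchanged to their outputs.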
NASA Astrophysics Data System (ADS)
Câmara, F.; Oliveira, J.; Hormigo, T.; Araújo, J.; Ribeiro, R.; Falcão, A.; Gomes, M.; Dubois-Matra, O.; Vijendran, S.
2015-06-01
This paper discusses the design and evaluation of data fusion strategies to perform tiered fusion of several heterogeneous sensors and a priori data. The aim is to increase the robustness and performance of hazard detection and avoidance systems, while enabling safe planetary and small body landings anytime, anywhere. The focus is on Mars and asteroid landing mission scenarios, and three distinct data fusion algorithms are introduced and compared. The first algorithm is a hybrid camera-LIDAR hazard detection and avoidance system, the H2DAS, in which data fusion is performed at the sensor level (reconstruction of the point cloud obtained with a scanning LIDAR using the navigation motion states, and correction of the image for motion compensation using IMU data), at the feature level (concatenation of multiple digital elevation maps, obtained from consecutive LIDAR images, to achieve higher accuracy and resolution maps while enabling relative positioning), and at the decision level (fusing hazard maps from multiple sensors onto a single image space, with a single grid orientation and spacing). The second method presented is a hybrid reasoning fusion, the HRF, in which innovative algorithms replace the decision-level functions of the previous method by combining three different reasoning engines (a fuzzy reasoning engine, a probabilistic reasoning engine and an evidential reasoning engine) to produce safety maps. Finally, the third method presented is called Intelligent Planetary Site Selection, the IPSIS, an innovative multi-criteria, dynamic decision-level data fusion algorithm that takes into account historical information for the selection of landing sites, together with a piloting function with a non-exhaustive landing site search capability, i.e., capable of finding local optima by searching a reduced set of global maps. All the discussed data fusion strategies and algorithms have been integrated, verified and validated in a closed-loop simulation environment.
Monte Carlo simulation campaigns were performed for the algorithms performance assessment and benchmarking. The simulations results comprise the landing phases of Mars and Phobos landing mission scenarios.
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators in supporting more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued by increased levels of statistical error. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with the lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate the theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and lower error bounds for quantitative image analysis and change detection.
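The optimal linear fusion argument can be illustrated with standard inverse-variance weighting of independent estimates (a generic textbook sketch, not the paper's full simulation):

```python
import numpy as np

def optimal_linear_fusion(estimates, variances):
    """Minimum-variance linear fusion of independent estimates.
    Weights are inversely proportional to each scan's error variance,
    so a scan whose voxel anisotropy aligns with the lesion elongation
    (and thus has lower measurement variance) receives a higher weight."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)          # inverse-variance weights
    fused = float(np.dot(w, estimates))      # fused estimate
    fused_var = float(1.0 / np.sum(1.0 / v)) # variance of the fused estimate
    return fused, fused_var
```

For variances 1 and 3 the fused variance is 0.75, below both the better single scan (1) and the naive average ((1 + 3)/4 = 1), which is the sense in which optimal linear fusion beats naive averaging.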
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture the complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. 
Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results. In particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 Tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than those of several well-known state-of-the-art label fusion methods. PMID:25463474
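The core patch-based label fusion step (before the multi-scale and label-specific extensions above) can be sketched as a similarity-weighted vote, in the style of non-local-means label fusion; the Gaussian similarity kernel and its bandwidth are illustrative choices:

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Each aligned atlas patch votes for its center label with a weight
    derived from its similarity to the target patch (Gaussian of the
    sum of squared intensity differences). Returns the winning label."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        ssd = float(np.sum((target_patch - patch) ** 2))
        w = np.exp(-ssd / (2.0 * sigma ** 2))   # similarity weight
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

A fixed patch size here is exactly the limitation the paper targets: the multi-scale features, label-specific partial patches, and coarse-to-fine patch shrinking all refine how `target_patch` and `atlas_patches` are formed and compared.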
A Hierarchical Convolutional Neural Network for vesicle fusion event classification.
Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke
2017-09-01
Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by the GMM and the visual appearance features embedded in key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectral knowledge allows all available information in the data to be mined. These superior qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. 
Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. In this thesis, graph-based approaches applied to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
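The graph construction step common to these approaches can be sketched as a symmetric k-nearest-neighbour adjacency over pixel spectra (a generic sketch with a dense distance matrix, suitable only for small examples):

```python
import numpy as np

def knn_graph(X, k=2):
    """Build a symmetric k-nearest-neighbour adjacency matrix from
    row-vector data points (e.g. pixel spectra), a common first step
    in graph-based fusion, clustering, and feature extraction."""
    # pairwise Euclidean distances between all rows of X
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)           # exclude self-neighbours
    A = np.zeros_like(d)
    for i in range(len(X)):
        nn = np.argsort(d[i])[:k]         # indices of the k closest points
        A[i, nn] = 1.0
    return np.maximum(A, A.T)             # symmetrize the graph
```

Spatial, spectral, and LiDAR features can then be fused either before graph construction (concatenated feature vectors) or after (combining per-modality adjacency matrices), which is the design space the thesis explores.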
Bo, Xiao-Wan; Xu, Hui-Xiong; Wang, Dan; Guo, Le-Hang; Sun, Li-Ping; Li, Xiao-Long; Zhao, Chong-Ke; He, Ya-Ping; Liu, Bo-Ji; Li, Dan-Dan; Zhang, Kun
2016-11-01
To investigate the usefulness of fusion imaging of contrast-enhanced ultrasound (CEUS) and CECT/CEMRI before percutaneous ultrasound-guided radiofrequency ablation (RFA) for liver cancers. 45 consecutive patients with 70 liver lesions were included between March 2013 and October 2015, and all the lesions were identified on CEMRI/CECT prior to inclusion in the study. Planning ultrasound for percutaneous RFA was performed using conventional ultrasound, ultrasound-CECT/CEMRI fusion imaging and CEUS-CECT/CEMRI fusion imaging during the same session. The numbers of conspicuous lesions on ultrasound and fusion imaging were recorded. RFA was performed according to the results of fusion imaging. The complete response (CR) rate was calculated and the complications were recorded. On conventional ultrasound, 25 (35.7%) of the 70 lesions were conspicuous, whereas 45 (64.3%) were inconspicuous. Ultrasound-CECT/CEMRI fusion imaging detected an additional 24 lesions, increasing the number of conspicuous lesions to 49 (70.0%) (70.0% vs 35.7%; p < 0.001 in comparison with conventional ultrasound). With the use of CEUS-CECT/CEMRI fusion imaging, the number of conspicuous lesions further increased to 67 (95.7%, 67/70) (95.7% vs 70.0%, 95.7% vs 35.7%; both p < 0.001 in comparison with ultrasound and ultrasound-CECT/CEMRI fusion imaging, respectively). With the assistance of CEUS-CECT/CEMRI fusion imaging, the confidence level of the operator for performing RFA improved significantly with regard to visualization of the target lesions (p = 0.001). The CR rate for RFA was 97.0% (64/66) according to the CECT/CEMRI results 1 month later. No procedure-related deaths or major complications occurred during or after RFA. Fusion of CEUS and CECT/CEMRI improves the visualization of lesions that are inconspicuous on conventional ultrasound. It also improves the RFA operator's confidence and the CR rate of RFA. 
Advances in knowledge: CEUS-CECT/CEMRI fusion imaging is better than both conventional ultrasound and ultrasound-CECT/CEMRI fusion imaging for lesion visualization and improves operator confidence; it should therefore be recommended for routine use in ultrasound-guided percutaneous RFA procedures for liver cancer.
Detection of buried objects by fusing dual-band infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.
1993-11-01
We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.
Wobser, Hella; Wiest, Reiner; Salzberger, Bernd; Wohlgemuth, Walter Alexander; Stroszczynski, Christian; Jung, Ernst-Michael
2014-01-01
To evaluate treatment response of hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with a new real-time image fusion technique combining contrast-enhanced ultrasound (CEUS) with multi-slice computed tomography (CT), in comparison to conventional post-interventional follow-up. 40 patients with HCC (26 male; ages 46-81 years) were evaluated 24 hours after TACE using CEUS with ultrasound volume navigation and image fusion with CT, compared to non-enhanced CT and follow-up contrast-enhanced CT after 6-8 weeks. Reduction of tumor vascularization to less than 25% was regarded as "successful" treatment, whereas reduction to levels >25% was considered a "partial" treatment response. Homogeneous lipiodol retention was regarded as successful treatment on non-enhanced CT. Post-interventional image fusion of CEUS with CT was feasible in all 40 patients. In 24 patients (24/40), post-interventional image fusion with CEUS revealed residual tumor vascularity, which was confirmed by contrast-enhanced CT 6-8 weeks later in 24/24 patients. In 16 patients (16/40), post-interventional image fusion with CEUS demonstrated successful treatment, but follow-up CT detected residual viable tumor in 6 of these 16 patients (6/16). Non-enhanced CT did not identify any case of treatment failure. Image fusion with CEUS assessed treatment efficacy with a specificity of 100%, a sensitivity of 80% and a positive predictive value of 1 (negative predictive value 0.63). Image fusion of CEUS with CT allows a reliable, highly specific post-interventional evaluation of embolization response with good sensitivity and without any further radiation exposure. It can detect residual viable tumor at an early stage, enabling close patient monitoring or re-therapy.
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively, but it also blurs the object of interest. The second fold therefore restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with the normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a notable visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
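The block-thresholding step described in this abstract can be illustrated with a minimal sketch (a single-level Haar transform and a per-block median-based threshold are simplifying assumptions; the authors' ANBF pipeline, adaptive fusion rules and NDF measure are not reproduced here):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (image dimensions must be even)."""
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def block_soft_threshold(band, block=8, k=1.0):
    """Soft-threshold a detail band in non-overlapping blocks; the threshold
    per block is a robust (median-based) noise estimate scaled by k."""
    out = band.copy()
    for i in range(0, band.shape[0], block):
        for j in range(0, band.shape[1], block):
            blk = band[i:i + block, j:j + block]
            t = k * np.median(np.abs(blk)) / 0.6745
            out[i:i + block, j:j + block] = np.sign(blk) * np.maximum(np.abs(blk) - t, 0)
    return out

def denoise(img, block=8, k=1.0):
    a, h, v, d = haar2d(img)
    return ihaar2d(a, *(block_soft_threshold(b, block, k) for b in (h, v, d)))
```

With `k=0` the threshold vanishes and the round trip reconstructs the input exactly, a convenient sanity check for the transform pair.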
Huang, Yan; Bi, Duyan; Wu, Dongpeng
2018-04-11
Many fusion methods for infrared and visible images depend on hand-tuned parameters, and artifacts can cause a loss of detail in the fused image. To overcome this, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The NSST decomposes the images into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail, while the low-frequency bands are fused under a saliency constraint, so the targets become more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective at preserving details and highlighting targets than other state-of-the-art methods.
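A minimal sketch of the two fusion constraints described above, with the NSST decomposition and the Nash-equilibrium coefficient update omitted (the per-pixel gradient-maximum rule and the saliency weighting are illustrative stand-ins, applied here to plain arrays rather than NSST bands):

```python
import numpy as np

def grad_mag(band):
    """Per-pixel gradient magnitude of a 2-D band."""
    gy, gx = np.gradient(band)
    return np.hypot(gx, gy)

def fuse_high_bands(band_ir, band_vis):
    """Gradient-driven rule: keep, per pixel, the coefficient whose local
    gradient is stronger, so edges and detail survive the fusion."""
    choose_ir = grad_mag(band_ir) >= grad_mag(band_vis)
    return np.where(choose_ir, band_ir, band_vis)

def fuse_low_bands(low_ir, low_vis):
    """Saliency-driven rule: weight each band by its deviation from its own
    mean, so locally salient (target-like) regions dominate the average."""
    s_ir = np.abs(low_ir - low_ir.mean())
    s_vis = np.abs(low_vis - low_vis.mean())
    w = s_ir / (s_ir + s_vis + 1e-12)
    return w * low_ir + (1 - w) * low_vis
```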
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes average weighted fusion, image pyramid fusion and wavelet-transform fusion, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we evaluate the effect of each fusion method. Experiments showed that the Laplacian pyramid fusion algorithm is well suited to nighttime image fusion: it reduces halos while preserving image details.
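A compact sketch of Laplacian-pyramid fusion as described above (the 2x2 box down/upsampling and the absolute-maximum selection rule are simplifying assumptions; Burt-style Gaussian kernels are more usual, and image sides must be divisible by 2^levels here):

```python
import numpy as np

def downsample(img):
    """2x2 box average."""
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4

def upsample(img):
    """Nearest-neighbor 2x upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Band-pass levels plus a coarsest residual; reconstruction is exact."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = upsample(cur) + lap
    return cur

def fuse(img_a, img_b, levels=3):
    """Per-level absolute-max selection for detail, average for the residual."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    return reconstruct(fused)
```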
Zeitoun, Rania; Hussein, Manar
2017-11-01
To reach a practical approach for interpreting MDCT findings in post-operative spine cases, and to counter the false belief that CT fails in the presence of instrumentation because of related artefacts. We performed a retrospective observational analysis of premier, early and late MDCT scans in 68 post-operative spine patients, with emphasis on instrument-related complications and osseous fusion status. We used a grading system to assess osseous fusion in 35 patients, and we further analysed the findings in failure of fusion, grade (D). We observed a variety of instrument-related complications (mostly screws medially penetrating the pedicle) and assessed osseous fusion status on late scans. We graded 11 interbody and 14 posterolateral levels as osseous fusion failure, showing additional instrument-related complications, end-plate erosive changes, adjacent-segment spondylosis and malalignment. Modern MDCT scanners provide high-quality images and are strongly recommended for assessment of the instruments and the status of osseous fusion. In post-operative imaging of the spine, it is essential to know what to look for, in relation to the date of surgery. Advances in knowledge: Modern MDCT scanners allow assessment of instrument position and integrity and of osseous fusion status in the post-operative spine. We propose a helpful algorithm to simplify interpretation of post-operative spine imaging.
Spectral CT imaging in patients with Budd-Chiari syndrome: investigation of image quality.
Su, Lei; Dong, Junqiang; Sun, Qiang; Liu, Jie; Lv, Peijie; Hu, Lili; Yan, Liangliang; Gao, Jianbo
2014-11-01
To assess the image quality of monochromatic imaging from spectral CT in patients with Budd-Chiari syndrome (BCS), fifty patients with BCS underwent spectral CT to generate conventional 140 kVp polychromatic images (group A) and monochromatic images at energy levels from 40 to 80 keV, plus 40 + 70 and 50 + 70 keV fusion images (group B), during the portal venous phase (PVP) and the hepatic venous phase (HVP). Two-sample t tests compared the vessel-to-liver contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) for the portal vein (PV), hepatic vein (HV) and inferior vena cava. Readers' subjective evaluations of image quality were recorded. The highest SNR values in group B occurred at 50 keV; the highest CNR values in group B occurred at 40 keV. Higher CNR and SNR values were obtained in the PVP for the PV (SNR 18.39 ± 6.13 vs. 10.56 ± 3.31; CNR 7.81 ± 3.40 vs. 3.58 ± 1.31) and in the HVP for the HV (3.89 ± 2.08 vs. 1.27 ± 1.55) in group B. The lower image noise for group B was at 70 keV and 50 + 70 keV (15.54 ± 8.39 vs. 18.40 ± 4.97, P = 0.0004, and 18.97 ± 7.61 vs. 18.40 ± 4.97, P = 0.0691); the results show that the 50 + 70 keV fusion image quality was better than that in group A. Monochromatic energy levels of 40-70 keV and the 40 + 70 and 50 + 70 keV fusion images can increase vascular contrast, which is helpful for the diagnosis of BCS; we select the 50 + 70 keV fusion image to acquire the best BCS images.
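The CNR and SNR comparisons in this abstract follow the usual ROI-based definitions, which can be sketched as follows (the study's exact ROI protocol and noise definition are not specified, so taking the liver ROI's standard deviation as the noise estimate is an assumption):

```python
import numpy as np

def roi_stats(img, mask):
    """Mean and standard deviation of the pixels selected by a boolean mask."""
    vals = img[mask]
    return vals.mean(), vals.std()

def snr_cnr(img, vessel_mask, liver_mask):
    """SNR = vessel mean / noise; CNR = (vessel mean - liver mean) / noise,
    with noise estimated as the standard deviation inside the liver ROI."""
    v_mean, _ = roi_stats(img, vessel_mask)
    l_mean, l_sd = roi_stats(img, liver_mask)
    return v_mean / l_sd, (v_mean - l_mean) / l_sd
```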
A novel imaging method for photonic crystal fiber fusion splicer
NASA Astrophysics Data System (ADS)
Bi, Weihong; Fu, Guangwei; Guo, Xuan
2007-01-01
Because the structure of photonic crystal fiber (PCF) is very complex, it is difficult for a traditional fiber fusion splicer to obtain the optical axial information of a PCF. A brand-new optical imaging method is therefore needed to capture cross-section information of the fiber. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD), a type of image sensor, as the imaging element. The thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches; this high-performance device provides high contrast and high resolution in low-light-level surveillance imaging. To achieve precise focusing, we use an ultra-high-precision stepper motor to adjust the position of the imaging lens. In this way we can obtain a clear image of the PCF cross-section, which can be analysed further with digital image processing techniques. This cross-section information can be used to distinguish different sorts of PCF, to compute parameters such as the size of the PCF air holes and the cladding structure, and to provide the necessary analysis data for PCF fixation, adjustment, regulation, fusion and cutting systems.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Abstract. Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
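The pixel-level convex formulation can be illustrated with a toy objective (this is not the authors' exact functional: the data term on the color image, the gradient-matching term against the monochrome image, and the fixed-step gradient descent are assumptions for illustration):

```python
import numpy as np

def laplacian(x):
    """Discrete Laplacian with periodic boundaries."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)

def fuse_ls(c, m, lam=2.0, step=0.1, iters=200):
    """Minimize the convex objective
        0.5*||x - c||^2 + 0.5*lam*||grad x - grad m||^2
    by gradient descent; the gradient-term contribution reduces to the
    Laplacian difference. Every pixel update is independent given its
    neighbors, which is what makes such schemes highly parallelizable."""
    x = c.copy()
    for _ in range(iters):
        g = (x - c) - lam * (laplacian(x) - laplacian(m))
        x -= step * g
    return x
```

With `lam=2` the objective's curvature is bounded by `1 + 8*lam = 17`, so a step below `2/17` keeps the descent stable.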
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, so low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile devices. In this paper, we select the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures; the descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate in our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
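The match-rejection step can be sketched with a basic RANSAC for a pure-translation model (the paper's improved RANSAC, the ORB matching itself and the SWT fusion are not reproduced; a 2-D translation is fixed by a single correspondence, so the minimal sample is one match):

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched keypoints and flag inliers.
    src, dst: (N, 2) arrays of matched coordinates."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the translation on all inliers of the best hypothesis
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```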
Design of a new type synchronous focusing mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen
2018-05-01
This work addresses a dual-channel telescopic imaging system composed of an infrared imaging system, a low-light-level imaging system and an image fusion module. In the fusion of low-light-level and infrared images, clear source images make it markedly easier to obtain a high-definition fused image. When the target is imaged at 15 m to infinity, focusing is needed to ensure the imaging quality of the dual-channel system; therefore, a new type of synchronous focusing mechanism is designed. The mechanism realizes the focusing function through synchronous translation of the imaging devices, and mainly comprises a screw-nut structure, a shaft-hole fit structure and a spring steel-ball clearance-eliminating structure. Starting from the synchronous focusing function of the two imaging devices, the structural characteristics of the mechanism are introduced in detail and the focusing range is analyzed. The experimental results show that the synchronous focusing mechanism offers an ingenious design, high focusing accuracy, and stable, reliable operation.
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen-saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen-saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first is an automated control-point detection algorithm for multi-sensor images: the new method employs retinal vasculature and bifurcation features, identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second is a heuristic optimization fusion algorithm: to maximize the objective function (Mutual-Pixel-Count), an iterative algorithm adjusts the initial guess of the control points at the sub-pixel level; a refined parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progress and pinpoint surgical tools. The new algorithm can readily be extended to 3-D eye, brain, or body image registration and fusion in humans or animals.
Günzel, Karsten; Cash, Hannes; Buckendahl, John; Königbauer, Maximilian; Asbach, Patrick; Haas, Matthias; Neymeyer, Jörg; Hinz, Stefan; Miller, Kurt; Kempkensteffen, Carsten
2017-01-13
To explore the diagnostic benefit of an additional image fusion of the sagittal plane in addition to the standard axial image fusion, using a sensor-based MRI/US fusion platform. Between July 2013 and September 2015, 251 patients with at least one suspicious lesion on mpMRI (rated by PI-RADS) were included in the analysis. All patients underwent MRI/US targeted biopsy (TB) in combination with a 10-core systematic prostate biopsy (SB). All biopsies were performed on a sensor-based fusion system. Group A included 162 men who received TB with an axial MRI/US image fusion. Group B comprised 89 men in whom the TB was performed with an additional sagittal image fusion. The median age in group A was 67 years (IQR 61-72) and in group B 68 years (IQR 60-71). The median PSA level in group A was 8.10 ng/ml (IQR 6.05-14) and in group B 8.59 ng/ml (IQR 5.65-12.32). In group A the proportion of patients with a suspicious digital rectal examination (DRE) (14 vs. 29%, p = 0.007) and the proportion of primary biopsies (33 vs. 46%, p = 0.046) were significantly lower. PI-RADS 3 lesions were overrepresented in group A compared to group B (19 vs. 9%; p = 0.044). Classified according to PI-RADS 3, 4 and 5, the detection rates of TB were 42, 48 and 75% in group A and 25, 74 and 90% in group B. The rate of PCa with a Gleason score ≥7 missed by TB was 33% (18 cases) in group A and 9% (5 cases) in group B; p-value 0.072. An explorative multivariate binary logistic regression analysis revealed that PI-RADS, a suspicious DRE and performing an additional sagittal image fusion were significant predictors for PCa detection in TB. 9 PCa were detected only by TB with sagittal fusion (sTB), and sTB identified 10 additional clinically significant PCa (Gleason ≥7). Performing an additional sagittal image fusion besides the standard axial fusion appears to improve the accuracy of the sensor-based MRI/US fusion platform.
Salient region detection by fusing bottom-up and top-down features extracted from a single image.
Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng
2014-10-01
Recently, some global contrast-based salient region detection models have been proposed based only on the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, such as simple averaging and selective fusion, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of the final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree-of-scattering and eccentricities of the feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each one. The depth-from-focus of the image, as a significant top-down feature for visual attention, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions in the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.
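The degree-of-scattering weighting can be sketched as follows (the spread measure and the inverse weighting are illustrative assumptions; the paper's exact confidence formula, eccentricity term and depth-from-focus guidance are not reproduced):

```python
import numpy as np

def scattering(fmap):
    """Spatial spread of a (normalized) feature map: mean squared distance
    of the salient mass from its centroid -- lower means more compact."""
    p = fmap / fmap.sum()
    ys, xs = np.indices(fmap.shape)
    cy, cx = (p * ys).sum(), (p * xs).sum()
    return (p * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum()

def fuse_maps(maps):
    """Weight each feature map by inverse scattering, so compact maps
    (high confidence) dominate the combined saliency map."""
    w = np.array([1.0 / (scattering(m) + 1e-9) for m in maps])
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, maps))
```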
Duarte, Cristiana; Pinto-Gouveia, José
2017-12-01
This study examined the phenomenology of shame experiences from childhood and adolescence in a sample of women with Binge Eating Disorder. Moreover, a path analysis was investigated testing whether the association between shame-related memories which are traumatic and central to identity, and binge eating symptoms' severity, is mediated by current external shame, body image shame and body image cognitive fusion. Participants in this study were 114 patients, who were assessed through the Eating Disorder Examination and the Shame Experiences Interview, and through self-report measures of external shame, body image shame, body image cognitive fusion and binge eating symptoms. Shame experiences where physical appearance was negatively commented or criticized by others were the most frequently recalled. A path analysis showed a good fit between the hypothesised mediational model and the data. The traumatic and centrality qualities of shame-related memories predicted current external shame, especially body image shame. Current shame feelings were associated with body image cognitive fusion, which, in turn, predicted levels of binge eating symptomatology. Findings support the relevance of addressing early shame-related memories and negative affective and self-evaluative experiences, namely related to body image, in the understanding and management of binge eating. Copyright © 2017 Elsevier B.V. All rights reserved.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
Minimizing the semantic gap in biomedical content-based image retrieval
NASA Astrophysics Data System (ADS)
Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
Morphological filtering and multiresolution fusion for mammographic microcalcification detection
NASA Astrophysics Data System (ADS)
Chen, Lulin; Chen, Chang W.; Parker, Kevin J.
1997-04-01
Mammographic images are often of relatively low contrast and poor sharpness, with a non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas considered to be non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transform to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region-growing techniques are used at each resolution level. The parent-child relations between resolution levels are used to make the final detection decision. FROC curves are computed from tests on the Nijmegen database.
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion to planetary images are rare, although image fusion is well known for its applications in Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performance was verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
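The IHS component of such a hybrid scheme can be sketched in its fast additive form (using a three-band mean as the intensity component is a common simplification; the wavelet stage of the hybrid algorithm is omitted):

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Fast IHS fusion: add (pan - intensity) to each upsampled MS band,
    so the fused image's intensity matches the panchromatic image while
    band-to-band differences (the spectral content) are preserved.
    ms_up: (H, W, 3) multispectral image upsampled to pan size; pan: (H, W)."""
    intensity = ms_up.mean(axis=2)
    return ms_up + (pan - intensity)[..., None]
```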
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy-measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate differently focused partial images into one all-in-focus image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, together with an efficient salient-feature extraction method; feature extraction is the main objective of the present work. Based on salient-feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then further processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods, and is competitive with or even outperforms state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
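The guided filter at the core of this method has a standard closed form (this sketch follows the well-known local linear model with a box-filter implementation; the paper's focus measure and morphological post-processing are not reproduced):

```python
import numpy as np

def box(img, r):
    """Mean filter of radius r via 2-D cumulative sums (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))       # zero row/col for window differencing
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Guided filter in the local linear model q = a*I + b per window:
    a and b are the least-squares fit of p against the guide I, then
    averaged over overlapping windows."""
    mI, mp = box(I, r), box(p, r)
    corr = box(I * p, r)
    var = box(I * I, r) - mI * mI
    a = (corr - mI * mp) / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

Filtering a constant input against any guide returns the constant, and small `eps` makes the filter nearly identity when the guide filters itself; both are quick correctness checks.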
Fusion of infrared polarization and intensity images based on improved toggle operator
NASA Astrophysics Data System (ADS)
Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua
2018-01-01
Integration of infrared polarization and intensity images is a new topic in infrared image understanding and interpretation. The abundant infrared details and targets from the intensity image and the salient edge and shape information from the polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and heavily reduce redundancy among different scales. Firstly, the multi-scale image features of the infrared polarization and intensity images are extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced by using the spatial scale. Thirdly, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying mean-value weighting to the smoothed source images. Finally, the fusion image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective inspection of the experimental results indicate that the proposed method performs better in preserving detail and edge information as well as improving image contrast.
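A toggle operator of the classical toggle-contrast form can serve as the feature extractor the abstract describes. The following sketch, with a flat square structuring element, is an illustrative baseline, not the paper's improved multi-scale operator; running `toggle_features` at several structuring-element sizes would give a crude multi-scale feature stack.

```python
import numpy as np

def _dilate(f, size):
    pad = size // 2
    p = np.pad(f, pad, mode="edge")
    return np.lib.stride_tricks.sliding_window_view(p, (size, size)).max(axis=(-1, -2))

def _erode(f, size):
    pad = size // 2
    p = np.pad(f, pad, mode="edge")
    return np.lib.stride_tricks.sliding_window_view(p, (size, size)).min(axis=(-1, -2))

def toggle_operator(f, size=3):
    """Classical toggle-contrast operator: each pixel snaps to the
    closer of its local dilation and erosion (ties keep the original)."""
    f = f.astype(np.float64)
    d, e = _dilate(f, size), _erode(f, size)
    closer_d = (d - f) < (f - e)
    closer_e = (d - f) > (f - e)
    out = f.copy()
    out[closer_d] = d[closer_d]
    out[closer_e] = e[closer_e]
    return out

def toggle_features(f, size=3):
    """Bright/dark features as the positive/negative parts of the
    toggle residue, one simple way to extract image features."""
    t = toggle_operator(np.asarray(f, dtype=np.float64), size)
    return np.maximum(t - f, 0), np.maximum(f - t, 0)
```

On a blurred step edge the operator pushes transition pixels toward the nearer extreme, which is what makes the residue useful as an edge/detail feature.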
Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning.
Dong, Pei; Guo, Yangrong; Gao, Yue; Liang, Peipeng; Shi, Yonghong; Wang, Qian; Shen, Dinggang; Wu, Guorong
2016-10-01
Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in various neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in magnetic resonance (MR) images. Hence, the ambiguity of patch-wise similarity prevents the recently successful multi-atlas patch-based label fusion methods from performing as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Specifically, we achieve this goal in three ways. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches with the benefit of harnessing population priors from the multi-atlas framework. Second, besides low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationships. Since the context features are calculated on a tentatively estimated label probability map, our hyper-graph-learning-based label propagation becomes a deep and self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on others (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to the nearby difficult-to-label voxels. This hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate our label fusion method in segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over state-of-the-art label fusion methods.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of the source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN in processing both the low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail information that is suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN when processing the LF and HF sub-bands. The proposed method is extended to fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
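A simplified PCNN of the kind referred to here keeps only the feeding input (the stimulus itself), a linking term driven by neighbouring firings, and a dynamic threshold with exponential decay. The sketch below is a generic simplified model with illustrative parameter values, not the authors' feature-motivated variant; in fusion pipelines the firing-count map is typically compared across source sub-bands to select coefficients.

```python
import numpy as np

def neighbor_sum(y):
    """Sum of each pixel's 8-neighbourhood (zero padding)."""
    p = np.pad(y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2]               + p[1:-1, 2:] +
            p[2:,  :-2] + p[2:,  1:-1] + p[2:,  2:])

def simplified_pcnn(stim, n_iter=10, beta=0.2, v_l=1.0,
                    a_theta=0.2, v_theta=20.0):
    """Minimal simplified PCNN: feeding input F equals the stimulus,
    linking L comes from neighbouring firings, internal activity is
    U = F * (1 + beta * L), and the threshold decays exponentially
    with a jump after each firing. Parameter values are assumptions.
    Returns the firing-count map used as an activity measure."""
    s = stim.astype(np.float64)
    y = np.zeros_like(s)
    theta = np.full_like(s, s.max())        # initial threshold
    fire_count = np.zeros_like(s)
    decay = np.exp(-a_theta)
    for _ in range(n_iter):
        l = v_l * neighbor_sum(y)           # linking input
        u = s * (1.0 + beta * l)            # internal activity
        y = (u > theta).astype(np.float64)  # fire where U exceeds threshold
        theta = decay * theta + v_theta * y # decay plus refractory jump
        fire_count += y
    return fire_count
```

Brighter (or more strongly feature-motivated) inputs fire earlier and more often, so comparing firing counts across sources gives a per-coefficient selection rule.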
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. Visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
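The LBP descriptor applied to the low-frequency parts is easy to reproduce. Below is the basic 3x3 operator only (not the multi-scale LGBP used for the other bands):

```python
import numpy as np

def lbp8(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of each pixel against
    the centre and pack the comparison bits into one code (0..255).
    Border pixels are dropped, so the output is two pixels smaller
    in each dimension."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (nb >= c).astype(np.int64) << bit
    return code
```

Histograms of these codes form the per-band feature vectors; score-level fusion would then, for example, combine the per-modality matching scores with a weighted sum before classification.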
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges in visible and infrared image fusion is automatically determining an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. At first, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Then image fusion is locally applied to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminative Analysis (LDA), is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves a competitive performance compared with other fusion algorithms at a relatively low computational cost.
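The first stage, fuzzy c-means clustering of infrared intensities to highlight hotspots, can be sketched compactly. This is a generic scalar-intensity FCM with assumed parameters (k = 2 clusters, fuzzifier m = 2), not the authors' exact configuration.

```python
import numpy as np

def fcm_1d(values, k=2, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means on scalar intensities: alternately update fuzzy
    memberships and centroids. Returns (centroids, membership matrix
    of shape (k, n))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=np.float64)
    u = rng.random((k, x.size))
    u /= u.sum(axis=0)                             # columns sum to 1
    for _ in range(n_iter):
        um = u ** m
        c = (um @ x) / um.sum(axis=1)              # centroid update
        d = np.abs(x[None, :] - c[:, None]) + 1e-12  # point-centroid distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)                  # membership update
    return c, u
```

Pixels with high membership in the brighter cluster would be flagged as candidate hotspot regions, and the surrounding area taken as background.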
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images that is more accurate and reliable than information from any single image: since different images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of applied data fusion technology and is now widely used in computer vision, remote sensing, robot vision, medical image processing, and the military field. This paper presents the contents and research methods of image fusion, surveys its status at home and abroad, and analyzes development trends.
Djan, Igor; Petrović, Borislava; Erak, Marko; Nikolić, Ivan; Lucić, Silvija
2013-08-01
The development of imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) has had a great impact on radiotherapy treatment planning by improving the localization of target volumes. Improved localization allows better local control of tumor volumes and also minimizes geographical misses. Mutual information is obtained by registration and fusion of images, achieved manually or automatically. The aim of this study was to validate the CT-MRI image fusion method and compare delineation obtained by CT alone versus CT-MRI image fusion. The image fusion software (XIO CMS 4.50.0) was applied to delineate 16 patients. The patients were scanned on CT and MRI in the treatment position within an immobilization device before the initial treatment. The gross tumor volume (GTV) and clinical target volume (CTV) were delineated on CT alone and on CT+MRI images consecutively, and image fusion was obtained. Image fusion showed that a CTV delineated on a CT image study set is mainly inadequate for treatment planning in comparison with a CTV delineated on a CT-MRI fused image study set. Fusion of different modalities enables the most accurate target volume delineation. This study shows that registration and image fusion allow precise target localization in terms of GTV and CTV and local disease control.
[Possibilities of sonographic image fusion: Current developments].
Jung, E M; Clevert, D-A
2015-11-01
For diagnostic and interventional procedures ultrasound (US) image fusion can be used as a complementary imaging technique. Image fusion has the advantage of real time imaging and can be combined with other cross-sectional imaging techniques. With the introduction of US contrast agents sonography and image fusion have gained more importance in the detection and characterization of liver lesions. Fusion of US images with computed tomography (CT) or magnetic resonance imaging (MRI) facilitates the diagnostics and postinterventional therapy control. In addition to the primary application of image fusion in the diagnosis and treatment of liver lesions, there are more useful indications for contrast-enhanced US (CEUS) in routine clinical diagnostic procedures, such as intraoperative US (IOUS), vascular imaging and diagnostics of other organs, such as the kidneys and prostate gland.
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
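The first fusion model, a reliability-weighted voting scheme over co-registered classified mosaics, can be sketched as follows; the isotropic Markov-random-field regularisation and the inpainting of undecidable regions are omitted here.

```python
import numpy as np

def vote_fusion(maps, reliabilities=None):
    """Pixel-wise fusion of K classified maps by (optionally
    reliability-weighted) voting. `maps` is a (K, H, W) stack of
    integer class labels; unweighted voting is used when the
    reliability of each source is unknown."""
    maps = np.asarray(maps)
    n_classes = int(maps.max()) + 1
    w = np.ones(len(maps)) if reliabilities is None else np.asarray(reliabilities, float)
    votes = np.zeros((n_classes,) + maps.shape[1:])
    for m, wi in zip(maps, w):
        for c in range(n_classes):
            votes[c] += wi * (m == c)   # accumulate weighted votes per class
    return votes.argmax(axis=0)         # winning class per pixel
```

The paper's second model replaces these ad-hoc weights with source reliabilities inside a probabilistic model, and the MRF smooths the per-pixel decisions.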
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method introducing region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of a fusion system, on an airborne photoelectric platform. The method involves segmenting the regions of an IR image by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. It also gives a fusion result superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems. PMID:28505137
Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver
2014-05-01
A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed as VQ-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. In contrast, the high-level CNN features can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep-learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes achieve superior performance compared to state-of-the-art methods with either low- or high-level features in terms of retrieval rate. Thus, the method can be a strong candidate for various image-retrieval-related applications.
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of very important value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved and a new algorithm for medical image fusion is presented, in which the high-frequency and low-frequency coefficients are studied separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm can effectively retain detail information from the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
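Wavelet fusion of this general shape can be sketched with a one-level Haar transform. Note the substitutions: the paper selects high-frequency coefficients by regional edge intensity and low-frequency coefficients from image edges, whereas this simplified sketch uses an absolute-maximum rule for the high-frequency bands and averaging for the low-frequency band.

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform (even-sized input assumed)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img_a, img_b):
    """Average the low-frequency band; for each high-frequency band
    keep the coefficient of larger magnitude (a common stand-in for
    the paper's regional-edge-intensity rule)."""
    ca, cb = haar2(img_a.astype(float)), haar2(img_b.astype(float))
    ll = (ca[0] + cb[0]) / 2.0
    hf = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *hf)
```

The paper's adaptive rules replace both the averaging and the absolute-maximum selection with edge-driven decisions, but the transform/fuse/reconstruct pipeline is the same.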
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into one comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable, and complementary descriptions of the scene in fused images make these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and progress in image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we survey the algorithmic developments of IR and VI image fusion. In this paper, we first characterize the applications of IR and VI image fusion to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and make corresponding analyses. At last, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there still exist further improvements or potential research directions in different applications of IR and VI image fusion.
Objective quality assessment for multiexposure multifocus image fusion.
Hassen, Rania; Wang, Zhou; Salama, Magdy M A
2015-09-01
There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which, performance evaluation shows that the proposed fusion quality index correlates well with subjective scores, and gives a significant improvement over the existing fusion quality measures.
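The three factors can be approximated with crude global proxies, shown below. The proposed index in the paper uses considerably more elaborate local measures, so these functions are illustrative stand-ins only.

```python
import numpy as np

def sharpness(img):
    """Global sharpness proxy: mean gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

def contrast_preservation(fused, ref):
    """Ratio of global contrast (standard deviation) of the fused
    image to that of a reference; ~1 means contrast is preserved."""
    return float(fused.std() / (ref.std() + 1e-12))

def structure_preservation(fused, ref):
    """Pearson correlation between fused and reference pixels as a
    simple structural-similarity proxy."""
    f, r = fused.ravel().astype(float), ref.ravel().astype(float)
    return float(np.corrcoef(f, r)[0, 1])
```

A real fusion quality index would compute such terms locally (per window, per exposure/focus level) and pool them, which is what allows it to track subjective scores.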
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near-infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near-infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions.
Kapfhammer, A; Winkens, T; Lesser, T; Reissig, A; Steinert, M; Freesmeyer, M
2015-01-01
To retrospectively evaluate the feasibility and value of CT-CT image fusion to assess the shift of peripheral lung cancers with/without chest wall infiltration, comparing computed tomography acquisitions in shallow breathing (SB-CT) and deep-inspiration breath-hold (DIBH-CT) in patients undergoing FDG-PET/CT for lung cancer staging. Image fusion of SB-CT and DIBH-CT was performed on a multimodal workstation used for nuclear medicine fusion imaging. The distance of intrathoracic landmarks and the positional shift of tumours were measured using a semi-transparent overlay of both CT series. Statistical analyses were adjusted for confounders of tumour infiltration. Cutoff levels were calculated for the prediction of non-infiltration versus infiltration. The lateral pleural recessus and diaphragm showed the largest respiratory excursions. Infiltrating lung cancers showed more limited respiratory shifts than non-infiltrating tumours. Large respiratory tumour motility accurately predicted non-infiltration; however, the tumour shifts were limited and variable, limiting the accuracy of prediction. This pilot fusion study proved feasible and allowed a simple analysis of the respiratory shifts of peripheral lung tumours using CT-CT image fusion in a PET/CT setting. The calculated cutoffs were useful in predicting the exclusion of chest wall infiltration but did not accurately predict tumour infiltration. This method can provide additional qualitative information, without the need for additional investigations, in patients undergoing PET/CT whose lung cancers contact the chest wall but show unclear CT evidence of infiltration. Considering the small sample size investigated, further studies are necessary to verify the obtained results.
A New Approach to Image Fusion Based on Cokriging
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.
2005-01-01
We consider the image fusion problem involving remotely sensed data and introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is studied using PCA and wavelet-based fusion. We then propose utilizing a geostatistics-based interpolation method called cokriging as a new approach to image fusion.
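Cokriging generalises ordinary kriging by adding cross-covariances between sensors to the same linear system. As a starting point, a minimal ordinary-kriging interpolator with an assumed exponential covariance model looks like this:

```python
import numpy as np

def ordinary_kriging(coords, values, query, length=1.0):
    """Ordinary kriging estimate at `query` from scattered samples,
    using an assumed exponential covariance C(h) = exp(-h / length).
    Cokriging would extend the same system with cross-covariance
    blocks for a second sensor's samples."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    n = len(values)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-h / length)
    # augmented system enforcing weights that sum to 1 (unbiasedness)
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = cov
    a[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = np.exp(-np.linalg.norm(coords - np.asarray(query, float), axis=-1) / length)
    w = np.linalg.solve(a, b)[:n]
    return float(w @ values)
```

Kriging is an exact interpolator: querying at a sample location returns that sample's value, and the covariance model (here an assumed exponential) controls how information decays with distance.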
Impact of Age on Change in Self-Image 5 Years After Complex Spinal Fusion (≥5 Levels).
Elsamadicy, Aladine A; Adogwa, Owoicho; Sergesketter, Amanda; Behrens, Shay; Hobbs, Cassie; Bridwell, Keith H; Karikari, Isaac O
2017-01-01
Spinal deformities that require ≥5 fusion levels are difficult and challenging for both the surgeon and patient. Corrections of moderate to severe deformities have been shown to improve patient-reported outcomes (PROs) and provide patients with a better quality of life. Self-image is an important PRO because it sheds insight into the patient's perception of health and serves as a proxy for satisfaction in patients with spine deformity undergoing corrective surgery. However, with an aging population, the impact of age on long-term change in self-image is unknown. The aim of this study is to determine the effects of age on self-image 5 years after elective complex spinal fusion (≥5 levels). This was a retrospective analysis of prospectively collected data on 55 adult patients (≥18 years old) undergoing ≥5 levels of spinal fusion to the sacrum with iliac fixation from January 2002 to December 2008. Patients were grouped by age: young (<60 years old) and older (≥60 years old). Patient demographics, comorbidities, preoperative variables (sagittal and Cobb angles), and postoperative complication rates were collected. All patients had prospectively collected outcome measures and a minimum of 5 years of follow-up. The PRO instrument SRS-22r (function, self-image, mental health, and pain) was completed before surgery and then at follow-up (at least 5 years after surgery). The primary outcome investigated in this study was the change in self-image after surgery. Baseline characteristics and preoperative variables were similar in both cohorts. There were no significant differences in intraoperative variables, including the mean ± standard deviation number of fusion levels between the cohorts (young, 11.2 ± 4.3 vs. older, 12.1 ± 4.0; P = 0.42). Complication rates were similar between the cohorts, with no significant differences in the types of complications (young, 29.63% vs. older, 25.0%; P = 0.77).
There were no significant differences in preoperative and follow-up PROs between the cohorts. The mean ± standard deviation preoperative and follow-up self-image scores were (young, 2.35 ± 0.58 vs. older, 2.68 ± 0.64; P = 0.51) and (young, 3.82 ± 0.63 vs. older, 3.51 ± 0.94), respectively. There were no significant differences in the change of function, mental health, or pain between the cohorts. However, the younger cohort experienced a significantly greater overall change in self-image than did the older cohort (young, 1.49 ± 0.87 vs. older, 0.70 ± 1.14; P = 0.01). Our study suggests that age significantly affects the perception of self-image after deformity correction surgery, with younger patients reporting a greater change from baseline in self-image after surgery. Further studies are necessary to corroborate our observed findings.
Ashamed and Fused with Body Image and Eating: Binge Eating as an Avoidance Strategy.
Duarte, Cristiana; Pinto-Gouveia, José; Ferreira, Cláudia
2017-01-01
Binge Eating Disorder (BED) is currently recognized as a severe disorder associated with relevant psychiatric and physical comorbidity and marked emotional distress. Shame is a specific negative emotion that has been highlighted as central in eating disorders. However, the effect of shame, and its underlying mechanisms, on the severity of binge eating symptomatology remains unclear. This study examines the role of shame, depressive symptoms, weight and shape concerns, eating concerns, and body image-related cognitive fusion on the severity of binge eating symptomatology. Seventy-three patients with a diagnosis of BED, established through a clinical interview (Eating Disorder Examination 17.0D), participated in this study and completed measures of external shame, body image-related cognitive fusion, depressive symptoms, and binge eating symptomatology. Results revealed positive associations between binge eating severity and depressive symptoms, shame, weight and shape concerns, eating concerns, and body image-related cognitive fusion. A path analysis showed that, when controlling for the effect of depressive symptoms, external shame has a direct effect on binge eating severity and an indirect effect mediated by increased eating concern and higher levels of body image-related cognitive fusion. Results confirmed the plausibility of the model, which explained 43% of the severity of binge eating symptoms. The proposed model suggests that, in BED patients, perceiving that others see the self negatively may be associated with an entanglement with body image-related thoughts and concerns about eating, which may, in turn, fuel binge eating symptoms. Findings have important clinical implications, supporting the relevance of addressing shame and associated processes in binge eating. Shame is a significant predictor of symptomatology severity in BED patients. Shame significantly impacts binge eating, even when controlling for depressive symptoms.
Shame significantly predicts body image-related cognitive fusion and eating concern. Body image-related cognitive fusion and eating concern mediate the link between shame and binge eating. Binge eating may be seen as an avoidance strategy from negative self-evaluations. Copyright © 2015 John Wiley & Sons, Ltd.
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The approach is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. In particular, for small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
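The partition-histogram LBP feature described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the DCT branch, the classifiers, and the decision-level combination are omitted, and the 4x4 grid size is an assumed parameter.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # set one bit per neighbour that is >= the centre pixel
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def partition_histograms(codes, grid=4):
    """Concatenate per-cell LBP histograms (the 'partition histogram' feature)."""
    h, w = codes.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = codes[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))  # normalise each cell
    return np.concatenate(feats)

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64)).astype(np.int32)  # stand-in face image
feat = partition_histograms(lbp_image(face))                 # 4*4*256 = 4096 dims
```

In a full pipeline, a feature vector like `feat` from each modality would be scored by its own classifier before the decision-level fusion step.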
Higuchi-Sanabria, Ryo; Garcia, Enrique J; Tomoiaga, Delia; Munteanu, Emilia L; Feinstein, Paul; Pon, Liza A
2016-01-01
Saccharomyces cerevisiae is widely used for imaging fluorescently tagged protein fusions. Fluorescent proteins can easily be inserted into yeast genes at their chromosomal locus, by homologous recombination, for expression of tagged proteins at endogenous levels. This is especially useful for incorporation of multiple fluorescent protein fusions into a single strain, which can be challenging in organisms where genetic manipulation is more complex. However, the availability of optimal fluorescent protein combinations for 3-color imaging is limited. Here, we have characterized a combination of fluorescent proteins, mTFP1/mCitrine/mCherry, for multicolor live cell imaging in S. cerevisiae. This combination can be used with conventional blue dyes, such as DAPI, for potential four-color live cell imaging.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams. The SSIM index is used to evaluate the high-frequency information of the spectrum diagrams, adaptively assigning the weights in the fusion process. After the new spectrum diagram is obtained according to the fusion rule, the final fusion image is produced by the inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high-quality fusion images.
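The GIHS backbone of such a pipeline, stripped of the 2D-PWVD analysis and the SSIM-weighted rules that are the paper's contribution, reduces to injecting the difference between the PAN image and the MS intensity into every band. A minimal sketch under that simplification:

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    """Generalized IHS fusion: inject the PAN-minus-intensity detail into each band.

    ms:  (H, W, B) multispectral image upsampled to the PAN grid
    pan: (H, W)    panchromatic image
    """
    intensity = ms.mean(axis=2)          # simple intensity component
    detail = pan - intensity             # spatial detail to inject
    return ms + detail[:, :, None]       # add the same detail to every band

rng = np.random.default_rng(1)
ms = rng.random((32, 32, 3))             # stand-in MS data
pan = rng.random((32, 32))               # stand-in PAN data
fused = gihs_pansharpen(ms, pan)
```

A useful sanity check on this form of GIHS is that the intensity (band mean) of the fused image equals the PAN image exactly, which is what makes the detail injection spectrally consistent.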
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
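The ZNCC similarity used for ghost detection can be written directly from its definition. This is a sketch only: it is applied to toy patches rather than super-pixels, and the weight-map adjustment is not shown. The point of ZNCC here is that it is invariant to the affine intensity changes produced by different exposures, so a low score indicates a moving object rather than an exposure change:

```python
import numpy as np

def zncc(a, b, eps=1e-12):
    """Zero-mean normalized cross-correlation between two pixel groups."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)

ref = np.array([[10, 20], [30, 40]])
same_exposure = ref * 2 + 5                 # linearly related patch: ZNCC ~ 1
ghosted = np.array([[40, 10], [20, 30]])    # shuffled content: low ZNCC
```

Super-pixels whose ZNCC against the reference falls below a threshold would have their fusion weights suppressed.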
Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana
2017-10-17
Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
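Laplacian pyramid fusion, recommended above over IHS sharpening, decomposes each image into band-pass levels, fuses level by level, and collapses the result. The sketch below is an assumption-laden illustration of that structure, not of the authors' pipeline: it uses a crude 2x2 block average in place of the usual Gaussian filter, a max-absolute fusion rule, and random arrays in place of real EM/SIMS data.

```python
import numpy as np

def down2(img):                      # 2x2 block average (stand-in for Gaussian blur+decimate)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up2(img):                        # nearest-neighbour expand
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down2(img)
        pyr.append(img - up2(low))   # band-pass residual at this level
        img = low
    pyr.append(img)                  # coarsest approximation
    return pyr

def fuse_pyramids(pa, pb):
    """Max-absolute rule on detail bands, average on the coarsest band."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = up2(img) + band
    return img

rng = np.random.default_rng(2)
em = rng.random((64, 64))            # stand-in for the high-resolution EM image
sims = rng.random((64, 64))          # stand-in for the SIMS chemical channel
out = reconstruct(fuse_pyramids(laplacian_pyramid(em, 3), laplacian_pyramid(sims, 3)))
```

Because each level stores the exact residual against its upsampled coarse version, decomposing and reconstructing a single image is lossless, which is a convenient correctness check for any pyramid implementation.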
Makino, Yuki; Imai, Yasuharu; Igura, Takumi; Hori, Masatoshi; Fukuda, Kazuto; Sawai, Yoshiyuki; Kogita, Sachiyo; Fujita, Norihiko; Takehara, Tetsuo; Murakami, Takamichi
2015-01-01
To assess the feasibility of fusion of pre- and post-ablation gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA-MRI) to evaluate the effects of radiofrequency ablation (RFA) of hepatocellular carcinoma (HCC), compared with similarly fused CT images. This retrospective study included 67 patients with 92 HCCs treated with RFA. Fusion images of pre- and post-RFA dynamic CT, and of pre- and post-RFA Gd-EOB-DTPA-MRI, were created using a rigid registration method. The minimal ablative margin measured on fusion imaging was categorized into three groups: (1) tumor protruding outside the ablation zone boundary, (2) ablative margin 0-<5.0 mm beyond the tumor boundary, and (3) ablative margin ≥5.0 mm beyond the tumor boundary. The categorization of minimal ablative margins was compared between CT and MR fusion images. In 57 (62.0%) HCCs, treatment evaluation was possible on both CT and MR fusion images, and the overall agreement between them for the categorization of minimal ablative margin was good (κ coefficient = 0.676, P < 0.01). MR fusion imaging enabled treatment evaluation in a significantly larger number of HCCs than CT fusion imaging (86/92 [93.5%] vs. 62/92 [67.4%], P < 0.05). Fusion of pre- and post-ablation Gd-EOB-DTPA-MRI is feasible for treatment evaluation after RFA. It may enable accurate treatment evaluation in cases where CT fusion imaging is not helpful.
[Research progress of multi-model medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging, and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges, and possible future research directions are discussed.
Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis
2012-05-01
In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
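The core loop of such an adaptive decision fusion scheme is: linearly combine the sub-algorithm confidences with the current weights, then project the weight vector so the fused output agrees with the oracle's feedback. The sketch below uses a Euclidean (affine) projection for brevity, whereas the EADF framework described above uses entropic projections onto convex sets; the numbers are illustrative.

```python
import numpy as np

def fuse(weights, decisions):
    """Linear combination of sub-algorithm confidence values."""
    return float(weights @ decisions)

def update(weights, decisions, oracle_label, eps=1e-12):
    """Project the weights onto the hyperplane w . d = y so the fused
    decision matches the oracle (Euclidean projection; the paper's
    method performs entropic projections instead)."""
    err = oracle_label - weights @ decisions
    return weights + err * decisions / (decisions @ decisions + eps)

w = np.ones(3) / 3                      # start with equal weights
d = np.array([0.9, -0.2, 0.4])          # sub-algorithm confidences, centred at zero
w = update(w, d, oracle_label=1.0)      # oracle (e.g. the guard) confirms "fire"
```

After one projection the fused decision on the same input agrees with the oracle exactly; over a stream of inputs, repeated projections adapt the weights toward the better-performing sub-algorithms.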
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou smear (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
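The heart of any region-based multifocus method is a per-region focus measure used to decide which source image supplies each region. A minimal sketch with a Laplacian-energy focus measure and a fixed two-region labelling; the method above instead derives regions from mean-shift segmentation and adds an artifact-removal step, both omitted here:

```python
import numpy as np

def focus_measure(patch):
    """Sum of squared discrete Laplacian responses: high for in-focus texture."""
    lap = (-4 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float((lap ** 2).sum())

def fuse_by_region(stack, labels):
    """For every labelled region, copy pixels from the sharpest slice of the stack."""
    fused = np.zeros_like(stack[0])
    for region in np.unique(labels):
        mask = labels == region
        best = max(stack, key=lambda s: focus_measure(np.where(mask, s, 0.0)))
        fused[mask] = best[mask]
    return fused

rng = np.random.default_rng(3)
sharp = rng.random((32, 32))                      # fully focused slice (toy data)
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9   # defocused slice
labels = np.zeros((32, 32), dtype=int)
labels[:, 16:] = 1                                # two hand-made "regions"
fused = fuse_by_region([sharp, blurred], labels)
```

Copying whole regions from a single source, rather than blending pixel weights, is what keeps original pixel values intact and fusion artifacts confined to region boundaries.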
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
In radiation therapy, computed tomography (CT) simulation is used for treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work tried to identify the practical issues in the combination of CT and MRI images in real clinical cases, evaluating the effect of various factors on image fusion quality. In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated. These parameters include the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. According to the results, the dominant factor in the quality of image fusion was the difference in slice gap between CT and MRI images (cor = 0.86, P < 0.005), and the second factor was the angle between CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient position during MRI imaging should be chosen to be consistent with the CT images of the patient in terms of location and angle.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth-of-field extension in integral imaging by applying an image fusion method to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focus plane across the scene. Simply applying an image fusion method to the elemental images, which hold rich parallax information, does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish
2016-11-01
In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. In this process, an image is first divided into a number of blocks, and for each block we compute the phase component of the Fourier transform. The phase component of each block reflects the gray-level variation within the block but contains a large correlation among its entries. Hence a singular value decomposition (SVD) technique is applied to generate a singular value for each block. Then a thresholding procedure is applied to these singular values to identify edgy and smooth regions, and some seed points are selected for segmentation. By considering each seed point we perform a binary segmentation of the complete MRI, and hence with all seed points we get an equal number of binary images. A parcel based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification, and intracranial neoplasm (brain tumor) detection. The proposed technique is evaluated by comparing its results against seven state-of-the-art techniques with six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
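The seed-selection step above can be illustrated in isolation: the largest singular value of a block's FFT phase matrix is zero for a flat block (its phase matrix is all zeros) and substantially larger for a textured block, so thresholding the singular values separates smooth from edgy blocks. This sketch covers that one step only, with an assumed 8x8 block size:

```python
import numpy as np

def block_phase_singular_value(block):
    """Largest singular value of the phase component of the block's 2-D FFT."""
    spectrum = np.fft.fft2(block)
    phase = np.angle(spectrum)                     # phase matrix of the block
    return np.linalg.svd(phase, compute_uv=False)[0]

rng = np.random.default_rng(4)
smooth = np.full((8, 8), 0.5)                      # flat block: only a DC term
edgy = rng.random((8, 8))                          # textured block: rich phase
s_smooth = block_phase_singular_value(smooth)
s_edgy = block_phase_singular_value(edgy)
```

Blocks with a singular value above the threshold would then be treated as edgy and contribute seed points for the region-growing stage.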
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
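The quaternion encoding itself is straightforward: the four spectral bands become the four components of a pixel-wise quaternion matrix. The sketch below shows only this representation plus the per-pixel modulus; the PCA/DWT feature extraction and nearest-neighbour matching from the study are omitted, and the assignment of bands to quaternion components is an assumption for illustration.

```python
import numpy as np

def quaternion_stack(r, g, b, nir):
    """Encode four spectral bands as the (w, x, y, z) components of a
    pixel-wise quaternion matrix (band-to-component order is assumed)."""
    return np.stack([nir, r, g, b], axis=-1)

def quaternion_modulus(q):
    """Per-pixel quaternion magnitude |q| = sqrt(w^2 + x^2 + y^2 + z^2)."""
    return np.sqrt((q ** 2).sum(axis=-1))

rng = np.random.default_rng(5)
bands = [rng.random((16, 16)) for _ in range(4)]   # stand-ins for R, G, B, NIR
q = quaternion_stack(*bands)                       # (16, 16, 4) quaternion matrix
mod = quaternion_modulus(q)
```

Keeping all four bands inside one algebraic object, instead of fusing them at the image or score level, is what lets downstream transforms operate on the joint spectral information.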
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at the fusion of multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of multifocus fusion, the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is the best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating the fusion result.
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
During traditional multi-resolution infrared and visible image fusion processing, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. So a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is proposed. The interesting target regions are extracted from the source images by introducing the motion features gained from the modified attention model, and the source images are fused in grayscale in the curvelet domain via rules based on the physical characteristics of the sensors. The final fusion image is obtained by mapping the extracted targets into the gray result with a proper pseudo-color. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fusion image.
Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying
2016-12-20
The arrival of the precision medicine plan brings new opportunities and challenges for the precise diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by image fusion systems. This review aimed to update the application of multimodality image fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduce several multimodality image fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected; duplicated papers were excluded. Multimodality image fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality image fusion systems can provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality image fusion technology will find increasingly wide application in clinical practice.
2013-10-01
Award Number: W81XWH-12-1-0597. Title: Parametric PET/MR Fusion Imaging to Differentiate Aggressive from Indolent Primary Prostate Cancer with Application for Image-Guided Prostate Cancer Biopsies. The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate…
Multilevel cervical laminectomy and fusion with posterior cervical cages
Bou Monsef, Jad N; Siemionow, Krzysztof B
2017-01-01
Context: Cervical spondylotic myelopathy (CSM) is a progressive disease that can result in significant disability. Single-level stenosis can be effectively decompressed through either anterior or posterior techniques. However, multilevel pathology can be challenging, especially in the presence of significant spinal stenosis. Three-level anterior decompression and fusion are associated with higher nonunion rates and prolonged dysphagia. Posterior multilevel laminectomies with foraminotomies jeopardize the bone stock required for stable fixation with lateral mass screws (LMSs). Aims: This is the first case series of multilevel laminectomy and fusion for CSM instrumented with posterior cervical cages. Settings and Design: Three patients presented with a history of worsening neck pain, numbness in bilateral upper extremities and gait disturbance, and examination findings consistent with myeloradiculopathy. Cervical magnetic resonance imaging demonstrated multilevel spondylosis resulting in moderate to severe bilateral foraminal stenosis at three cervical levels. Materials and Methods: The patients underwent a multilevel posterior cervical laminectomy and instrumented fusion with intervertebral cages placed between bilateral facet joints over three levels. Oswestry disability index and visual analog scores were collected preoperatively and at each follow-up. Pre- and post-operative images were analyzed for changes in cervical alignment and presence of arthrodesis. Results: Postoperatively, all patients showed marked improvement in neurological symptoms and neck pain. They had full resolution of radicular symptoms by 6 weeks postoperatively. At 12-month follow-up, they demonstrated solid arthrodesis on X-rays and computed tomography scan. Conclusions: Posterior cervical cages may be an alternative option to LMSs in multilevel cervical laminectomy and fusion for cervical spondylotic myeloradiculopathy. PMID:29403242
Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa
1995-09-01
Loral Canada completed (May 1995) a Department of National Defense (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute measurement oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture will lead to a higher-performance fusion system. However, there are a number of system and fusion architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types (attribute, imaging, tracking, etc.) of data may require different degrees of processing before they can be used within a fusion system efficiently; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g., variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft in the context of the mission requirements and environmental conditions.
Shirvani, Atefeh; Jabbari, Keyvan; Amouheidari, Alireza
2017-01-01
Background: In radiation therapy, computed tomography (CT) simulation is used for treatment planning to define the location of the tumor. Magnetic resonance imaging (MRI)-CT image fusion leads to more efficient tumor contouring. This work identifies the practical issues in combining CT and MRI images in real clinical cases and evaluates the effect of various factors on image fusion quality. Materials and Methods: In this study, the data of thirty patients with brain tumors were used for image fusion. The effect of several parameters on the possibility and quality of image fusion was evaluated. These parameters included the angle of the patient's head on the bed, slice thickness, slice gap, and height of the patient's head. Results: According to the results, the dominant factor in the quality of image fusion was the difference in slice gap between the CT and MRI images (cor = 0.86, P < 0.005), and the second factor was the angle between the CT and MRI slices in the sagittal plane (cor = 0.75, P < 0.005). In 20% of patients, this angle was more than 28° and image fusion was not efficient. In 17% of patients, the difference in slice gap between CT and MRI was >4 cm and image fusion quality was <25%. Conclusion: The most important problem in image fusion is that MRI images are taken without regard to their use in treatment planning. In general, parameters related to the patient's position during MRI imaging should be chosen to be consistent with the patient's CT images in terms of location and angle. PMID:29387672
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of a scene. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain; this not only effectively preserves the details of the different source images, but also represents their irregular areas well. On the other hand, a novel weighted-average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
Higuchi-Sanabria, Ryo; Garcia, Enrique J.; Tomoiaga, Delia; Munteanu, Emilia L.; Feinstein, Paul; Pon, Liza A.
2016-01-01
Saccharomyces cerevisiae are widely used for imaging fluorescently tagged protein fusions. Fluorescent proteins can easily be inserted into yeast genes at their chromosomal locus, by homologous recombination, for expression of tagged proteins at endogenous levels. This is especially useful for incorporation of multiple fluorescent protein fusions into a single strain, which can be challenging in organisms where genetic manipulation is more complex. However, the availability of optimal fluorescent protein combinations for 3-color imaging is limited. Here, we have characterized a combination of fluorescent proteins, mTFP1/mCitrine/mCherry for multicolor live cell imaging in S. cerevisiae. This combination can be used with conventional blue dyes, such as DAPI, for potential four-color live cell imaging. PMID:26727004
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using both datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. First, we describe the process of data acquisition, then the processing of both datasets into a proper format for the subsequent fusion, and finally the process of fusion itself. The process of fusion can be divided into two main parts: transformation and remapping. In the first (transformation) part, the two images are related by matching similar features detected on both images with a proper detector, which yields a transformation matrix enabling transformation of the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The process of image fusion is validated by comparing similar features extracted from both datasets.
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information about objects in the space monitored by the system.
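The generic multiscale-decomposition-based fusion framework studied in this thesis can be illustrated with a minimal sketch: decompose each source image into a Laplacian pyramid, fuse detail levels with a choose-max rule and the base level by averaging, then reconstruct. The reduce/expand operators and fusion rules below are simplified stand-ins, not the thesis's actual implementation.

```python
import numpy as np

def downsample(img):
    # simple 2x2 block average as the pyramid "reduce" step
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbour "expand" to twice the size
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels):
        low = downsample(cur)
        pyr.append(cur - upsample(low))   # high-pass residual at this level
        cur = low
    pyr.append(cur)                       # low-pass base
    return pyr

def fuse(img_a, img_b, levels=2):
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)  # choose-max on details
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)              # average the base level
    out = fused[-1]
    for high in reversed(fused[:-1]):                # reconstruct bottom-up
        out = upsample(out) + high
    return out
```

Fusing an image with itself reconstructs it exactly, which is a quick sanity check that the decomposition/reconstruction pair is lossless.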
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain database, to facilitate visually guided flight and ground operations in low-visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates fusion of images that differ in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps were: 1) build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) during fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error and improve the fusion image quality. This method can also reduce the error correction's dependency on the initial registration of ultrasound and MR images.
Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and MODIS Data
NASA Astrophysics Data System (ADS)
Hazaymeh, K.; Almagbile, A.
2018-04-01
In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
Image fusion based on millimeter-wave for concealed weapon detection
NASA Astrophysics Data System (ADS)
Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui
2010-11-01
This paper describes a novel multi-sensor image fusion technology for concealed weapon detection (CWD). Because clothing is highly transparent at the millimeter-wave band, a millimeter-wave radiometer can be used to image and distinguish contraband concealed beneath clothes, such as guns, knives, and detonators. We therefore adopt passive millimeter-wave (PMMW) imaging technology for airport security. However, owing to the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter-wave image has low optical resolution, which cannot meet the needs of practical application. Therefore, a visible image (VI), which has higher resolution, is fused with the millimeter-wave image to enhance readability. Before the image fusion, a novel image pre-processing step specific to the fusion of millimeter-wave and visible images is adopted. In the fusion itself, multiresolution analysis (MRA) based on the wavelet transform (WT) is adopted. The experimental results show that this method has advantages for concealed weapon detection and has practical significance.
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for the quality assessment of image fusion in remote sensing have become an active research issue. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on one hand, most indexes lack the theoretical support needed to compare different fusion methods; on the other hand, there is no uniform preference among most of the quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify spatial and spectral feature assessment. So in this paper, on the basis of an approximate general model of four traditional fusion methods, namely Intensity-Hue-Saturation (IHS) triangle transform fusion, High-Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features under a uniform preference, but can also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and accords better with subjective estimation.
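The correlation-coefficient index underlying such an assessment can be sketched as a plain Pearson correlation between two image bands; the IHS cylindrical transform that produces the compared components is omitted here, so this is only an illustration of the index itself.

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two image bands (flattened)."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

Values near 1 indicate that the fused band closely tracks the reference band; applying the index to intensity versus chromatic components separates spatial from spectral fidelity.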
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve the visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion using a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
Multifocus image fusion using phase congruency
NASA Astrophysics Data System (ADS)
Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui
2015-05-01
We address the problem of fusing multifocus images based on phase congruency (PC). PC provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature, evaluated by the complex Gabor wavelet. PC is more robust against noise than other FMs. The fused image is obtained by a new fusion rule (FR), whereby the focused region is selected by the FR from one of the input images. Experimental results show that the proposed fusion scheme matches the fusion performance of the state-of-the-art methods in terms of visual quality and quantitative evaluations.
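The select-the-focused-region idea can be sketched as a per-pixel choose-max rule driven by a focus measure. Local variance is used below as a simplified stand-in for the paper's phase-congruency measure, which would require complex Gabor wavelet filtering.

```python
import numpy as np

def local_variance(img, win=3):
    """Local variance over a win x win neighbourhood (edge-padded)."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    h, w = img.shape
    win_sum = np.zeros((h, w))
    win_sq = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            patch = p[dy:dy + h, dx:dx + w]
            win_sum += patch
            win_sq += patch ** 2
    n = win * win
    return win_sq / n - (win_sum / n) ** 2  # E[x^2] - E[x]^2

def fuse_by_focus(a, b):
    """Take each pixel from whichever input is locally sharper."""
    return np.where(local_variance(a) >= local_variance(b), a, b)
```

Any stronger focus measure, such as phase congruency, can be dropped in for `local_variance` without changing the selection logic.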
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
As a new type of solid-state image sensor, the extreme-low-light CMOS has been widely applied in the field of night vision. But if the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS cannot clearly present both the high-light and low-light regions of the scene. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. For the top pyramid layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge feature information of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results obtained with four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies. We use three objective evaluation parameters, information entropy, average gradient, and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm in this paper can rapidly achieve a wide dynamic range while keeping high entropy. The verified features of this algorithm suggest further application prospects for the optimized version. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
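The two layer-wise strategies described above, regional average gradient for the top layer and regional energy for the remaining layers, can be sketched as follows. The window size and the hard choose-max decisions are simplifications of the paper's scheme, not its actual weighting.

```python
import numpy as np

def regional_energy(coeffs, win=3):
    """Sum of squared coefficients over a win x win neighbourhood."""
    pad = win // 2
    p = np.pad(coeffs.astype(float) ** 2, pad, mode='edge')
    h, w = coeffs.shape
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out

def average_gradient(img):
    """Mean magnitude of horizontal/vertical forward differences."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return (np.abs(gx).mean() + np.abs(gy).mean()) / 2.0

def fuse_detail_layers(a, b):
    """Per-pixel choose-max on regional energy (remaining layers)."""
    return np.where(regional_energy(a) >= regional_energy(b), a, b)

def fuse_top_layer(a, b):
    """Pick the whole top layer with the larger average gradient."""
    return a if average_gradient(a) >= average_gradient(b) else b
```

In a full pipeline these rules would be applied level by level to the Laplacian pyramids of the long- and short-exposure frames before reconstruction.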
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-Modified-Laplacian was used and a scheme based on visual feature contrast was adopted; for the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
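The Sum-Modified-Laplacian focus measure used for the BIMF components can be sketched as below, with unit step size and edge padding; the visual-feature-contrast weighting of the paper is not reproduced.

```python
import numpy as np

def modified_laplacian(img):
    """ML(x, y) = |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img.astype(float), 1, mode='edge')
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def sum_modified_laplacian(img, win=3):
    """SML: modified Laplacian summed over a win x win window."""
    ml = modified_laplacian(img)
    pad = win // 2
    p = np.pad(ml, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out
```

A flat region scores zero while sharp detail scores high, so comparing SML maps of the source images indicates which image is in focus at each coefficient.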
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm for medical image fusion is proposed in this paper, combining a gradient minimization smoothing filter (GMSF) with a nonsubsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are then fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
[Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions].
Jung, E M; Clevert, D A
2018-06-01
Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted investigation of a modified tumor treatment. Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as CEUS. With the implementation of ultrasound contrast agents and image fusion, ultrasound has been improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques. In addition, this method can also be used for interventional procedures. The success rate of fusion-guided biopsies or CEUS-guided tumor ablation lies between 80 and 100% in the literature. Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions. Beyond the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work, including intraoperative and vascular applications as well as applications in other organ systems.
Ando, Kei; Imagama, Shiro; Ito, Zenya; Kobayashi, Kazuyoshi; Ukai, Junichi; Muramoto, Akio; Shinjo, Ryuichi; Matsumoto, Tomohiro; Nakashima, Hiroaki; Ishiguro, Naoki
2014-05-01
Retrospective clinical study. To investigate, using multislice CT images, how thoracic ossification of the posterior longitudinal ligament (OPLL) changes with time after thoracic posterior fusion surgery. Few studies have evaluated thoracic OPLL preoperatively and postoperatively using computed tomography (CT). The subjects included 19 patients (7 men and 12 women) with an average age at surgery of 52 years (38-66 y) who underwent indirect posterior decompression with corrective fusion and instrumentation at our institute. The minimum follow-up period was 1 year, averaging 3 years 10 months (12-120 mo). Using CT images, we investigated the fusion range, preoperative and postoperative Cobb angles of the thoracic fusion levels, intraoperative and postoperative blood loss, operative time, hyperintense areas on preoperative MRI of the thoracic spine, and the thickness of the OPLL on reconstructed sagittal multislice CT images taken before the operation and at 3 months, 6 months, and 1 year after surgery. The basic fusion area was 3 vertebrae above and below the OPLL lesion. The mean operative time was 7 hours and 48 minutes (4 h 39 min-10 h 28 min), and blood loss was 1631 mL (160-11,731 mL). Intramedullary signal intensity change on magnetic resonance images was observed at the most severe ossification area in 18 patients. Interestingly, the rostral and caudal ossification regions of the OPLLs, as seen on sagittal CT images, were discontinuous across the disk space in all patients. Postoperatively, the discontinuous segments connected in all patients, without progression of OPLL thickness, by 5.1 months on average. All patients needing surgery had discontinuity across the disk space between the rostral and caudal ossified lesions as seen on CT. This discontinuity was considered to be the main reason for the myelopathy because a high-intensity area on magnetic resonance imaging was seen in 18 of 19 patients at the same level.
Rigid fixation with instrumentation may allow the discontinuous segments to connect in patients without a concomitant thickening of the OPLL.
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate the information from different imaging modalities to get a composite image which is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then, the high-frequency subbands are merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
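The merging rules in the pipeline above can be sketched as simple per-coefficient operations. The CST decomposition is not reproduced, and a plain average stands in for the proposed HWE rule, so this only illustrates where each rule applies.

```python
import numpy as np

def rgb_to_intensity(rgb):
    """I component of the IHS model: mean of the R, G, B channels."""
    return rgb.astype(float).mean(axis=2)

def merge_high(a, b):
    """Absolute-maximum rule for high-frequency subbands."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def merge_low(a, b):
    """Simple average as a stand-in for the paper's HWE rule."""
    return (a + b) / 2.0
```

After merging, the inverse transform and IHS-to-RGB conversion would restore the fused intensity into the GFP image's color space.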
Hierarchical patch-based co-registration of differently stained histopathology slides
NASA Astrophysics Data System (ADS)
Yigitsoy, Mehmet; Schmidt, Günter
2017-03-01
Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block with different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.
Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei
2006-02-01
Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. Therefore, it is important to determine whether the HA of a new influenza virus, which can potentially cause pandemics, is functional against human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. The imaging is designed to detect fusion through the change it causes in the spectrum of the fluorescence-labeled virus. Using this imaging, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the technique allows highly sensitive detection of viral fusion.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with a fusion scheme that does not employ the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images combining in a useful and precise way the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging or MRI) and functional information (Positron Emission Tomography or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities, and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion taking advantage mainly from the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
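A minimal sketch of the HSL idea above, using only Python's standard-library `colorsys` module: anatomical MRI intensity drives Luminosity while functional PET uptake drives Hue. The blue-to-red hue ramp and the saturation value are illustrative assumptions, not the authors' mapping (note that `colorsys` calls the space HLS and orders arguments hue, lightness, saturation).

```python
import colorsys

def fuse_hsl(mri, pet, saturation=0.8):
    """Map each pixel pair to HSL: MRI intensity -> Luminosity,
    PET uptake -> Hue (p=0 maps to blue, p=1 to red).
    Inputs are 2-D lists of floats in [0, 1]; output is RGB triples."""
    fused = []
    for mri_row, pet_row in zip(mri, pet):
        row = []
        for m, p in zip(mri_row, pet_row):
            hue = (1.0 - p) * 2.0 / 3.0   # hue 2/3 = blue, hue 0 = red
            # colorsys uses HLS argument ordering: hue, lightness, saturation
            row.append(colorsys.hls_to_rgb(hue, m, saturation))
        fused.append(row)
    return fused
```

A real pipeline would additionally apply the gradient/contour operators the abstract mentions before the color mapping.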
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
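The weighted-fusion step can be sketched as follows. The saliency estimator here is a deliberately crude local-contrast proxy standing in for the paper's JSR-based global/local saliency maps; all function names are illustrative.

```python
import numpy as np

def local_saliency(img, eps=1e-12):
    """Crude per-pixel saliency: absolute deviation from the 3x3 local
    mean (a stand-in for sparse-coefficient-based saliency)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    mean = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.abs(img - mean) + eps

def saliency_weighted_fusion(ir, vis):
    """Weighted fusion: each output pixel is a convex combination of the
    sources, weighted by their saliency maps."""
    s_ir, s_vis = local_saliency(ir), local_saliency(vis)
    w = s_ir / (s_ir + s_vis)
    return w * ir + (1.0 - w) * vis
```

Because the weights form a convex combination at every pixel, the fused value always lies between the two source values.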
Segment fusion of ToF-SIMS images.
Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A
2016-06-08
The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
Aoki, Yasuko; Endo, Hidenori; Niizuma, Kuniyasu; Inoue, Takashi; Shimizu, Hiroaki; Tominaga, Teiji
2013-12-01
We report two cases with internal carotid artery (ICA) aneurysms in which fusion imaging effectively indicated the anatomical variations of the anterior choroidal artery (AchoA). Fusion images were obtained using fusion application software (Integrated Registration, Advantage Workstation VS4, GE Healthcare). When an artery passed through the choroidal fissure, it was diagnosed as the AchoA. Case 1 had an aneurysm at the left ICA. Left internal carotid angiography (ICAG) showed that an artery arising from the aneurysmal neck supplied the medial occipital lobe. The fusion image showed that this artery had a branch passing through the choroidal fissure, which was diagnosed as a hyperplastic AchoA. Case 2 had an aneurysm at the supraclinoid segment of the right ICA. Neither the AchoA nor the posterior communicating artery (PcomA) was detected by right ICAG. The fusion image obtained from 3D vertebral angiography (VAG) and MRI showed that the right AchoA arose from the right PcomA. The fusion image obtained from the right ICAG and the left VAG suggested that the aneurysm was located on the ICA where the PcomA regressed. Fusion imaging is an effective tool for assessing anatomical variations of the AchoA. The present method is simple and quick for obtaining a fusion image that can be used in a real-time clinical setting.
Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary
2011-08-01
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of acquired images, which usually contain missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were used to fully exploit the image quality and anatomical information present in the image, thus purposely not including any high-level constraints such as prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer-based diagnosis of Alzheimer's disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines complementary information while discarding redundant information, yielding a single image that contains both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high-frequency coefficients. The superiority of the fusion response is shown by comparisons with other state-of-the-art fusion approaches in terms of various fusion metrics.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design, techniques such as image fusion must be applied to increase them further. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times lower than that of its corresponding multispectral image. Regardless of the fusion method used, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome.
The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
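A regression-based fusion baseline of the kind used for comparison above can be sketched very compactly: fit a linear model between the co-registered low-resolution pair, then apply it to the high-resolution band. The function name and the simple single-band linear model are assumptions for illustration, not the dissertation's algorithm.

```python
import numpy as np

def regression_fuse(hyp_lr, ms_lr, ms_hr):
    """Regression-based sharpening sketch: fit hyp ~ a*ms + b on the
    co-registered low-resolution pair, then apply the model to the
    high-resolution multispectral band to synthesize a sharpened band."""
    a, b = np.polyfit(ms_lr.ravel(), hyp_lr.ravel(), 1)
    return a * ms_hr + b
```

When the low-resolution pair is exactly linearly related, the model recovers the relation and transfers it to the high-resolution band unchanged.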
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Visible light and infrared image fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum-modified Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition of each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
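The HPF injection idea can be sketched as follows: the high-frequency residual of the pan image (pan minus its local mean) is added to each upsampled MS band. The box-filter radius and unit gain are illustrative choices, not the paper's parameters, and the PCA half of the authors' integrated method is omitted.

```python
import numpy as np

def box_filter(img, radius):
    """Mean filter over a (2*radius+1)^2 window with edge padding."""
    k = 2 * radius + 1
    p = np.pad(img, radius, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(k) for j in range(k)) / (k * k)

def hpf_pansharpen(ms_bands, pan, radius=2, gain=1.0):
    """High-pass-filtering fusion: inject the high-frequency residual of
    the panchromatic image into each (already upsampled) MS band."""
    detail = pan - box_filter(pan, radius)
    return [band + gain * detail for band in ms_bands]
```

Because only the zero-mean detail layer is injected, the spectral character of each MS band is largely preserved, which is the property the abstract attributes to the HPF family.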
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2, and Landsat-8 images, and the results show the excellent performance of the proposed method.
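For reference, the guided filter at the heart of such approaches can be written in a dozen lines of NumPy. This follows the standard He et al. formulation; it is a sketch of the filter itself, not the authors' full fusion pipeline.

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window with edge padding."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(k) for j in range(k)) / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Guided filter: the output is locally a linear transform of the
    guide, smoothing src while preserving the guide's edges."""
    mean_I, mean_p = box(guide, r), box(src, r)
    corr_Ip, corr_II = box(guide * src, r), box(guide * guide, r)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)
```

`guided_filter(pan, ms_band)` transfers the guide's edge structure into the filtered output; `eps` controls how strongly edges are preserved versus smoothed.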
Brophy, Carl M; Hoh, Daniel J
2018-06-01
Cervical disc arthroplasty (CDA) has received widespread attention as an alternative to anterior fusion due to its similar neurological and functional improvement, with the advantage of preservation of segmental motion. As CDA becomes more widely implemented, the potential for unexpected device-related adverse events may be identified. The authors report on a 48-year-old man who presented with progressive neurological deficits 3 years after 2-level CDA was performed. Imaging demonstrated periprosthetic osteolysis of the vertebral endplates at the CDA levels, with a heterogeneously enhancing ventral epidural mass compressing the spinal cord. Diagnostic workup for infectious and neoplastic processes was negative. The presumptive diagnosis was an inflammatory pannus formation secondary to abnormal motion at the CDA levels. Posterior cervical decompression and instrumented fusion was performed without removal of the arthroplasty devices or the ventral epidural mass. Postoperative imaging at 2 months demonstrated complete resolution of the compressive pannus, with associated improvement in clinical symptoms. Follow-up MRI at > 6 months showed no recurrence of the pannus. At 1 year postoperatively, CT scanning revealed improvement in periprosthetic osteolysis. Inflammatory pannus formation may be an unexpected complication of abnormal segmental motion after CDA. This rare etiology of an epidural mass associated with an arthroplasty device should be considered, in addition to workup for other potential infectious or neoplastic mass lesions. In symptomatic individuals, compressive pannus lesions can be effectively treated with fusion across the involved segment without removal of the device.
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fused images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden-target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial components of infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones using six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves more effective results with much less time consumption and performs well in both subjective evaluation and objective indicators.
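A pixel-information weighting of this general kind can be illustrated with local variance as a stand-in "information" estimator; the paper's actual estimator and its NSCT embedding are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def pixel_information(img, r=1):
    """Proxy for per-pixel information content: local variance in a
    (2r+1)^2 window (the paper's estimator may differ)."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    s = sum(p[i:i + h, j:j + w]
            for i in range(k) for j in range(k)) / (k * k)
    s2 = sum(p[i:i + h, j:j + w] ** 2
             for i in range(k) for j in range(k)) / (k * k)
    return s2 - s * s   # E[x^2] - E[x]^2

def info_weighted_fusion(ir, vis, eps=1e-12):
    """Fuse by normalized per-pixel information weights."""
    wa = pixel_information(ir) + eps
    wb = pixel_information(vis) + eps
    return (wa * ir + wb * vis) / (wa + wb)
```

Pixels lying in flat regions of one source contribute little weight there, so detail-rich regions of the other source dominate the fused result.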
Buschow, Christian; Charo, Jehad; Anders, Kathleen; Loddenkemper, Christoph; Jukica, Ana; Alsamah, Wisam; Perez, Cynthia; Willimsky, Gerald; Blankenstein, Thomas
2010-03-15
Visualizing oncogene/tumor Ag expression by noninvasive imaging is of great interest for understanding processes of tumor development and therapy. We established transgenic (Tg) mice conditionally expressing a fusion protein of the SV40 large T Ag and luciferase (TagLuc) that allows monitoring of oncogene/tumor Ag expression by bioluminescent imaging upon Cre recombinase-mediated activation. Independent of Cre-mediated recombination, the TagLuc gene was expressed at low levels in different tissues, probably due to the leakiness of the stop cassette. The level of spontaneous TagLuc expression, detected by bioluminescent imaging, varied between the different Tg lines, depended on the nature of the Tg expression cassette, and correlated with Tag-specific CTL tolerance. Following liver-specific Cre-loxP site-mediated excision of the stop cassette that separated the promoter from the TagLuc fusion gene, hepatocellular carcinoma development was visualized. The ubiquitous low level TagLuc expression caused the failure of transferred effector T cells to reject Tag-expressing tumors rather than causing graft-versus-host disease. This model may be useful to study different levels of tolerance, monitor tumor development at an early stage, and rapidly visualize the efficacy of therapeutic intervention versus potential side effects of low-level Ag expression in normal tissues.
2007-02-23
[Fragment of a DTIC report form; only the subject terms and scattered phrases are recoverable: EOARD, steganography, image fusion, data mining. The report describes an approach for signal-level watermark inheritance, with a program interface and protocol for controlling watermark embedding and retrieval via a watermarking DLL and a Delphi-based watermarking editor in which the user marks objects with class and object-instance information.]
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to remove insensitive frequency components under given viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the regions of interest (containing image details and edges) of the fusion image to evaluate perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that are more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images. The proposed metric can thus effectively evaluate the perceptual sharpness of color fusion images.
Tools and Methods for the Registration and Fusion of Remotely Sensed Data
NASA Technical Reports Server (NTRS)
Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline
2010-01-01
Tools and methods for image registration were reviewed, including methods for the registration of remotely sensed data at NASA. Image fusion techniques and the challenges in registering remotely sensed data were discussed, and examples of image registration and image fusion were given.
Fusion Imaging for Procedural Guidance.
Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J
2018-05-01
The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally-invasive procedures. Fusion imaging is an exciting new technology that combines the strength of 2 imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely-used fusion algorithms: Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion, to resolution-enhance a series of single-date QuickBird-2 and Worldview-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
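Spectral fidelity in assessments like the one above is often quantified with a per-band correlation coefficient between the fused image and the reference multispectral band. A minimal NumPy version of this one plausible indicator (the study used several) is:

```python
import numpy as np

def correlation_coefficient(x, y):
    """Spectral-fidelity indicator: Pearson correlation between a fused
    band and the reference multispectral band (1.0 = perfect fidelity)."""
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))
```

Values near 1 indicate that the fusion preserved the band's spectral character; values well below 1 flag spectral distortion of the kind reported for the PCA algorithm.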
Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling
2013-01-01
A hybrid multiscale and multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization Contourlet transform (SFL-CT), the algorithm uses different fusion strategies for the different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively balance the color background against the gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Centre show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716
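The IHS component of such a scheme can be sketched with the standard-library `colorsys` module: keep the hue and saturation of the color (GFP) pixel and replace its intensity with the phase-contrast value. This illustrates plain IHS-style substitution only; the SFL-CT subband processing and NCM weighting of the paper are omitted, and the function names are illustrative.

```python
import colorsys

def ihs_fuse_pixel(rgb, gray):
    """IHS-style fusion for one pixel: keep the hue/saturation of the
    color (GFP) pixel, replace its intensity with the gray value."""
    h, l, s = colorsys.rgb_to_hls(*rgb)   # colorsys orders HLS, not HSL
    return colorsys.hls_to_rgb(h, gray, s)

def ihs_fuse(color_img, gray_img):
    """Apply the per-pixel substitution over 2-D lists of pixels."""
    return [[ihs_fuse_pixel(c, g) for c, g in zip(crow, grow)]
            for crow, grow in zip(color_img, gray_img)]
```

When the substituted intensity equals the pixel's own lightness, the round trip returns the original color, which is a handy sanity check.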
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means of producing images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions, and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters, and desired information, and process this input to come up with a workflow that quickly obtains the best results. It will optimize data and image fusion techniques, and it provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied with comparison of several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.
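A velocity-position search of the kind the framework builds on can be sketched as a tiny particle swarm optimizing a single fusion weight. The toy variance objective and all parameter values are illustrative assumptions, and the tissue membrane system's multicellular communication structure is not modeled here.

```python
import random

def fitness(alpha, a, b):
    """Toy objective: variance of the fused signal alpha*a + (1-alpha)*b
    (a stand-in for a real fusion-quality measure)."""
    fused = [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]
    m = sum(fused) / len(fused)
    return sum((v - m) ** 2 for v in fused) / len(fused)

def pso_weight(a, b, particles=10, iters=50, seed=0):
    """Minimal velocity-position search for a fusion weight in [0, 1]."""
    rng = random.Random(seed)
    pos = [rng.random() for _ in range(particles)]
    vel = [0.0] * particles
    pbest = pos[:]
    gbest = max(pos, key=lambda p: fitness(p, a, b))
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.4 * r1 * (pbest[i] - pos[i])
                      + 1.4 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            if fitness(pos[i], a, b) > fitness(pbest[i], a, b):
                pbest[i] = pos[i]
        gbest = max(pbest, key=lambda p: fitness(p, a, b))
    return gbest
```

The paper's improved velocity-position model distributes such particles across the cells of a tissue membrane system and exchanges the best solutions through its communication channels; this sketch shows only the single-population core.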
Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang
2015-08-01
Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for the segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results than four other label fusion methods.
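The Bayesian combination step can be sketched directly. Here a single isotropic Gaussian stands in for the per-label Gaussian Mixture Model, and the Random Forest prior is taken as given; all names are illustrative.

```python
import numpy as np

def gaussian_likelihood(patch, mean, var):
    """Likelihood of a flattened patch under an isotropic Gaussian
    (a one-component stand-in for the paper's per-label GMM)."""
    d = patch - mean
    return float(np.exp(-0.5 * np.sum(d * d) / var)
                 / np.sqrt(2 * np.pi * var) ** patch.size)

def fuse_label(prior, likelihood):
    """Bayes rule: posterior over labels is proportional to the
    classifier prior times the patch likelihood; return the MAP label."""
    post = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
    post = post / post.sum()
    return int(np.argmax(post)), post
```

Note how a strong likelihood can overturn the classifier prior: a voxel the discriminative model slightly prefers for label 0 is still assigned label 1 when the patch fits label 1's generative model much better.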
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are first transformed by NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas Log-Gabor energy, which efficiently identifies frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with discrete wavelet transform (DWT), fast discrete curvelet transform (FDCT), and dual-tree complex wavelet transform (DTCWT) based image fusion methods as well as other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been verified with a clinical example of a woman affected by a recurrent tumor. PMID:25214889
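A minimal two-scale analogue of this decompose-fuse-reconstruct pattern can be sketched as follows. The box filter stands in for the real NSCT decomposition, averaging stands in for the phase-congruency rule on the base layer, and the absolute-maximum choice stands in for the Log-Gabor energy rule on the detail layer; none of this reproduces the paper's actual transform.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box filter: a toy stand-in for the low-frequency band of a
    real multiscale transform such as NSCT."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def two_scale_fuse(a, b):
    """Split each source into base + detail, fuse bases by averaging and
    details by the absolute-maximum rule, then recombine."""
    la, lb = box_blur(a), box_blur(b)
    ha, hb = a - la, b - lb                            # detail layers
    low = 0.5 * (la + lb)                              # base-layer rule
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # abs-max energy rule
    return low + high
```

Fusing an image with itself reconstructs it exactly, a quick sanity check that the decomposition and recombination are consistent.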
[A study on medical image fusion].
Zhang, Er-hu; Bian, Zheng-zhong
2002-09-01
Five algorithms for medical image fusion are analyzed, together with their advantages and disadvantages. Four kinds of quantitative evaluation criteria for the quality of image fusion algorithms are proposed, which will provide some guidance for future research.
Neural network fusion: a novel CT-MR aortic aneurysm image segmentation method
NASA Astrophysics Data System (ADS)
Wang, Duo; Zhang, Rui; Zhu, Jin; Teng, Zhongzhao; Huang, Yuan; Spiga, Filippo; Du, Michael Hong-Fei; Gillard, Jonathan H.; Lu, Qingsheng; Liò, Pietro
2018-03-01
Medical imaging examination of patients usually involves more than one imaging modality, such as Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) imaging. Multimodal imaging allows examiners to benefit from the advantages of each modality. For example, for Abdominal Aortic Aneurysm, CT imaging shows calcium deposits in the aorta clearly, while MR imaging distinguishes thrombus and soft tissues better. Analysing and segmenting both CT and MR images and combining the results will greatly help radiologists and doctors to treat the disease. In this work, we present methods for using deep neural network models to perform such multi-modal medical image segmentation. As CT and MR images of the abdominal area cannot be well registered due to non-affine deformations, a naive approach is to train the CT and MR segmentation networks separately. However, such an approach is time-consuming and resource-inefficient. We propose a new approach that fuses the high-level parts of the CT and MR networks together, hypothesizing that neurons recognizing the high-level concepts of Aortic Aneurysm can be shared across modalities. Such a network can be trained end-to-end with non-registered CT and MR images in a shorter training time. Moreover, network fusion allows a shared representation of the aorta in both CT and MR images to be learned. Through experiments we discovered that for parts of the aorta showing similar aneurysm conditions, their representations in the neural network lie at shorter distances. Such feature-level distances are helpful for registering CT and MR images.
Image Fusion of CT and MR with Sparse Representation in NSST Domain
Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren
2017-01-01
Multimodal image fusion techniques can integrate the information from different medical images into a single informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. First, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach, and a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. PMID:29250134
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed into original-size blocks using the inverse FTR. Finally, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
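A simplified version of the block-wise scheme looks like the sketch below, with local variance as a crude stand-in for the directional-contrast FTR rule; the block size, images, and contrast measure are illustrative assumptions, not the paper's actual rule.

```python
import numpy as np

def block_fuse(a, b, bs=2):
    """Fuse two equal-size images block by block, keeping whichever block
    has the higher local contrast (here measured as variance)."""
    out = np.empty_like(a, dtype=float)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa = a[i:i + bs, j:j + bs]
            pb = b[i:i + bs, j:j + bs]
            out[i:i + bs, j:j + bs] = pa if pa.var() >= pb.var() else pb
    return out

textured = np.array([[0.0, 9.0], [9.0, 0.0]])  # high-contrast block
flat = np.full((2, 2), 4.0)                    # low-contrast block
fused = block_fuse(textured, flat)             # keeps the textured block
```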
Minimally invasive versus open sacroiliac joint fusion: are they similarly safe and effective?
Ledonio, Charles G T; Polly, David W; Swiontkowski, Marc F
2014-06-01
The sacroiliac joint has been implicated as a source of chronic low back pain in 15% to 30% of patients. When nonsurgical approaches fail, sacroiliac joint fusion may be recommended. Advances in intraoperative image guidance have assisted minimally invasive surgical (MIS) techniques using ingrowth-coated fusion rods; however, how these techniques perform relative to open anterior fusion of the sacroiliac joint using plates and screws is not known. We compared estimated blood loss (EBL), surgical time, length of hospital stay (LOS), and Oswestry Disability Index (ODI) between patients undergoing MIS and open sacroiliac joint fusion. We retrospectively studied 63 patients (open: 36; MIS: 27) who underwent sacroiliac joint fusion with a minimum 1-year followup at our institution from 2006 to 2011. Of those, 10 in the open group had incomplete records. All patients had sacroiliac joint dysfunction confirmed by image-guided intraarticular anesthetic sacroiliac joint injection and had failed nonoperative treatment. Patients were matched via propensity score, adjusting for age, sex, BMI, history of spine fusion, and preoperative ODI scores, leaving 22 in each group. Nine patients were not matched. We reviewed patient medical records to obtain EBL, length of surgery, LOS, and pre- and postoperative ODI scores. Mean followup was 13 months (range, 11-33 months) in the open group and 15 months (range, 12-26 months) in the MIS group. Patients in the open group had a higher mean EBL (681 mL versus 41 mL, p < 0.001). Mean surgical time and LOS were shorter in the MIS group than in the open group (68 minutes versus 128 minutes and 2 days versus 3.3 days, p < 0.001 for both). With the numbers available, mean postoperative ODI scores were not different between groups (47% versus 54%, p = 0.272). EBL, surgery time, and LOS favored the MIS sacroiliac fusion group.
With the numbers available, ODI scores were similar between groups, though the study was relatively small and may have been underpowered for this endpoint. Because the implants used for these procedures make assessment of fusion challenging with available imaging techniques, we do not know how many patients' sacroiliac joints successfully fused, so longer followup and critical evaluation of outcome scores over time are called for. Level III, therapeutic study. See Instructions for Authors for a complete description of levels of evidence.
Julian Li, Jia Xi; Mobbs, Ralph J; Phan, Kevin
2018-03-01
Anterior lumbar interbody fusion and lateral lumbar interbody fusion are associated with approach-related disadvantages. Oblique lumbar interbody fusion (OLIF) is the proposed solution, especially for upper lumbar levels. We analyzed the size and regional anatomy of the corridor used in the OLIF technique between levels L1 and L5. This is a morphometric study of 200 randomly selected magnetic resonance imaging (MRI) studies with features of lumbar degenerative disease. On MRI, the oblique corridor was defined as the smallest distance between the psoas major muscle and the aorta or inferior vena cava (or common iliac artery), measured at the L1/L2, L2/L3, L3/L4, and L4/L5 disc levels on both the left and right on axial images at the mid-disc level. Mean distances of the oblique corridor on the left side were L1/L2 = 18.90 mm, L2/L3 = 15.50 mm, L3/L4 = 12.75 mm, and L4/L5 = 8.92 mm; on the right side, they were L1/L2 = 14.80 mm, L2/L3 = 5.50 mm, L3/L4 = 3.00 mm, and L4/L5 = 1.46 mm. For both sides, the corridor size was not significantly affected by sex; it increased with age and decreased at the inferior lumbar disc levels. The L1/L2 and L2/L3 levels may be obstructed by the ipsilateral kidney and renal vasculature on both sides and by the liver on the right side. A left-sided OLIF approach is viable for both sexes. Oblique access to the L1/L2 and L2/L3 disc levels is feasible regardless of age, whereas the L3/L4 and L4/L5 levels may be more suitable in older patients, especially male patients. The right-sided approach is less likely to be performed effectively.
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on binaural fusion of the outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from Patterson's auditory image model is used for azimuth-selective phoneme recognition. The method is tested at various levels of noise, and the results are reported in contrast to binaural fusion methods based on various forms of correlation, to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes the smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure the similarity of the left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.
Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto
2016-04-01
MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label fusion. We have compared the Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR.
A new evaluation method research for fusion quality of infrared and visible images
NASA Astrophysics Data System (ADS)
Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda
2017-03-01
In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and the edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments are conducted on this index using infrared and visible image fusion results under different algorithms and environments. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, indicating that the method is a practical and effective fused-image quality evaluation method.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that the fusion rules of available fusion methods cannot be self-adaptively adjusted according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.•This text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Nagamachi, Shigeki; Nishii, Ryuichi; Wakamatsu, Hideyuki; Mizutani, Youichi; Kiyohara, Shogo; Fujita, Seigo; Futami, Shigemi; Sakae, Tatefumi; Furukoji, Eiji; Tamura, Shozo; Arita, Hideo; Chijiiwa, Kazuo; Kawai, Keiichi
2013-07-01
This study aimed at demonstrating the feasibility of retrospectively fused (18)F-FDG-PET and MRI (PET/MRI fusion images) in diagnosing pancreatic tumors, in particular differentiating malignant tumors from benign lesions. In addition, we evaluated additional findings characterizing pancreatic lesions on FDG-PET/MRI fusion images. We retrospectively analyzed 119 patients: 96 cancers and 23 benign lesions. FDG-PET/MRI fusion images (PET/T1WI or PET/T2WI) were made by dedicated software using 1.5-Tesla (T) MRI images and FDG-PET images. These images were interpreted by two well-trained radiologists without knowledge of clinical information and compared with FDG-PET/CT images. We compared the differential diagnostic capability of PET/CT and FDG-PET/MRI fusion images. In addition, we evaluated additional findings such as tumor structure and tumor invasion. The FDG-PET/MRI fusion image significantly improved accuracy compared with PET/CT (96.6 vs. 86.6%). As an additional finding, dilatation of the main pancreatic duct was noted in 65.9% of solid types and in 22.6% of cystic types on PET/MRI-T2 fusion images. Similarly, encasement of adjacent vessels was noted in 43.1% of solid types and in 6.5% of cystic types. Particularly in cystic types, intra-tumor structures such as mural nodules (35.4%) or intra-cystic septa (74.2%) were detected additionally. Besides, PET/MRI-T2 fusion images could detect additional benign cystic lesions (9.1% in solid types and 9.7% in cystic types) that were not noted by PET/CT. In diagnosing pancreatic lesions, the FDG-PET/MRI fusion image was useful in differentiating pancreatic cancer from benign lesions. Furthermore, it was helpful in evaluating the relationship between lesions and surrounding tissues as well as in detecting extra benign cysts.
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
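The building block of such a metric, mutual information (MI) between a source image and the fused image, can be estimated from a joint histogram as sketched below; the bin count and test images are illustrative, and the per-group weighting by Weber components is not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate between two equal-size images: the
    quantity a metric like the one above computes per pixel group and then
    weights by group size."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of x
    py = p.sum(axis=0, keepdims=True)   # marginal of y
    nz = p > 0                          # avoid log(0) on empty cells
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

img = np.arange(64.0).reshape(8, 8)
self_mi = mutual_information(img, img)                  # identical images: high MI
flat_mi = mutual_information(img, np.zeros_like(img))   # constant image: zero MI
```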
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff
2014-08-15
Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of our study which compared the fusion of EIT with CT using three different image fusion algorithms, namely: weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing the pixels with the pixels of the EIT image. The three algorithms were applied to a CT and EIT image of an anthropomorphic phantom, constructed out of five acrylic contrast targets of varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using Detectability and the Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI indexing yielded more consistent and optimal fusion performance than weighted averaging.
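Of the three rules compared, two are simple to state directly: weighted averaging blends the modalities globally, and ROI indexing keeps CT everywhere except inside the segmented region, where EIT pixels are substituted. A minimal sketch, with a hypothetical boolean mask standing in for the CT segmentation:

```python
import numpy as np

def weighted_average_fuse(ct, eit, w=0.5):
    """Global weighted averaging of the two modalities."""
    return w * ct + (1.0 - w) * eit

def roi_index_fuse(ct, eit, mask):
    """ROI indexing: keep CT pixels everywhere, replace pixels inside the
    segmented region of interest with the EIT values."""
    fused = ct.copy()
    fused[mask] = eit[mask]
    return fused

ct = np.zeros((4, 4))               # stand-in CT image
eit = np.full((4, 4), 7.0)          # stand-in EIT image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True               # hypothetical segmented ROI
fused = roi_index_fuse(ct, eit, mask)
```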
Steel, Morrison M; Papsin, Blake C; Gordon, Karen A
2015-01-01
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and to promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and whether this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal-hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of the auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals is required to improve children's binaural hearing.
Radar image and data fusion for natural hazards characterisation
Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong
2010-01-01
Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.
Rouabah, K; Varoquaux, A; Caporossi, J M; Louis, G; Jacquier, A; Bartoli, J M; Moulin, G; Vidal, V
2016-11-01
The purpose of this study was to assess the feasibility and utility of image fusion (Easy-TIPS) obtained from pre-procedure CT angiography and per-procedure real-time fluoroscopy for portal vein puncture during transjugular intrahepatic portosystemic shunt (TIPS) placement. Eighteen patients (15 men, 3 women) with a mean age of 63 years (range: 48-81 years; median age: 65 years) were included in the study. All patients underwent TIPS placement by two groups of radiologists (one group with <3 years of experience and one with ≥3 years) using fusion imaging obtained from three-dimensional computed tomography angiography of the portal vein and real-time fluoroscopic images of the portal vein. Image fusion was used to guide the portal vein puncture during TIPS placement. At the end of the procedure, the interventional radiologists evaluated the utility of fusion imaging for portal vein puncture during TIPS placement. Mismatch between the three-dimensional computed tomography angiography and real-time fluoroscopic images of the portal vein on image fusion was quantitatively analyzed. Posttreatment CT time, number of puncture attempts, total radiation exposure and radiation from the retrograde portography were also recorded. Image fusion was considered useful for portal vein puncture in 13/18 TIPS procedures (72%). The mean posttreatment time to obtain fusion images was 16.4 minutes. The 3D volume-rendered CT angiography images were strictly superimposed on direct portography in 10/18 procedures (56%). The mean mismatch was 0.69 cm in height and 0.28 cm laterally. A mean of 4.6 portal vein puncture attempts was made; eight patients required fewer than three attempts. The mean radiation dose from retrograde portography was 421.2 dGy·cm², corresponding to a mean additional exposure of 19%. Fusion imaging from pre-procedural CT angiography is feasible and safe and makes portal puncture easier during TIPS placement.
Image Fusion During Vascular and Nonvascular Image-Guided Procedures
Abi-Jaoudeh, Nadine; Kobeiter, Hicham; Xu, Sheng; Wood, Bradford J.
2013-01-01
Image fusion may be useful in any procedure where previous imaging such as positron emission tomography (PET), magnetic resonance imaging (MRI), or contrast-enhanced computed tomography (CT) provides information that is referenced to the procedural imaging, to the needle or catheter, or to an ultrasound transducer. Fusion of prior and intraoperative imaging provides real-time feedback on tumor location or margin, metabolic activity, device location, or vessel location. Multimodality image fusion in interventional radiology was initially introduced for biopsies and ablations, especially for lesions seen only on arterial-phase CT, MRI, or PET/CT, but has more recently been applied to other vascular and nonvascular procedures. Two different types of platforms are commonly used for image fusion and navigation: (1) electromagnetic tracking and (2) cone-beam CT. Both technologies are reviewed, as well as their strengths and weaknesses, indications, when to use one vs the other, tips and guidance to streamline use, and early evidence defining the clinical benefits of these rapidly evolving, commercially available and emerging techniques. PMID:23993079
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we specifically address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured of the same scene. Through image fusion, a new image that is high-resolution or more perceptible to humans and machines is created from a time series of low-quality images, based on image registration between video frames.
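As a degenerate single-resolution illustration of sequence fusion, already-registered frames can be combined per pixel with a robust statistic; the median below is only a stand-in for the wavelet-domain combination the paper describes, and the frames are made-up examples that assume registration has already been done.

```python
import numpy as np

def fuse_frames(frames):
    """Per-pixel median across registered frames: suppresses transient
    corruption (e.g. one badly degraded frame) that a plain mean would
    let leak into the result."""
    return np.median(np.stack(frames), axis=0)

clean = np.ones((3, 3))
frames = [clean, clean, clean + 50.0]   # one corrupted frame in the sequence
restored = fuse_frames(frames)          # median rejects the outlier frame
```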
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-01-01
Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics; a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using Adaboost.
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-07-19
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. 
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE.
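The Adaboost-based decision fusion described above can be sketched in miniature. This is an illustrative sketch only, not the paper's pipeline: the two features (a SAR confidence and an IR confidence per candidate target) and the training data are invented, and classic AdaBoost over one-feature threshold stumps stands in for the full modBMVT/RANSARC feature-selection scheme:

```python
import math

def stump_predict(x, feat, thresh, sign):
    """+1/-1 vote of a one-feature threshold stump."""
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=10):
    """X: feature vectors (e.g. per-candidate SAR and IR scores); y: +1/-1 labels."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, sign)
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict(x, feat, thresh, sign) != yi)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        model.append((alpha, feat, thresh, sign))
        # Re-weight samples: misclassified ones gain weight.
        w = [wi * math.exp(-alpha * y[i] * stump_predict(X[i], feat, thresh, sign))
             for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def fused_detect(model, x):
    """Final decision: sign of the weighted stump votes."""
    score = sum(a * stump_predict(x, f, t, s) for a, f, t, s in model)
    return 1 if score > 0 else -1
```

Boosting over single-feature stumps performs implicit feature selection: each round's stump commits to the single most discriminative feature under the current sample weights, which is the property the paper exploits for sensor fusion.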
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problems. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is well suited to fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
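The likelihood-map construction lends itself to a compact sketch. Assuming conditionally independent sensors reporting likelihood ratios on a common spatial grid (a simplification; the paper's gossip-based distributed computation is omitted), per-sensor maps fuse by adding log-likelihoods:

```python
import math

def fuse_likelihood_maps(maps):
    """maps: list of same-shaped 2-D grids of per-cell likelihood ratios
    for 'target present'; returns the fused log-likelihood grid."""
    rows, cols = len(maps[0]), len(maps[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for m in maps:
        for r in range(rows):
            for c in range(cols):
                fused[r][c] += math.log(m[r][c])
    return fused

def peak_cell(fused):
    """Most likely target cell: argmax of the fused log-likelihood."""
    return max(((r, c) for r in range(len(fused))
                for c in range(len(fused[0]))),
               key=lambda rc: fused[rc[0]][rc[1]])
```

Working in the log domain turns the product of independent likelihood ratios into a sum, which is also what makes incremental, in-network accumulation straightforward.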
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. However, block-based multi-focus image fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes the characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm, with LUE-SSIM as the objective function, is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. Besides, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than other block-based methods, especially in reducing the blocking artifacts of the fused image, and that it effectively preserves the undistorted-edge details in the focused regions of the source images.
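The block-selection core of such methods can be sketched as follows. This is a deliberate simplification of the paper's approach: local variance stands in as the focus measure in place of the LUE-SSIM-driven evaluation, and the block size is fixed rather than PSO-optimized (image dimensions are assumed divisible by the block size):

```python
def block_variance(img, r0, c0, b):
    """Variance of the b-by-b block with top-left corner (r0, c0)."""
    vals = [img[r][c] for r in range(r0, r0 + b) for c in range(c0, c0 + b)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def fuse_blocks(img_a, img_b, b):
    """Per b-by-b block, copy whichever source is sharper (higher variance)."""
    rows, cols = len(img_a), len(img_a[0])
    out = [[0] * cols for _ in range(rows)]
    for r0 in range(0, rows, b):
        for c0 in range(0, cols, b):
            va = block_variance(img_a, r0, c0, b)
            vb = block_variance(img_b, r0, c0, b)
            src = img_a if va >= vb else img_b
            for r in range(r0, r0 + b):
                for c in range(c0, c0 + b):
                    out[r][c] = src[r][c]
    return out
```

The hard per-block switch is exactly what produces blocking artifacts at block boundaries, which motivates the paper's adaptive block sizing.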
Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao
2014-12-01
Construction of a three-dimensional (3D) model of a renal tumor facilitated surgical planning and the imaging guidance of manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed 3D model images onto 2D laparoscopic images. The image fusion was verified by intraoperative ultrasound. Imaging-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks. The surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80 %) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). The surgical planning and two-step manual image fusion based on the 3D model of the renal tumor facilitated visible-imaging-guided tumor resection with negative margins in LPN for intrarenal tumors. It is promising and moves us one step closer to imaging-guided surgery.
CTA with fluoroscopy image fusion guidance in endovascular complex aortic aneurysm repair.
Sailer, A M; de Haan, M W; Peppelenbosch, A G; Jacobs, M J; Wildberger, J E; Schurink, G W H
2014-04-01
To evaluate the effect of intraoperative guidance by means of live fluoroscopy image fusion with computed tomography angiography (CTA) on iodinated contrast material volume, procedure time, and fluoroscopy time in endovascular thoraco-abdominal aortic repair. CTA with fluoroscopy image fusion road-mapping was prospectively evaluated in patients with complex aortic aneurysms who underwent fenestrated and/or branched endovascular repair (FEVAR/BEVAR). Total iodinated contrast material volume, overall procedure time, and fluoroscopy time were compared between the fusion group (n = 31) and case controls (n = 31). Reasons for potential fusion image inaccuracy were analyzed. Fusion imaging was feasible in all patients. Fusion image road-mapping was used for navigation and positioning of the devices and catheter guidance during access to target vessels. Iodinated contrast material volume and procedure time were significantly lower in the fusion group than in case controls (159 mL [95% CI 132-186 mL] vs. 199 mL [95% CI 170-229 mL], p = .037 and 5.2 hours [95% CI 4.5-5.9 hours] vs. 6.3 hours (95% CI 5.4-7.2 hours), p = .022). No significant differences in fluoroscopy time were observed (p = .38). Respiration-related vessel displacement, vessel elongation, and displacement by stiff devices as well as patient movement were identified as reasons for fusion image inaccuracy. Image fusion guidance provides added value in complex endovascular interventions. The technology significantly reduces iodinated contrast material dose and procedure time. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are widely used for image fusion. This work presents a new image fusion framework that utilizes area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
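The two fusion rules can be sketched independently of the transform itself (the dual-tree Contourlet decomposition and reconstruction are assumed to happen elsewhere). Here `local_std` is a hypothetical stand-in for the paper's area-based standard deviation, computed over a small window:

```python
def local_std(band, r, c, w=1):
    """Standard deviation over the (2w+1)-square window centred at (r, c)."""
    vals = [band[i][j]
            for i in range(max(0, r - w), min(len(band), r + w + 1))
            for j in range(max(0, c - w), min(len(band[0]), c + w + 1))]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

def fuse_lowpass(a, b):
    """Low-pass rule: weighted average, weights proportional to local std."""
    out = []
    for r in range(len(a)):
        row = []
        for c in range(len(a[0])):
            sa, sb = local_std(a, r, c), local_std(b, r, c)
            wa = sa / (sa + sb) if sa + sb else 0.5
            row.append(wa * a[r][c] + (1 - wa) * b[r][c])
        out.append(row)
    return out

def fuse_highpass(a, b):
    """High-pass rule: keep the coefficient with the larger absolute value."""
    return [[a[r][c] if abs(a[r][c]) >= abs(b[r][c]) else b[r][c]
             for c in range(len(a[0]))] for r in range(len(a))]
```

The max-absolute rule favors strong edges and textures in the detail bands, while the activity-weighted average keeps the low-pass fusion a convex combination of the two sources.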
Multi-focus image fusion with the all convolutional neural network
NASA Astrophysics Data System (ADS)
Du, Chao-ben; Gao, She-sheng
2018-01-01
A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, obtaining a satisfactory decision map is both necessary for a good fusion result and usually difficult to achieve. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling of the CNN is replaced by a convolution layer, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm can obtain state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
Energy-resolved neutron imaging for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Moran, M. J.; Haan, S. W.; Hatchett, S. P.; Izumi, N.; Koch, J. A.; Lerche, R. A.; Phillips, T. W.
2003-03-01
The success of the National Ignition Facility program will depend on diagnostic measurements which study the performance of inertial confinement fusion (ICF) experiments. Neutron yield, fusion-burn time history, and images are examples of important diagnostics. Neutron and x-ray images will record the geometries of compressed targets during the fusion-burn process. Such images provide a critical test of the accuracy of numerical modeling of ICF experiments. They also can provide valuable information in cases where experiments produce unexpected results. Although x-ray and neutron images provide similar data, they do have significant differences. X-ray images represent the distribution of high-temperature regions where fusion occurs, while neutron images directly reveal the spatial distribution of fusion-neutron emission. X-ray imaging has the advantage of a relatively straightforward path to the imaging system design. Neutron imaging, by using energy-resolved detection, offers the intriguing advantage of being able to provide independent images of burning and nonburning regions of the nuclear fuel. The usefulness of energy-resolved neutron imaging depends on both the information content of the data and on the quality of the data that can be recorded. The information content will relate to the characteristic neutron spectra that are associated with emission from different regions of the source. Numerical modeling of ICF fusion burn will be required to interpret the corresponding energy-dependent images. The exercise will be useful only if the images can be recorded with sufficient definition to reveal the spatial and energy-dependent features of interest. Several options are being evaluated with respect to the feasibility of providing the desired simultaneous spatial and energy resolution.
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more the stronger the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
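The weighted label fusion step described above can be sketched as follows, with atlas segmentations represented as signed distance maps (negative inside the structure). Here the per-atlas weights are passed in directly, whereas the paper derives them probabilistically from learned image-similarity relationships:

```python
def fuse_distance_maps(dist_maps, weights):
    """Fuse atlas segmentations given as signed distance maps
    (negative inside the structure) using per-atlas weights.
    Returns a binary label map (1 = inside the fused structure)."""
    rows, cols = len(dist_maps[0]), len(dist_maps[0][0])
    fused = [[sum(w * d[r][c] for w, d in zip(weights, dist_maps))
              for c in range(cols)] for r in range(rows)]
    # Label 1 wherever the weighted signed distance is negative (inside).
    return [[1 if fused[r][c] < 0 else 0 for c in range(cols)]
            for r in range(rows)]
```

Averaging distance maps instead of binary masks gives sub-voxel smoothing of the fused boundary, which is one reason distance-map fusion can outperform straight majority voting.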
NASA Astrophysics Data System (ADS)
Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.
2018-04-01
In recent years, the rapid upgrading and improvement of SAR sensors have provided beneficial complements to traditional optical remote sensing in theory, technology and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, with emphasis on dryland crop classification under a complex crop planting structure, taking corn and cotton as the research objects. Considering the differences among various data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey and wavelet transform (WT) methods were compared with each other, and the GS and Brovey methods proved to be more applicable in the study area. Then, classification was conducted based on an object-oriented technical process, and for the GS and Brovey fusion images and the GF-1 optical image, the nearest-neighbour algorithm was adopted for supervised classification with the same training samples. Based on the sample plots in the study area, an accuracy assessment was conducted subsequently. The overall accuracy and kappa coefficient of the fusion images were higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8 %, and the kappa coefficient was 0.644. Thus, the results showed that the GS and Brovey fusion images were superior to the optical images for dryland crop classification. This study suggests that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.
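Of the compared methods, the Brovey transform is the simplest to sketch: each low-resolution band is modulated by the ratio of the high-resolution image to the sum of the bands. This illustrative version works on plain nested lists and assumes co-registered inputs; using a SAR backscatter image in place of the panchromatic band is the fusion variant relevant here:

```python
def brovey_fuse(bands, pan, eps=1e-6):
    """Brovey-style fusion: bands is a list of co-registered 2-D grids
    (e.g. R, G, B); pan is the high-resolution 2-D grid (panchromatic
    or SAR). Each band is scaled by pan / sum(bands), pixel by pixel."""
    rows, cols = len(pan), len(pan[0])
    fused = []
    for band in bands:
        fused.append([[band[r][c] * pan[r][c] /
                       (sum(b[r][c] for b in bands) + eps)
                       for c in range(cols)] for r in range(rows)])
    return fused
```

The ratio form preserves the relative spectral proportions of the bands while injecting the high-resolution spatial detail, which is why Brovey results look sharp but can show spectral distortion where the band sum is small.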
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. SUBJECT TERMS: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera
Cha, Dong Ik; Lee, Min Woo; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Kim, Kyunga
2017-10-01
To identify the more accurate reference data set, between computed tomography (CT) and magnetic resonance (MR) images, for fusion imaging-guided radiofrequency ablation or biopsy of hepatic lesions. This study was approved by the institutional review board, and written informed consent was received from all patients. Twelve consecutive patients who were referred to assess the feasibility of radiofrequency ablation or biopsy were enrolled. Automatic registration using CT and MR images was performed in each patient. Registration errors during optimal and opposite respiratory phases, the time required for image fusion and the number of point locks used were compared using the Wilcoxon signed-rank test. The registration errors during the optimal respiratory phase were not significantly different between image fusion using CT and MR images as reference data sets (p = 0.969). During the opposite respiratory phase, the registration error was smaller with MR images than with CT (p = 0.028). The time and the number of point locks needed for complete image fusion were not significantly different between CT and MR images (p = 0.328 and p = 0.317, respectively). MR images would be more suitable than CT images as the reference data set for fusion imaging-guided procedures of focal hepatic lesions.
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Finally, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than those of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
Evaluation of MRI-US Fusion Technology in Sports-Related Musculoskeletal Injuries.
Wong-On, Manuel; Til-Pérez, Lluís; Balius, Ramón
2015-06-01
A combination of magnetic resonance imaging (MRI) with real-time high-resolution ultrasound (US), known as fusion imaging, may improve visualization of musculoskeletal (MSK) sports medicine injuries. The aim of this study was to evaluate the applicability of MRI-US fusion technology in MSK sports medicine. This study was conducted by the medical services of FC Barcelona. The participants included volunteers and referred athletes with symptomatic and asymptomatic MSK injuries. All cases underwent MRI, which was loaded into the US system for manual registration on the live US image and fusion imaging examination. After every test, an evaluation form was completed in terms of advantages, disadvantages, and anatomic fusion landmarks. From November 2014 to March 2015, we evaluated 20 subjects who underwent fusion imaging: 5 non-injured volunteers and 15 injured athletes, 11 symptomatic and 4 asymptomatic, age range 16-50 years, mean 22. We describe some of the anatomic landmarks used to guide fusion in different regions. This technology allowed us to examine muscle and tendon injuries simultaneously in US and MRI and to assess the correlation of both techniques, especially for low-grade muscular injuries. It has also helped compensate for the limited field of view of US. It improves spatial orientation for cartilage, labrum and meniscal injuries. However, a high-quality MRI image is essential in achieving an adequate fusion image, and 3D sequences need to be added to MRI protocols to improve navigation. The combination of real-time MRI and US image fusion and navigation is relatively easy to perform and is helping to improve understanding of MSK injuries. However, it requires specific skills in MSK imaging and still needs further research in sports-related injuries. Toshiba Medical Systems Corporation.
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely applied imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
Effective Multifocus Image Fusion Based on HVS and BP Neural Network
Yang, Yong
2014-01-01
The aim of multifocus image fusion is to fuse the images taken from the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on human visual system (HVS) and back propagation (BP) neural network is presented. Three features which reflect the clarity of a pixel are firstly extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct the initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image followed by morphological opening and closing operations. Finally, the final fused image is obtained by a fusion rule for those focused regions. Experimental results show that the proposed method can provide better performance and outperform several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
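The idea of combining multi-exposure estimates of a photoquantity can be sketched with a simplified per-pixel weighted least squares. This is not the authors' exact estimator: bias subtraction is reduced to a single scalar, and the weights (square of exposure time, with saturated samples excluded) are an assumed noise model rather than one from the paper:

```python
def fuse_exposures(frames, times, bias=0.0, sat=255):
    """Estimate a photoquantity per pixel from multiple exposures.
    frames: list of same-shaped 2-D grids of raw counts;
    times: matching exposure times. Each unsaturated sample contributes
    (x - bias) / t, combined by weighted least squares."""
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            num = den = 0.0
            for frame, t in zip(frames, times):
                x = frame[r][c]
                if x >= sat:          # saturated samples carry no information
                    continue
                w = t * t             # assume longer exposures are less noisy
                num += w * (x - bias) / t
                den += w
            out[r][c] = num / den if den else float(sat)
    return out
```

Excluding saturated samples while letting well-exposed long frames dominate is the essence of tonal-fidelity multi-exposure fusion; the paper's 'fuse-then-subtract' variant additionally reconstructs the bias frame explicitly before this step.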
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometrically regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible from an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed, which fuses the Bandelet coefficients of multi-source images to obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the performance of the new method is better than that of the tested methods according to objective evaluation indexes and subjective visual effects.
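In sparse-representation fusion, the combination step commonly reduces to an activity-based rule on the coefficient vectors. A minimal sketch, assuming the dictionary learning and sparse coding of the Bandelet coefficients happen elsewhere, using the widely used max-L1 activity measure (an assumption, not necessarily this paper's exact rule):

```python
def fuse_sparse_codes(codes_a, codes_b):
    """Per patch, keep the sparse coefficient vector with the larger
    L1 norm -- the vector with more 'activity' is assumed to come from
    the better-focused / more informative source."""
    fused = []
    for ca, cb in zip(codes_a, codes_b):
        la = sum(abs(v) for v in ca)
        lb = sum(abs(v) for v in cb)
        fused.append(ca if la >= lb else cb)
    return fused
```

The fused image would then be reconstructed by multiplying each fused code by the dictionary and applying the inverse Bandelet transform.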
Ishibashi, Kenichiro; Ito, Yohei; Masaki, Ayako; Fujii, Kana; Beppu, Shintaro; Sakakibara, Takeo; Takino, Hisashi; Takase, Hiroshi; Ijichi, Kei; Shimozato, Kazuo; Inagaki, Hiroshi
2015-11-01
There has been some debate as to whether a subset of metaplastic Warthin tumors (mWTs) harbor the mucoepidermoid carcinoma (MEC)-associated CRTC1-MAML2 fusion. We analyzed 15 tumors originally diagnosed as mWT (mWT-like tumors), 2 of which had concurrent MECs. We looked for the CRTC1/3-MAML2 fusion transcripts and performed immunohistochemistry for p63 and fluorescence in situ hybridization (FISH) for the MAML2 split. To localize MAML2 split-positive cells at the cellular level, whole tumor tissue sections were digitalized (whole-slide imaging [WSI]). The CRTC1-MAML2, but not CRTC3-MAML2 was detected in 5/15 mWT-like tumors. FISH-WSI results showed that all epithelial cells harbored the MAML2 split in fusion-positive mWT-like tumors and were totally negative in fusion-negative mWT-like tumors. A review of the hematoxylin and eosin-stained slides showed that morphology of the "metaplastic" epithelium was virtually indistinguishable between fusion-positive and fusion-negative tumors. However, oncocytic bilayered tumor epithelium, characteristic to typical WT, was always found somewhere in the fusion-negative tumors but not in the fusion-positive tumors. This distinguishing histologic finding enabled 5 pathologists to easily differentiate the 2 tumor groups with 100% accuracy. The age and sex distribution of fusion-positive mWT-like tumor cases was similar to that of fusion-positive MEC cases and significantly different from those of fusion-negative mWT-like tumor and typical WT cases. In addition, only fusion-positive mWT-like tumors possessed concurrent low-grade MECs. In conclusion, a subset of mWT-like tumors were positive for the CRTC1-MAML2 fusion and had many features that are more in accord with MEC than with WT. The term Warthin-like MEC should be considered for fusion-positive mWT-like tumors.
A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.
Schulz-Wendtland, Rüdiger; Jud, Sebastian M; Fasching, Peter A; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W; Emons, Julius
2017-06-01
The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate the image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype, an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. The quality of digital mammogram images produced by the fusion prototype was comparable to that of images produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour-intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound, the second important imaging modality in complementary breast diagnostics, without increasing examination time or requiring additional staff.
Fusion method of SAR and optical images for urban object extraction
NASA Astrophysics Data System (ADS)
Jia, Yonghong; Blum, Rick S.; Li, Fangfang
2007-11-01
A new image fusion method for SAR, Panchromatic (Pan) and multispectral (MS) data is proposed. First of all, SAR texture is extracted by ratioing the despeckled SAR image to its low pass approximation, and is used to modulate high pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high pass details modulated with the texture are applied to obtain the fusion product by the HPFM (High Pass Filter-based Modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road network) where SAR texture information enhances the fusion product, and the proposed approach is effective for image interpretation and classification.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. It also reduces redundant details, artifacts, and distortions.
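The PCA stage of the method above can be sketched in a few lines of numpy. This is a generic PCA fusion rule, not the authors' cascaded implementation, and the shift-invariant wavelet stage is omitted; all names are illustrative.

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Fuse two registered single-channel images by weighting each with
    the dominant eigenvector of their joint covariance matrix."""
    data = np.stack([img_a.ravel(), img_b.ravel()])   # 2 x N samples
    cov = np.cov(data)                                # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
    w = np.abs(eigvecs[:, -1])                        # dominant component
    w = w / w.sum()                                   # normalise weights
    return w[0] * img_a + w[1] * img_b

# Toy example: two 4 x 4 "modalities" of the same scene
rng = np.random.default_rng(0)
mri = rng.random((4, 4))
ct = 0.5 * mri + 0.1 * rng.random((4, 4))
fused = pca_fusion(mri, ct)
```

Because the weights are non-negative and sum to one, the fused image is a per-pixel convex combination of the inputs.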
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Kalpagam; Liu, Jeff; Kohli, Kirpal
Purpose: Fusion of electrical impedance tomography (EIT) with computed tomography (CT) can be useful as a clinical tool for providing additional physiological information about tissues, but requires suitable fusion algorithms and validation procedures. This work explores the feasibility of fusing EIT and CT images using an algorithm for coregistration. The imaging performance is validated through feature space assessment on phantom contrast targets. Methods: EIT data were acquired by scanning a phantom using a circuit, configured for injecting current through 16 electrodes, placed around the phantom. A conductivity image of the phantom was obtained from the data using electrical impedance and diffuse optical tomography reconstruction software (EIDORS). A CT image of the phantom was also acquired. The EIT and CT images were fused using a region of interest (ROI) coregistration fusion algorithm. Phantom imaging experiments were carried out on objects of different contrasts, sizes, and positions. The conductive medium of the phantoms was made of a tissue-mimicking bolus material that is routinely used in clinical radiation therapy settings. To validate the imaging performance in detecting different contrasts, the ROI of the phantom was filled with distilled water and normal saline. Spatially separated cylindrical objects of different sizes were used for validating the imaging performance in multiple target detection. Analyses of the CT, EIT and the EIT/CT phantom images were carried out based on the variations of contrast, correlation, energy, and homogeneity, using a gray level co-occurrence matrix (GLCM). A reference image of the phantom was simulated using EIDORS, and the performances of the CT and EIT imaging systems were evaluated and compared against the performance of the EIT/CT system using various feature metrics, detectability, and structural similarity index measures.
Results: In detecting distilled and normal saline water in bolus medium, EIT as a stand-alone imaging system showed contrast discrimination of 47%, while the CT imaging system showed a discrimination of only 1.5%. The structural similarity index measure showed a drop of 24% with EIT imaging compared to CT imaging. The average detectability measure for CT imaging was found to be 2.375 ± 0.19 before fusion. After complementing with EIT information, the detectability measure increased to 11.06 ± 2.04. Based on the feature metrics, the functional imaging quality of CT and EIT were found to be 2.29% and 86%, respectively, before fusion. Structural imaging quality was found to be 66% for CT and 16% for EIT. After fusion, functional imaging quality improved in CT imaging from 2.29% to 42% and the structural imaging quality of EIT imaging changed from 16% to 66%. The improvement in image quality was also observed in detecting objects of different sizes. Conclusions: The authors found a significant improvement in the contrast detectability performance of CT imaging when complemented with functional imaging information from EIT. Along with the feature assessment metrics, the concept of complementing CT with EIT imaging can lead to an EIT/CT imaging modality which might fully utilize the functional imaging abilities of EIT imaging, thereby enhancing the quality of care in the areas of cancer diagnosis and radiotherapy treatment planning.
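The GLCM texture features used in this assessment can be computed directly. A minimal numpy sketch for a horizontal neighbour at distance 1 (function name illustrative; assumes the image maximum is positive):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Quantise an image, build a grey-level co-occurrence matrix for the
    horizontal neighbour at distance 1, and derive contrast, homogeneity
    and energy from the normalised matrix. Assumes img.max() > 0."""
    q = np.floor(img / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                      # normalised co-occurrences
    ii, jj = np.indices(p.shape)
    contrast = (p * (ii - jj) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(ii - jj))).sum()
    energy = (p ** 2).sum()
    return contrast, homogeneity, energy

# A uniform image has zero contrast and maximal homogeneity/energy
contrast, homogeneity, energy = glcm_features(np.ones((8, 8)))
```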
Schwein, Adeline; Chinnadurai, Ponraj; Shah, Dipan J; Lumsden, Alan B; Bechara, Carlos F; Bismuth, Jean
2017-05-01
Three-dimensional image fusion of preoperative computed tomography (CT) angiography with fluoroscopy using intraoperative noncontrast cone-beam CT (CBCT) has been shown to improve endovascular procedures by reducing procedure length, radiation dose, and contrast media volume. However, patients with a contraindication to CT angiography (renal insufficiency, iodinated contrast allergy) may not benefit from this image fusion technique. The primary objective of this study was to evaluate the feasibility of magnetic resonance angiography (MRA) and fluoroscopy image fusion using noncontrast CBCT as a guidance tool during complex endovascular aortic procedures, especially in patients with renal insufficiency. All endovascular aortic procedures done under MRA image fusion guidance at a single-center were retrospectively reviewed. The patients had moderate to severe renal insufficiency and underwent diagnostic contrast-enhanced magnetic resonance imaging after gadolinium or ferumoxytol injection. Relevant vascular landmarks electronically marked in MRA images were overlaid on real-time two-dimensional fluoroscopy for image guidance, after image fusion with noncontrast intraoperative CBCT. Technical success, time for image registration, procedure time, fluoroscopy time, number of digital subtraction angiography (DSA) acquisitions before stent deployment or vessel catheterization, and renal function before and after the procedure were recorded. The image fusion accuracy was qualitatively evaluated on a binary scale by three physicians after review of image data showing virtual landmarks from MRA on fluoroscopy. Between November 2012 and March 2016, 10 patients underwent endovascular procedures for aortoiliac aneurysmal disease or aortic dissection using MRA image fusion guidance. All procedures were technically successful. A paired t-test analysis showed no difference between preimaging and postoperative renal function (P = .6). 
The mean time required for MRA-CBCT image fusion was 4:09 ± 01:31 min:sec. Total fluoroscopy time was 20.1 ± 6.9 minutes. Five of 10 patients (50%) underwent stent graft deployment without any predeployment DSA acquisition. Three of six vessels (50%) were cannulated under image fusion guidance without any precannulation DSA runs, and the remaining vessels were cannulated after one planning DSA acquisition. Qualitative evaluation showed 14 of 22 virtual landmarks (63.6%) from MRA overlaid on fluoroscopy were completely accurate, without the need for adjustment. Five of eight incorrect virtual landmarks (iliac and visceral arteries) resulted from vessel deformation caused by endovascular devices. Ferumoxytol or gadolinium-enhanced MRA imaging and image fusion with fluoroscopy using noncontrast CBCT is feasible and allows patients with renal insufficiency to benefit from optimal guidance during complex endovascular aortic procedures, while preserving their residual renal function. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Li, Kai; Su, Zhong-Zhen; Xu, Er-Jiao; Ju, Jin-Xiu; Meng, Xiao-Chun; Zheng, Rong-Qin
2016-04-18
To assess whether intraoperative use of contrast-enhanced ultrasound (CEUS)-CT/MR image fusion can accurately evaluate the ablative margin (AM) and guide supplementary ablation to improve the AM after hepatocellular carcinoma (HCC) ablation. Ninety-eight patients with 126 HCCs designated to undergo thermal ablation treatment were enrolled in this prospective study. CEUS-CT/MR image fusion was performed intraoperatively to evaluate whether a 5-mm AM was covered by the ablative area. If possible, supplementary ablation was applied at the site of inadequate AM. The CEUS image quality, the time used for CEUS-CT/MR image fusion and the success rate of image fusion were recorded. Local tumor progression (LTP) was observed during follow-up. Clinical factors including AM were examined to identify risk factors for LTP. The success rate of image fusion was 96.2% (126/131), and the duration required for image fusion was 4.9 ± 2.0 (3-13) min. The CEUS image quality was good in 36.1% (53/147) and medium in 63.9% (94/147) of the cases. By supplementary ablation, 21.8% (12/55) of lesions with inadequate AMs became adequate AMs. During follow-up, there were 5 LTPs in lesions with inadequate AMs and 1 LTP in lesions with adequate AMs. Multivariate analysis showed that AM was the only independent risk factor for LTP (hazard ratio, 9.167; 95% confidence interval, 1.070-78.571; p = 0.043). CEUS-CT/MR image fusion is feasible for intraoperative use and can serve as an accurate method to evaluate AMs and guide supplementary ablation to reduce the rate of inadequate AMs.
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao
2015-12-01
Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high and low frequency coefficients. The low frequency coefficients of the PAN image and the intensity component are fused by SR with the learned dictionary. The high frequency coefficients of the intensity component and the PAN image are fused by a local energy based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transform. The experimental results on IKONOS and QuickBird satellites demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, both in visual effect and objective evaluation.
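The IHS-based detail-injection idea behind such pansharpening methods can be illustrated with a much-simplified sketch (mean-of-bands intensity, direct detail injection); the NSST and sparse-representation machinery of the paper is not reproduced, and the function name is illustrative.

```python
import numpy as np

def ihs_detail_injection(ms_rgb, pan):
    """Inject PAN high-frequency detail into an upsampled MS image:
    approximate intensity as the band mean, then add the PAN-minus-
    intensity residual to every band."""
    intensity = ms_rgb.mean(axis=2)
    detail = pan - intensity                 # spatial detail to inject
    return np.clip(ms_rgb + detail[..., None], 0.0, 1.0)

ms = np.full((4, 4, 3), 0.4)                 # flat multispectral patch
pan = np.full((4, 4), 0.6)                   # brighter panchromatic band
sharpened = ihs_detail_injection(ms, pan)
```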
Ukimura, Osamu; Marien, Arnaud; Palmer, Suzanne; Villers, Arnauld; Aron, Manju; de Castro Abreu, Andre Luis; Leslie, Scott; Shoji, Sunao; Matsugasumi, Toru; Gross, Mitchell; Dasgupta, Prokar; Gill, Inderbir S
2015-11-01
To compare the diagnostic yield of targeted prostate biopsy using image fusion of multi-parametric magnetic resonance (mp-MR) with real-time trans-rectal ultrasound (TRUS) for clinically significant lesions that are suspicious only on mp-MR versus lesions that are suspicious on both mp-MR and TRUS. Pre-biopsy MRI and TRUS were each scaled on a 3-point score: highly suspicious, likely, and unlikely for clinically significant cancer (sPCa). Using an MR-TRUS elastic image-fusion system (Koelis), 127 consecutive patients with a suspicious clinically significant index lesion on pre-biopsy mp-MR underwent systematic biopsies and MR/US-fusion targeted biopsies (01/2010-09/2013). Biopsy histological outcomes were retrospectively compared with MR suspicion level and TRUS-visibility of the MR-suspicious lesion. sPCa was defined as biopsy Gleason score ≥7 and/or maximum cancer core length ≥5 mm. Targeted biopsies outperformed systematic biopsies in overall cancer detection rate (61 vs. 41 %; p = 0.007), sPCa detection rate (43 vs. 23 %; p = 0.0013), cancer core length (7.5 vs. 3.9 mm; p = 0.0002), and cancer rate per core (56 vs. 12 %; p < 0.0001), respectively. Highly suspicious lesions on mp-MR correlated with higher positive biopsy rate (p < 0.0001), higher Gleason score (p = 0.018), and greater cancer core length (p < 0.0001). Highly suspicious lesions on TRUS corresponding to MR-suspicious lesions had a higher biopsy yield (p < 0.0001) and higher sPCa detection rate (p < 0.0001). Since the majority of MR-suspicious lesions were also suspicious on TRUS, TRUS-visibility allowed selection of the specific MR-visible lesion which should be targeted from among the multiple TRUS-suspicious lesions in each prostate. MR-TRUS fusion-image-guided biopsies outperformed systematic biopsies. TRUS-visibility of an MR-suspicious lesion facilitates image-guided biopsies, resulting in higher detection of significant cancer.
Computational polarization difference underwater imaging based on image fusion
NASA Astrophysics Data System (ADS)
Han, Hongwei; Zhang, Xiaohui; Guan, Feng
2016-01-01
Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. Computational polarization difference imaging, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ∥ and ⊥ axes, is an improvement on conventional PDI. But it originally obtains the output image by manually setting the weight coefficient to an identical constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the nonsubsampled contourlet transform. An experimental system, consisting of a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to capture target images at different angles, was used to verify the proposed algorithm. The results showed that the output processed by our algorithm reveals more details of the flat target and has higher contrast than the original computational polarization difference imaging.
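The Stokes-parameter relations underlying computational PDI can be written down directly. A small sketch computing Q, U, and the degree of linear polarisation from four analyser angles (names illustrative):

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind an analyser at
    0/45/90/135 degrees, and the degree of linear polarisation (DoLP)."""
    I = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    Q = i0 - i90                             # horizontal/vertical preference
    U = i45 - i135                           # diagonal preference
    dolp = np.sqrt(Q ** 2 + U ** 2) / np.maximum(I, 1e-12)
    return Q, U, dolp

# Fully horizontally polarised light: all signal passes at 0 degrees
Q, U, dolp = stokes_dolp(1.0, 0.5, 0.0, 0.5)
```

The same functions apply elementwise to whole image arrays, which is how per-pixel DoLP maps are formed.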
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Sanjiv; Pritha, Ray
2015-07-14
Novel double and triple fusion reporter gene constructs harboring distinct imagable reporter genes are provided, as well as applications for the use of such double and triple fusion constructs in living cells and in living animals using distinct imaging technologies.
Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.
Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian
2016-12-31
For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.
Image fusion pitfalls for cranial radiosurgery.
Jonker, Benjamin P
2013-01-01
Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they have poorer resolution of the target in most cases). Thus image fusion can allow the definition of stereotactic space to proceed from the geometrically accurate CT images at the same time as using MRI to define the target. The use of image fusion is associated with risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls.
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map. Then, basic belief assignment is used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator, and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false alarm rate than the conventional algorithms.
Enhanced visualization of MR angiogram with modified MIP and 3D image fusion
NASA Astrophysics Data System (ADS)
Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.
1997-05-01
We have developed 3D image processing and display techniques that include image resampling, modification of MIP, volume rendering, and fusion of the MIP image with a volumetric rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volumetric rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlapped with a 3D volume rendered image of the brain. MIP was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other structures of the brain. The two images are fused after adjustment of the contrast and brightness levels of each image in such a way that both the vasculature and brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resultant image visualizes both the brain structure and vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
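The MIP and per-pixel-maximum fusion steps described above reduce to one-liners in numpy; a sketch under the assumption of a registered volume and rendered image (names illustrative):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis."""
    return volume.max(axis=axis)

def fuse_max(vessel_img, structure_img):
    """Fuse the MIP angiogram with a rendered structural image by taking
    the per-pixel maximum, one of the strategies described above."""
    return np.maximum(vessel_img, structure_img)

vol = np.zeros((3, 4, 4))
vol[1, 2, 2] = 5.0                           # a bright "vessel" voxel
projection = mip(vol)
fused = fuse_max(projection, np.ones((4, 4)))
```

The alternative strategy in the abstract, blending via separate color tables, would map each image through its own lookup table before compositing instead of taking the maximum.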
Li, Kai; Su, Zhongzhen; Xu, Erjiao; Huang, Qiannan; Zeng, Qingjing; Zheng, Rongqin
2017-01-19
To assess the accuracy of contrast-enhanced ultrasound (CEUS)-CT/MR image fusion in evaluating the radiofrequency ablative margin (AM) of hepatocellular carcinoma (HCC) based on a custom-made phantom model and in HCC patients. Twenty-four phantoms were randomly divided into a complete ablation group (n = 6) and an incomplete ablation group (n = 18). After radiofrequency ablation (RFA), the AM was evaluated using ultrasound (US)-CT image fusion, and the results were compared with the AM results that were directly measured in a gross specimen. CEUS-CT/MR image fusion and CT-CT/MR-MR image fusion were used to evaluate the AM in 37 tumors from 33 HCC patients who underwent RFA. The sensitivity, specificity, and accuracy of US-CT image fusion for evaluating AM in the phantom model were 93.8, 85.7 and 91.3%, respectively. The maximal thicknesses of the residual AM were 3.5 ± 2.0 mm and 3.2 ± 2.0 mm in the US-CT image fusion and gross specimen, respectively. No significant difference was observed between the US-CT image fusion and direct measurements of the AM of HCC. In the clinical study, the success rate of the AM evaluation was 100% for both CEUS-CT/MR and CT-CT/MR-MR, and the duration was 8.5 ± 2.8 min (range: 4-12 min) and 13.5 ± 4.5 min (range: 8-16 min) for CEUS-CT/MR and CT-CT/MR-MR, respectively. The sensitivity, specificity, and accuracy of CEUS-CT/MR imaging for evaluating the AM were 100.0, 80.0, and 90.0%, respectively. A phantom model composed of carrageenan gel and additives was suitable for the evaluation of HCC AM. CEUS-CT/MR image fusion can be used to evaluate HCC AM with high accuracy.
Fully Convolutional Network-Based Multifocus Image Fusion.
Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua
2018-07-01
As the optical lenses for cameras always have limited depth of field, the captured images with the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image using several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains. However, fusion rules are always a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012, to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inverted score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in both score maps. We apply a fully connected conditional random field (CRF) to the aggregative score map to compute and refine a binary decision map for the fusion task. Finally, we exploit the weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods, not only on a gray data set but also on a color data set. Experimental results show that the proposed method can achieve superior fusion performance in both human visual quality and objective assessment.
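The score-map aggregation and weighted fusion steps (without the FCN and CRF themselves) can be sketched as follows; a simple threshold stands in for the CRF refinement, and all names are illustrative.

```python
import numpy as np

def aggregate_and_fuse(score_a, score_b, img_a, img_b, threshold=0.5):
    """Average one focus score map with the inverse of the other to get an
    aggregative score map, threshold it to a binary decision map, and
    blend the two sources accordingly."""
    agg = 0.5 * (score_a + (1.0 - score_b))
    decision = (agg > threshold).astype(float)
    fused = decision * img_a + (1.0 - decision) * img_b
    return agg, decision, fused

score_a = np.array([[0.9, 0.1], [0.8, 0.2]])   # focus probability, source A
score_b = 1.0 - score_a                        # complementary map, source B
img_a = np.full((2, 2), 2.0)
img_b = np.full((2, 2), 3.0)
agg, decision, fused = aggregate_and_fuse(score_a, score_b, img_a, img_b)
```

With perfectly complementary score maps, the aggregative map equals the first score map and each pixel is taken from whichever source is judged in focus.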
Application of Sensor Fusion to Improve UAV Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks in remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.
Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.
Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing
2012-04-01
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and achieves better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than the preexisting methods.
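The two difference operators that feed the fusion step can be sketched directly; this sketch applies them per pixel (the local-mean windows of the full mean-ratio operator and the wavelet fusion itself are omitted, and the function name is illustrative).

```python
import numpy as np

def difference_images(img1, img2, eps=1e-6):
    """Log-ratio and mean-ratio difference operators used for SAR change
    detection; eps guards against division by zero."""
    log_ratio = np.abs(np.log((img2 + eps) / (img1 + eps)))
    mean_ratio = 1.0 - (np.minimum(img1, img2) + eps) / (np.maximum(img1, img2) + eps)
    return log_ratio, mean_ratio

before = np.ones((2, 2))
after = 2.0 * np.ones((2, 2))                # intensity doubled everywhere
log_ratio, mean_ratio = difference_images(before, after)
```

The log-ratio operator compresses large intensity changes (useful against speckle), while the mean-ratio operator preserves them, which is exactly the complementarity the paper's fusion exploits.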
Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca
2012-10-15
Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.
Adaptive polarization image fusion based on regional energy dynamic weighted average
NASA Astrophysics Data System (ADS)
Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai
2005-11-01
According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, combining these images can remove clutter efficiently while maintaining the detailed information. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted average is proposed in this paper to combine these images. Experiments and simulations show that most clutter is removed by this algorithm. The fusion method is also simulated under different light conditions, and the influence of lighting conditions on the fusion results is analyzed.
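The regional-energy weighting idea can be sketched as below. This is a minimal NumPy illustration, assuming a 3 x 3 energy window and per-pixel weights proportional to regional energy; the abstract does not detail the authors' adaptive scheme.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def regional_energy(img, w=3):
    """Sum of squared intensities over a w x w neighbourhood of each pixel."""
    p = np.pad(img.astype(float) ** 2, w // 2, mode='edge')
    return sliding_window_view(p, (w, w)).sum(axis=(2, 3))

def fuse_dynamic_weighted(img_a, img_b, w=3, eps=1e-12):
    """Pixel-wise weighted average where each source's weight follows its
    regional energy, so the locally stronger structure dominates the fusion."""
    ea = regional_energy(img_a, w)
    eb = regional_energy(img_b, w)
    wa = ea / (ea + eb + eps)          # dynamic weight for source A
    return wa * img_a + (1.0 - wa) * img_b
```

Regions where one source carries little local energy (e.g. clutter suppressed in the degree-of-linear-polarization image) are then dominated by the other source.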
Data fusion of Landsat TM and IRS images in forest classification
Guangxing Wang; Markus Holopainen; Eero Lukkarinen
2000-01-01
Data fusion of Landsat TM images and the Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of the TM or IRS image only. The aim was to combine the high spatial resolution of IRS-1C PAN with the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...
Soyama, Takeshi; Sakuhara, Yusuke; Kudo, Kohsuke; Abo, Daisuke; Wang, Jeff; Ito, Yoichi M; Hasegawa, Yu; Shirato, Hiroki
2016-07-01
This preliminary study compared ultrasonography-computed tomography (US-CT) fusion imaging and conventional ultrasonography (US) for accuracy and time required for target identification using a combination of real phantoms and sets of digitally modified computed tomography (CT) images (digital/real hybrid phantoms). In this randomized prospective study, 27 spheres visible on B-mode US were placed at depths of 3.5, 8.5, and 13.5 cm (nine spheres each). All 27 spheres were digitally erased from the CT images, and a radiopaque sphere was digitally placed at each of the 27 locations to create 27 different sets of CT images. Twenty clinicians were instructed to identify the sphere target using US alone and fusion imaging. The accuracy of target identification of the two methods was compared using McNemar's test. The mean time required for target identification and error distances were compared using paired t tests. At all three depths, target identification was more accurate and the mean time required for target identification was significantly less with US-CT fusion imaging than with US alone, and the mean error distances were also shorter with US-CT fusion imaging. US-CT fusion imaging was superior to US alone in terms of accurate and rapid identification of target lesions.
Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images
NASA Astrophysics Data System (ADS)
Chen, Y.; Pang, L.; Liu, H.; Xu, X.
2018-04-01
PMMW imaging systems can create interpretable imagery of objects concealed under clothing, which gives them a great advantage in security check systems. This paper addresses wavelet fusion for detecting concealed objects using passive millimeter wave (PMMW) sequence images. First, according to the image characteristics and storage methods of the real-time PMMW imager, the sum of squared differences (SSD) is used as an image-correlation parameter to screen the sequence images. Second, the selected images are fused using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most relevant images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.
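The SSD-based screening step can be sketched as follows. This is a minimal NumPy illustration in which the frames most similar (lowest SSD) to a chosen reference frame are retained; using the first frame as the reference is an assumption.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two frames."""
    return float(((a.astype(float) - b.astype(float)) ** 2).sum())

def screen_frames(frames, k, ref_index=0):
    """Return the indices of the k frames most correlated with the
    reference frame, i.e. those with the smallest SSD against it."""
    ref = frames[ref_index]
    scores = [ssd(f, ref) for f in frames]
    return sorted(np.argsort(scores)[:k].tolist())
```

The selected frames would then feed the wavelet fusion stage before mean filtering, thresholding and edge detection.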
Visual affective classification by combining visual and text features.
Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming
2017-01-01
Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) evidence theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short texts associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
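Dempster's rule of combination, the core of the D-S fusion scheme mentioned above, can be sketched as follows. This is an illustrative implementation over mass functions keyed by frozensets of hypotheses, not the paper's actual feature-fusion code.

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts mapping frozensets of
    hypotheses to masses) with Dempster's rule, normalising out conflict."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a & b
            if c:                       # compatible evidence: reinforce
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:                       # contradictory evidence: conflict mass
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

For example, if a visual classifier assigns mass 0.6 to "positive" (rest to the full frame) and a textual classifier assigns 0.7, the combined belief in "positive" rises to 0.88.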
Progressive multi-atlas label fusion by dictionary evolution.
Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang
2017-02-01
Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: To investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC treated with anti-PD-1 checkpoint blockade. Using PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images of the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy and area under the ROC curve (AUROC) of 87.5% and 0.82 on the validation dataset, respectively, compared to 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than the individual image features.
Minami, Yasunori; Minami, Tomohiro; Hagiwara, Satoru; Ida, Hiroshi; Ueshima, Kazuomi; Nishida, Naoshi; Murakami, Takamichi; Kudo, Masatoshi
2018-05-01
To assess the clinical feasibility of US-US image overlay fusion with evaluation of the ablative margin in radiofrequency ablation (RFA) for hepatocellular carcinoma (HCC). Fifty-three patients with 68 HCCs measuring 0.9-4.0 cm who underwent RFA guided by US-US overlay image fusion were included in this retrospective study. By overlaying the pre- and postoperative US, the tumor image could be projected onto the ablative hyperechoic zone, so the ablative margin could be shown three-dimensionally during the RFA procedure. US-US image overlay was compared to dynamic CT a few days after RFA for assessment of early treatment response. Accuracy of graded response was calculated, and the performance of US-US image overlay fusion was compared with that of CT using a kappa agreement test. Technically effective ablation was achieved in a single session, and a 5-mm margin was obtained on CT in 59 HCCs (86.8%). The response with US-US image overlay correctly predicted early CT evaluation with an accuracy of 92.6% (63/68) (k = 0.67; 95% CI: 0.39-0.95). US-US image overlay fusion is feasible as guidance in RFA with a safety margin and predicts the early treatment response with high accuracy. • US-US image overlay fusion visualizes the ablative margin during the RFA procedure. • Visualizing the margin during the procedure can prompt immediate complementary treatment. • US image fusion correlates with the results of early evaluation CT.
[Experience of Fusion image guided system in endonasal endoscopic surgery].
Wen, Jingying; Zhen, Hongtao; Shi, Lili; Cao, Pingping; Cui, Yonghua
2015-08-01
To review endonasal endoscopic surgeries aided by the Fusion image guided system, and to explore the application value of the Fusion image guided system in endonasal endoscopic surgery. Retrospective research. Sixty cases of endonasal endoscopic surgery aided by the Fusion image guided system were analysed, including chronic rhinosinusitis with polyps (n = 10), fungal sinusitis (n = 5), endoscopic optic nerve decompression (n = 16), inverted papilloma of the paranasal sinus (n = 9), ossifying fibroma of the sphenoid bone (n = 1), malignancy of the paranasal sinus (n = 9), cerebrospinal fluid leak (n = 5), hemangioma of the orbital apex (n = 2) and orbital reconstruction (n = 3). All sixty endonasal endoscopic surgeries were completed successfully without any complications. The Fusion image guided system can help to identify the ostium of the paranasal sinus, the lamina papyracea and the skull base. Fused CT-CTA images or fused MR-MRA images can help to localize the optic nerve or the internal carotid artery. Fused CT-MR images can help to determine the extent of the tumor. Preoperative preparation for the image guided system took (7.13 ± 1.358) minutes, and the surgical navigation accuracy reached less than 1 mm once the operators became proficient. There were no device localization problems caused by signal blockage or loosening of the head set. The Fusion image guided system makes endonasal endoscopic surgery truly minimally invasive and precise. It requires little preoperative preparation time, has high surgical navigation accuracy, improves surgical safety and reduces surgical complications.
Dong, Yi; Wang, Wen-Ping; Mao, Feng; Ji, Zheng-Biao; Huang, Bei-Jian
2016-04-01
The aim of this study is to explore the value of volume navigation image fusion-assisted contrast-enhanced ultrasound (CEUS) for detecting and guiding radiofrequency ablation of hepatocellular carcinomas (HCCs) undetectable on conventional ultrasound. From May 2012 to May 2014, 41 patients with 49 HCCs were included in this study. All lesions were detected by dynamic magnetic resonance imaging (MRI) and planned for radiofrequency ablation but were undetectable on conventional ultrasound. After a bolus injection of 2.4 ml SonoVue® (Bracco, Italy), a LOGIQ E9 ultrasound system with volume navigation system (version R1.0.5, GE Healthcare, Milwaukee, WI, USA) was used to fuse CEUS and MRI images. The fusion time, fusion success rate, lesion enhancement pattern and detection rate were analyzed. Image fusion was conducted successfully in all 49 HCCs; the technical success rate was 100%. The average fusion time was (9.2 ± 2.1) min (6-12 min). The mean diameter of the HCCs was 25.2 ± 5.3 mm (mean ± SD), and the mean depth was 41.8 ± 17.2 mm. The detection rate of HCCs using CEUS/MRI image fusion (95.9%, 47/49) was significantly higher than with CEUS (42.9%, 21/49) (P < 0.05). For small HCCs (diameter, 1-2 cm), the detection rate using image fusion (96.9%, 32/33) was also significantly higher than with CEUS (18.2%, 6/33) (P < 0.01). All HCCs displayed a rapid wash-in pattern in the arterial phase of CEUS. Image fusion combining CEUS and MRI is a promising technique to improve the detection, precise localization and accurate diagnosis of HCCs undetectable on conventional ultrasound, especially small and atypical HCCs. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Jun, Yong Woong; Wang, Taejun; Hwang, Sekyu; Kim, Dokyoung; Ma, Donghee; Kim, Ki Hean; Kim, Sungjee; Jung, Junyang; Ahn, Kyo Han
2018-06-05
Vesicles exchange their contents through two membrane fusion processes: kiss-and-run and full-collapse fusion. Indirect observation of these fusion processes using artificial vesicles has enhanced our understanding of the molecular mechanisms involved. Direct observation of the fusion processes in a real biological system, however, remains a challenge owing to many technical obstacles. We disclose a ratiometric two-photon probe offering real-time tracking of lysosomal ATP with quantitative information for the first time. By applying the probe to two-photon live-cell imaging, the lysosomal membrane fusion process in cells has been directly observed along with the concentration of its content, lysosomal ATP. Results show that the kiss-and-run process between lysosomes proceeds through repeated transient interactions with gradual content mixing, whereas the full-fusion process occurs at once. Furthermore, it is confirmed that both fusion processes proceed with conservation of the content. Such a small-molecule probe exerts minimal disturbance and hence has potential for studying various biological processes associated with lysosomal ATP. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dynamic image fusion and general observer preference
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Doe, Joshua M.
2010-04-01
Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.
Transperineal prostate biopsy with ECHO-MRI fusion. Biopsee system. Initial experience.
Romero-Selas, E; Cuadros, V; Montáns, J; Sánchez, E; López-Alcorocho, J M; Gómez-Sancha, F
2016-06-01
The aim of this study is to present our initial experience with the stereotactic echo-MRI fusion system for diagnosing prostate cancer. Between September 2014 and January 2015, we performed 50 prostate biopsies using the stereotactic echo-MRI fusion system. The 3-Tesla multiparametric MR images were superimposed using this image fusion system on 3D echo images obtained with the Biopsee system for the exact locating of areas suspected of prostate cancer. The lesions were classified using the Prostate Imaging Reporting and Data System. We assessed a total of 50 patients, with a mean age of 63 years (range, 45-79), a mean prostate-specific antigen level of 8 ng/mL (range, 1.9-20) and a mean prostate volume of 52 mL (range, 12-118). Prostate cancer was diagnosed in 69% of the patients and intraepithelial neoplasia in 6%. The results of the biopsy were negative for 24% of the patients. The results of the biopsy and MRI were in agreement for 62% of the patients; however, 46% also had a tumour outside of the suspicious lesion. We diagnosed 46% anterior tumours and 33% apical tumours. One patient had a haematuria, another had a haematoma and a third had acute urinary retention. Multiparametric prostatic MRI helps identify prostate lesions suggestive of cancer. The Biopsee echo-MRI fusion system provides for guided biopsy and increases the diagnostic performance, reducing the false negatives of classical biopsies and increasing the diagnosis of anterior tumours. Transperineal access minimises the risk of prostatic infection and sepsis. Copyright © 2015 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.
Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias
2017-11-27
Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
Image fusion pitfalls for cranial radiosurgery
Jonker, Benjamin P.
2013-01-01
Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they have poorer resolution of the target in most cases). Thus image fusion can allow the definition of stereotactic space to proceed from the geometrically accurate CT images at the same time as using MRI to define the target. The use of image fusion is associated with risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls. PMID:23682338
Walter, Uwe; Müller, Jan-Uwe; Rösche, Johannes; Kirsch, Michael; Grossmann, Annette; Benecke, Reiner; Wittstock, Matthias; Wolters, Alexander
2016-03-01
A combination of preoperative magnetic resonance imaging (MRI) with real-time transcranial ultrasound, known as fusion imaging, may improve postoperative control of deep brain stimulation (DBS) electrode location. Fusion imaging, however, employs a weak magnetic field for tracking the position of the ultrasound transducer and the patient's head. Here we assessed its feasibility, safety, and clinical relevance in patients with DBS. Eighteen imaging sessions were conducted in 15 patients (7 women; aged 52.4 ± 14.4 y) with DBS of the subthalamic nucleus (n = 6), globus pallidus interna (n = 5), ventro-intermediate (n = 3), or anterior (n = 1) thalamic nucleus and clinically suspected lead displacement. The minimum distance between the DBS generator and the magnetic field transmitter was kept at 65 cm. The pre-implantation MRI dataset was loaded into the ultrasound system for the fusion imaging examination. The DBS lead position was rated using validated criteria. Generator DBS parameters and the neurological state of the patients were monitored. Magnetic resonance-ultrasound fusion imaging and volume navigation were feasible in all cases and provided real-time imaging of the DBS lead and its location within the superimposed magnetic resonance images. Of 35 assessed lead locations, 30 were rated optimal, three suboptimal, and two displaced. In two cases, electrodes were re-implanted after confirming their inappropriate location on computed tomography (CT) scan. No influence of fusion imaging on the clinical state of patients, or on DBS implantable pulse generator function, was found. Magnetic resonance-ultrasound real-time fusion imaging of DBS electrodes is safe with distinct precautions and improves assessment of electrode location. It may lower the need for repeated CT or MRI scans in DBS patients. © 2015 International Parkinson and Movement Disorder Society.
Schulz-Wendtland, Rüdiger; Jud, Sebastian M.; Fasching, Peter A.; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W.; Emons, Julius
2017-01-01
Aim The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Materials and Methods Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS™ 6). For the preclinical fusion prototype, an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. Results The quality of digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour-intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. Conclusion In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound – the second important imaging modality in complementary breast diagnostics – without increasing examination time or requiring additional staff. PMID:28713173
Fusion of High Resolution Multispectral Imagery in Vulnerable Coastal and Land Ecosystems.
Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier; Garcia-Pedrero, Angel; Rodriguez-Esparragon, Dionisio
2017-01-25
Ecosystems provide a wide variety of useful resources that enhance human welfare, but these resources are declining due to climate change and anthropogenic pressure. In this work, three vulnerable ecosystems are studied, including shrublands, coastal areas with dune systems and areas of shallow water. As far as the reduction of these resources is concerned, remote sensing and image processing techniques could contribute to the management of these natural resources in a practical and cost-effective way, although some improvements are needed to obtain higher-quality information. An important quality improvement is fusion at the pixel level. Hence, the objective of this work is to assess which pansharpening technique provides the best fused image for the different types of ecosystems. After a preliminary evaluation of twelve classic and novel fusion algorithms, a total of four pansharpening algorithms was analyzed using six quality indices. The quality assessment was implemented not only for the whole set of multispectral bands, but also for the subsets of spectral bands covered by the wavelength range of the panchromatic image and outside of it. A better quality result is observed in the fused image using only the bands covered by the panchromatic band range. It is important to highlight the use of these techniques not only in land and urban areas, but also, as a novel analysis, in shallow water ecosystems. Although the algorithms do not show a large difference in land and coastal areas, coastal ecosystems require simpler algorithms, such as fast intensity hue saturation, whereas more heterogeneous ecosystems need advanced algorithms, such as weighted wavelet 'à trous' through fractal dimension maps for shrublands and mixed ecosystems. Moreover, quality map analysis was carried out in order to study the fusion result in each band at the local level.
Finally, to demonstrate the performance of these pansharpening techniques, advanced object-based (OBIA) support vector machine classification was applied, and a thematic map for the shrubland ecosystem was obtained, which corroborates weighted wavelet 'à trous' through fractal dimension maps as the best fusion algorithm for this ecosystem.
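The fast intensity hue saturation scheme singled out above for coastal areas can be sketched in a few lines. This is an illustrative NumPy version in which the intensity component is taken as the unweighted band mean, which is an assumption; practical implementations often use sensor-specific band weights.

```python
import numpy as np

def fast_ihs_pansharpen(ms, pan):
    """Fast IHS pansharpening: inject the difference between the
    panchromatic band and the multispectral intensity into every band.
    ms: (H, W, B) multispectral image resampled to the pan grid;
    pan: (H, W) panchromatic image."""
    intensity = ms.mean(axis=2)                    # intensity component
    return ms + (pan - intensity)[..., np.newaxis]  # spatial detail injection
```

Because the same correction is added to each band, the method is fast and preserves the spectral differences between bands, which is one reason it suits the relatively homogeneous coastal scenes mentioned above.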
Fusion Imaging: A Novel Staging Modality in Testis Cancer
Sterbis, Joseph R.; Rice, Kevin R.; Javitt, Marcia C.; Schenkman, Noah S.; Brassell, Stephen A.
2010-01-01
Objective: Computed tomography and chest radiographs provide the standard imaging for staging, treatment, and surveillance of testicular germ cell neoplasms. Positron emission tomography has recently been utilized for staging, but is somewhat limited in its ability to provide anatomic localization. Fusion imaging combines the metabolic information provided by positron emission tomography with the anatomic precision of computed tomography. To the best of our knowledge, this represents the first study of the effectiveness using fusion imaging in evaluation of patients with testis cancer. Methods: A prospective study of 49 patients presenting to Walter Reed Army Medical Center with testicular cancer from 2003 to 2009 was performed. Fusion imaging was compared with conventional imaging, tumor markers, pathologic results, and clinical follow-up. Results: There were 14 true positives, 33 true negatives, 1 false positive, and 1 false negative. Sensitivity, specificity, positive predictive value, and negative predictive value were 93.3, 97.0, 93.3, and 97.0% respectively. In 11 patient scenarios, fusion imaging differed from conventional imaging. Utility was found in superior lesion detection compared to helical computed tomography due to anatomical/functional image co-registration, detection of micrometastasis in lymph nodes (pathologic nodes < 1cm), surveillance for recurrence post-chemotherapy, differentiating fibrosis from active disease in nodes < 2.5cm, and acting as a quality assurance measure to computed tomography alone. Conclusions: In addition to demonstrating a sensitivity and specificity comparable or superior to conventional imaging, fusion imaging shows promise in providing additive data that may assist in clinical decision-making. PMID:21103077
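The headline statistics in the testis cancer study above follow directly from its confusion-matrix counts (14 true positives, 33 true negatives, 1 false positive, 1 false negative). A quick sketch of the arithmetic, reproducing the reported sensitivity, specificity, PPV, and NPV to within rounding:

```python
# Diagnostic performance metrics from the confusion-matrix counts reported
# above for PET/CT fusion imaging in testis cancer staging.

def diagnostic_metrics(tp, tn, fp, fn):
    """Return sensitivity, specificity, PPV and NPV as percentages."""
    sensitivity = 100.0 * tp / (tp + fn)   # true-positive rate
    specificity = 100.0 * tn / (tn + fp)   # true-negative rate
    ppv = 100.0 * tp / (tp + fp)           # positive predictive value
    npv = 100.0 * tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=14, tn=33, fp=1, fn=1)
```

With these counts, sensitivity and PPV come out at 93.3% and specificity and NPV at about 97.1%, matching the reported values to within rounding.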
A computerized scheme for lung nodule detection in multiprojection chest radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Wei; Li Qiang; Boyce, Sarah J.
2012-04-15
Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively.
The low performance of the conventional CAD scheme may be attributed to the high noise level in chest radiography, and the small size and low contrast of most nodules. Conclusions: This study indicated that the fusion of correlation information in multiprojection chest radiography can markedly improve the performance of a CAD scheme for lung nodule detection.
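The integration step of the fusion CAD scheme above (keep candidates detected in at least two of the three registered projections, discard single-view detections) can be sketched with a simple clustering-and-voting pass. The candidate coordinates, the greedy clustering, and the 5-pixel tolerance are all illustrative assumptions, not details from the paper:

```python
# Sketch of multiview candidate fusion: cluster registered detections and
# keep only clusters seen in at least two distinct projection images.

def fuse_candidates(registered, min_views=2, tol=5.0):
    """registered: list of (view_id, x, y) detections in a common frame.
    Greedily cluster detections within `tol` pixels of a cluster centroid
    and keep clusters supported by >= min_views distinct views."""
    clusters = []  # each: {"views": set of view ids, "pts": [(x, y), ...]}
    for view, x, y in registered:
        for c in clusters:
            cx = sum(p[0] for p in c["pts"]) / len(c["pts"])
            cy = sum(p[1] for p in c["pts"]) / len(c["pts"])
            if (cx - x) ** 2 + (cy - y) ** 2 <= tol ** 2:
                c["views"].add(view)
                c["pts"].append((x, y))
                break
        else:
            clusters.append({"views": {view}, "pts": [(x, y)]})
    return [c for c in clusters if len(c["views"]) >= min_views]

detections = [  # (projection index, x, y); hypothetical data
    (0, 100.0, 120.0), (1, 102.0, 119.0),  # same lesion seen in two views
    (2, 300.0, 40.0),                      # false positive, one view only
]
kept = fuse_candidates(detections)
```

Only the two-view cluster survives, which is exactly how the integration step removes single-view false positives.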
Kang, Tae Wook; Lee, Min Woo; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun
2017-01-01
To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve the lesion conspicuity and feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and evaluate its impact on clinical decision making. The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Among 711 patients, 16 patients (2.3%) were included in the study. The median size of target lesions was 1.1 cm (range, 0.5-1.9 cm) in pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.6% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision-making.
Orita, Sumihisa; Yamagata, Masatsune; Ikeda, Yoshikazu; Nakajima, Fumitake; Aoki, Yasuchika; Nakamura, Junichi; Takahashi, Kazuhisa; Suzuki, Takane; Ohtori, Seiji
2015-10-17
Lumbar floating fusion occasionally causes postoperative adjacent segment disorder (ASD) at lumbosacral level, causing L5 spinal nerve disorder by L5-S1 foraminal stenosis. The disorder is considered to be one of the major outcomes of L5-S1 ASD, which has not been evaluated yet. The present study aimed to evaluate the incidence and risk factors of postoperative L5 spinal nerve disorder after lumbar interbody fusion extending to the L5 vertebra. We evaluated 125 patients with a diagnosis of spondylolisthesis who underwent floating fusion surgery with transforaminal lumbar interbody fusion with average postoperative period of 25.2 months. The patients were regarded as symptomatic with postoperative L5 spinal nerve disorder such as radicular pain/numbness in the lower limbs and/or motor dysfunction. We estimated and compared the wedging angle (frontal view) and height (lateral view) of the lumbosacral junction in pre- and postoperative plain X-ray images and the foraminal ratio (ratio of the narrower foraminal diameter to the wider diameter in the craniocaudal direction) in the preoperative magnetic resonance image. Risk factors for the incidence of L5 spinal nerve disorder were explored using multivariate logistic regression. Eight of the 125 patients (6.4%) were categorized as symptomatic, an average of 13.3 months after surgery. The wedging angle was significantly higher, and the foraminal ratio was significantly decreased in the symptomatic group (both P < 0.05) compared to the asymptomatic group. Multivariate logistic regression analysis of possible risk factors revealed that the wedging angle, foraminal ratio, and multileveled fusion were statistically significant. Higher wedging angle and lower foraminal ratio in the lumbosacral junction were significantly predictive for the incidence of L5 nerve root disorder as well as multiple-leveled fusion. 
These findings indicate that lumbosacral fixation should be considered for patients with these risk factors even if they have few symptoms from the L5-S1 junction.
Image construction from the IRAS survey and data fusion
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. R.
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters, temperature, density and composition, can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
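Of the three image-construction techniques compared above, co-addition is the simplest: samples from many scans are accumulated onto a regular pixel grid and normalised by local coverage. A minimal sketch with a nearest-pixel footprint and synthetic samples (real survey co-addition also needs per-detector response weighting, which is omitted here):

```python
import numpy as np

def coadd(samples, shape):
    """samples: iterable of (row, col, value) scan samples at sub-pixel
    positions; returns the co-added image on a `shape` pixel grid."""
    acc = np.zeros(shape)
    cov = np.zeros(shape)
    for r, c, v in samples:
        i, j = int(round(r)), int(round(c))      # nearest-pixel footprint
        acc[i, j] += v
        cov[i, j] += 1.0
    return acc / np.maximum(cov, 1.0)            # uncovered pixels stay 0

samples = [(0.2, 0.1, 2.0), (0.3, 0.0, 4.0), (1.9, 1.8, 6.0)]
image = coadd(samples, (3, 3))
```

The two samples falling in pixel (0, 0) average to 3.0; pixels with no coverage remain zero, which is one reason the non-uniform IRAS sampling favoured the more sophisticated reconstructions.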
Autofocus and fusion using nonlinear correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué, E-mail: josue@cicese.mx; Coronel-Beltrán, Ángel
2014-10-06
In this work a new algorithm is proposed for autofocus and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements spiral scanning of each image in the stack f(x, y)_w to define the vector V_w. The spectrum FV_w of each vector is calculated by the fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, the vector of the reference image, with each of the other FV_w vectors in the stack. In addition, fusion is performed with a subset of selected images f(x, y)_SBF, namely the images with the best focus measure. Fusion creates a new improved image f(x, y)_F by selecting the pixels of higher intensity.
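The focus measure above can be sketched as follows: each image is scanned into a 1-D vector (a plain raster vector here stands in for the paper's spiral scan), Fourier-transformed, and compared with the reference vector through a k-law nonlinear correlation. The signal, the blur kernel, and the exponent k below are illustrative assumptions:

```python
import numpy as np

def klaw_correlation_peak(ref_spec, spec, k=0.3):
    """Peak magnitude of the k-law nonlinear correlation of two 1-D spectra:
    magnitudes raised to the power k, phase difference preserved."""
    prod = ((np.abs(ref_spec) ** k) * (np.abs(spec) ** k)
            * np.exp(1j * (np.angle(spec) - np.angle(ref_spec))))
    return np.abs(np.fft.ifft(prod)).max()

rng = np.random.default_rng(0)
v = rng.random(64)                                  # stand-in scan vector
kernel = np.array([0.25, 0.5, 0.25])                # mild defocus blur
blurred = np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(kernel, 64)))

spec_ref = np.fft.fft(v)
peak_sharp = klaw_correlation_peak(spec_ref, spec_ref)
peak_blur = klaw_correlation_peak(spec_ref, np.fft.fft(blurred))
```

Defocus attenuates the high-frequency spectrum, so the correlation peak against the reference drops, and ranking the stack by this peak picks out the best-focused image.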
Research on fusion algorithm of polarization image in tetrolet domain
NASA Astrophysics Data System (ADS)
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes, which are shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. Firstly, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and directions using the tetrolet transform. For the low-frequency coefficients, average fusion is used. For the directional high-frequency coefficients, according to the differences in edge distribution among the high-frequency sub-band images, the better coefficients are selected for fusion by a region spectrum entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and that the fused image has a better subjective visual effect.
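The fusion rule described above (average the low-frequency band, select high-frequency coefficients by a regional activity measure) can be sketched with a one-level Haar transform standing in for the tetrolet transform and coefficient magnitude standing in for the region spectrum entropy; both substitutions are simplifications:

```python
import numpy as np

def haar2(img):
    """One-level 2x2 Haar decomposition into LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0, (a + b - c - d) / 4.0,
            (a - b + c - d) / 4.0, (a - b - c + d) / 4.0)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    ll1, lh1, hl1, hh1 = haar2(img1)
    ll2, lh2, hl2, hh2 = haar2(img2)
    ll = (ll1 + ll2) / 2.0                        # average the low band
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return ihaar2(ll, pick(lh1, lh2), pick(hl1, hl2), pick(hh1, hh2))

rng = np.random.default_rng(2)
img1, img2 = rng.random((8, 8)), rng.random((8, 8))
fused = fuse(img1, img2)
```

Fusing an image with itself returns the image, a quick sanity check that the transform pair and selection rule are consistent.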
Baños-Capilla, M C; García, M A; Bea, J; Pla, C; Larrea, L; López, E
2007-06-01
The quality of dosimetry in radiotherapy treatment requires the accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use to obtain images to delimit the target in radiotherapy treatment. Several phantoms have been used to verify image correlation, in combination with fiducial markers, which were used as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms. We first studied the fusion method used by the PET/CT system by GEMS (hardware fusion), on the basis that there is satisfactory coincidence between the reconstruction centers in the CT and PET systems; and secondly the fiducial fusion, a registration method based on a least-squares fitting algorithm over a system of landmark points. The study concluded with the verification of the centroid position of some phantom components in both imaging modalities. Centroids were estimated through a calculation similar to center-of-mass, weighted by the value of the CT number and the uptake intensity in PET. The mean deviations found for the hardware fusion method were |Δx| ± σ = 3.3 mm ± 1.0 mm and |Δx| ± σ = 3.6 mm ± 1.0 mm. These values were substantially improved upon applying fiducial fusion based on external landmark points: |Δx| ± σ = 0.7 mm ± 0.8 mm and |Δx| ± σ = 0.3 mm ± 1.7 mm.
We also noted that differences found for each of the fusion methods were similar for both the axial and helical CT image acquisition protocols.
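The fiducial registration described above rests on a least-squares fit between matched landmark positions. A minimal sketch of that step using the standard SVD-based rigid (Procrustes) solution; the marker coordinates and the restriction to a purely rigid transform are illustrative assumptions, not details from the paper:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t so that, for matched
    landmark rows, dst ~ src @ R.T + t (Procrustes / SVD solution)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # guard against reflections
    t = dc - R @ sc
    return R, t

# Synthetic fiducials: rotate by 30 degrees about z and translate.
theta = np.pi / 6
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
rng = np.random.default_rng(3)
src = rng.random((5, 3))                         # marker positions, modality A
dst = src @ R_true.T + t_true                    # same markers, modality B
R, t = rigid_fit(src, dst)
```

With noise-free correspondences the fit recovers the transform exactly; with real marker localisation error it minimises the residual in the least-squares sense, which is what the sub-millimetre fiducial-fusion deviations above reflect.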
Infrared and visible image fusion based on total variation and augmented Lagrangian.
Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi
2017-11-01
This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients in the corresponding visible one to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity from the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and the augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
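For intuition, here is a heavily simplified least-squares analogue of the gradient-transfer idea: minimize ||x - ir||^2 + lam * ||grad x - grad vis||^2 by plain gradient descent, so the result keeps infrared intensity while taking on visible-image gradients. This is a toy substitute for the paper's l1-l1-TV objective and its variable-splitting/augmented-Lagrangian solver; the images, lam, and step size are illustrative:

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def fuse_l2(ir, vis, lam=1.0, step=0.1, iters=500):
    """Gradient descent on the l2 surrogate of gradient-transfer fusion."""
    x = ir.copy()
    for _ in range(iters):
        # objective gradient: (x - ir) - lam * div(grad x - grad vis)
        x -= step * ((x - ir) - lam * (laplacian(x) - laplacian(vis)))
    return x

rng = np.random.default_rng(1)
ir = rng.random((16, 16))    # stand-in infrared image
vis = rng.random((16, 16))   # stand-in visible image
fused = fuse_l2(ir, vis)
```

At convergence the optimality condition (x - ir) = lam * (lap x - lap vis) holds; the l1 version in the paper behaves similarly but preserves edges more sharply.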
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectrum imaging sensors, it has become a trend to improve the performance of dim target detection by fusing the information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, a Gabor filter with multiple directions and a contrast filter with multiple scales are combined to construct a salient graph from the digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from the different spectral images. A top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on clutter background images.
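The maximum-salience fusion and top-hat detection steps above can be sketched directly: salience maps from two spectral bands are fused pixel-wise by taking the maximum, then a white top-hat (image minus its morphological opening, here with a 3x3 flat structuring element built from numpy shifts) isolates small bright targets. The Gabor/contrast salience computation is omitted, and the "salience maps" below are synthetic:

```python
import numpy as np

def _shift_stack(u):
    """All nine 3x3-neighbourhood shifts of u (periodic boundaries)."""
    return np.stack([np.roll(np.roll(u, i, 0), j, 1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1)])

def white_tophat3(u):
    """u minus its grey opening with a flat 3x3 structuring element."""
    eroded = _shift_stack(u).min(axis=0)
    opened = _shift_stack(eroded).max(axis=0)
    return u - opened

sal_a = np.zeros((9, 9)); sal_b = np.zeros((9, 9))
sal_a[4, 4] = 1.0               # dim point target salient in band A only
sal_b[2, 6] = 0.5               # another salient only in band B
fused_salience = np.maximum(sal_a, sal_b)   # maximum-salience fusion
targets = white_tophat3(fused_salience)
```

Both point targets survive the fusion and the top-hat, whichever band they came from, while any extended background structure would be suppressed by the opening.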
Image fusion and navigation platforms for percutaneous image-guided interventions.
Rajagopal, Manoj; Venkatesan, Aradhana M
2016-04-01
Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, making it possible to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve a large amount of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach, which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) how to extract the high-level features of aircraft; 2) locating objects within such a large image is difficult and time consuming; 3) the common problem of multiple resolutions of satellite images remains. In this paper, inspired by the biological visual mechanism, a fusion detection framework is proposed, which fuses the top-down visual mechanism (a deep CNN model) and the bottom-up visual mechanism (GBVS) to detect aircraft. Besides, we use a multi-scale training method for the deep CNN model to solve the problem of multiple resolutions. Experimental results demonstrate that our method can achieve a better detection result than the other methods.
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on finding the best fusion method for satellite images from SPOT, QuickBird, Landsat and so on, but only a few of them discuss the application to GaoFen-1 satellite images. This paper compares four fusion methods, namely the principal component analysis transform, the Brovey transform, the hue-saturation-value transform, and the Gram-Schmidt transform, from the perspective of preserving the original image's spectral information. The experimental results showed that the images produced by the four fusion methods not only retain the high spatial resolution of the panchromatic band but also keep abundant spectral information. Through comparison and evaluation, the Brovey transform integrates information well, but its color fidelity is not the best. The brightness and color distortion of the hue-saturation-value transformed image is the largest. The principal component analysis transform does a good job in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, the edges of vegetation are the most obvious, and the sharpness of its fused image is higher than that of principal component analysis; the Brovey transform is suitable for distinguishing vegetation from non-vegetation areas, and the Gram-Schmidt transform is the most appropriate for GaoFen-1 satellite images covering vegetation and non-vegetation areas. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and the characteristics of the image fusion algorithm.
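Of the four methods compared above, the Brovey transform has the simplest closed form: each fused band is the resampled multispectral band scaled by the ratio of the panchromatic image to the sum of the multispectral bands. A sketch on tiny synthetic arrays:

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey pansharpening. ms: (bands, H, W) multispectral bands already
    resampled to the panchromatic grid; pan: (H, W) panchromatic image.
    Returns (bands, H, W) fused bands; eps guards against division by zero."""
    total = ms.sum(axis=0)
    return ms * (pan / (total + eps))

ms = np.array([[[1.0, 2.0]],        # band 1, a 1x2 toy image
               [[3.0, 2.0]]])       # band 2
pan = np.array([[8.0, 2.0]])
fused = brovey(ms, pan)
```

By construction the fused bands sum to the panchromatic image at every pixel, which injects the pan spatial detail but distorts band ratios, consistent with the color-fidelity limitation noted above.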
Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images
Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni
2018-01-01
Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse these two satellites images to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) images for applications such as damage assessment, border monitoring, etc. that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high quality prediction images. PMID:29614745
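The spatiotemporal fusion problem above can be reduced to its simplest form: given a fine image at time t1 and coarse images at t1 and t2, predict the fine image at t2 by adding the coarse-scale temporal change to the fine base image. This is only the zeroth-order idea underlying STARFM/FSDAF, not either algorithm itself, and all images here are synthetic:

```python
import numpy as np

def upsample(coarse, factor):
    """Nearest-neighbour upsampling by pixel replication."""
    return np.kron(coarse, np.ones((factor, factor)))

def predict_fine(fine_t1, coarse_t1, coarse_t2, factor):
    """Fine image at t2 = fine base + upsampled coarse temporal change."""
    return fine_t1 + upsample(coarse_t2 - coarse_t1, factor)

fine_t1 = np.arange(16.0).reshape(4, 4)          # 2-m-like fine image at t1
coarse_t1 = np.array([[1.0, 2.0], [3.0, 4.0]])   # coarse image at t1
coarse_t2 = coarse_t1 + 0.5                      # uniform brightening by t2
fine_t2 = predict_fine(fine_t1, coarse_t1, coarse_t2, factor=2)
```

STARFM and FSDAF refine this by weighting spectrally and spatially similar neighbours and handling land-cover change, rather than applying the coarse difference uniformly per block.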
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared. In addition, the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss, retained noise characteristics, and did not yield the most significant noise reduction performance. Conversely, an image fusion method applying the SRAD-original condition preserved the key information in the original image while the speckle noise was removed. Based on these characteristics, the SRAD-original input condition showed the best denoising performance on the ultrasound images. From this study, the proposed denoising technique was confirmed to have high potential for clinical application.
NASA Astrophysics Data System (ADS)
Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun
2015-07-01
An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between the low-frequency image and the defocused image. Generally, the NSCT algorithm decomposes the detail information of an image, residing at different scales and in different directions, into the bandpass subband coefficients. In order to correctly pick out the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows with different sizes, but also correctly recognizes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image can be obtained by the inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
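A stripped-down multifocus fusion rule in the spirit of the scheme above: instead of NSCT coefficients and multiscale curvature, each output pixel is taken from whichever source image has the larger local gradient energy, a common focus measure. The two partially defocused inputs are synthetic:

```python
import numpy as np

def local_energy(img, k=1):
    """Gradient energy box-filtered over a (2k+1)^2 window (periodic)."""
    gy, gx = np.gradient(img)
    e = gx ** 2 + gy ** 2
    return sum(np.roll(np.roll(e, i, 0), j, 1)
               for i in range(-k, k + 1) for j in range(-k, k + 1))

def fuse_multifocus(a, b):
    """Per-pixel selection of the source with the larger focus measure."""
    return np.where(local_energy(a) >= local_energy(b), a, b)

base = np.tile(np.arange(8.0), (8, 1))   # fully focused "ground truth" ramp
a = base.copy(); a[:, 4:] = base[:, 4:].mean()   # right half defocused (flat)
b = base.copy(); b[:, :4] = base[:, :4].mean()   # left half defocused (flat)
fused = fuse_multifocus(a, b)
```

Each half is recovered from the image in which it is in focus, so the fused result reproduces the fully focused ramp; the NSCT/curvature machinery in the paper plays the same role with far better boundary behaviour on natural images.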
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The endeavor of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to attain a fused image. This process mainly involves two steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among the state-of-the-art multiresolution transforms (MRT), such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Nonsubsampled Contourlet Transform (NSCT), using the maximum selection fusion rule.
Fusion of Geophysical Images in the Study of Archaeological Sites
NASA Astrophysics Data System (ADS)
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different fusion techniques applied to geophysical images from different modalities, in order to combine them into one image with higher information content than either of the two originals independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated at the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of a buried urban structure. In order to accurately locate and map the latter, geophysical measurements were performed using the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration method between the geophysical images in order to fine-register them, correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three different fusion techniques: fusion with mean values, fusion with wavelets (enhancing selected frequency bands), and fusion with curvelets (giving emphasis to specific bands and angles, according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than each of the original geophysical images separately.
The comparison of the results of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image. The resultant image shows clear linear and ellipsoidal features corresponding to potential archaeological relics.
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small-scale details while restoring larger-scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at ...
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-08-13
Here, quasi-optical imaging at sub-THz frequencies has had a major impact on fusion plasma diagnostics. Mm-wave imaging reflectometry utilizes microwaves to actively probe fusion plasmas, inferring the local properties of electron density fluctuations. Electron cyclotron emission imaging is a multichannel radiometer that passively measures the spontaneous emission of microwaves from the plasma to infer local properties of electron temperature fluctuations. These imaging diagnostics work together to diagnose the characteristics of turbulence. Important quantities such as the amplitude and wavenumber of coherent fluctuations, correlation lengths and decorrelation times of turbulence, and the poloidal flow velocity of the plasma are readily inferred.
Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm
NASA Astrophysics Data System (ADS)
Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng
2017-07-01
A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse fused image has an entropy value extremely similar to the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify green land in order to compare the effect of the self-fused SPOT-6 image and the inter-fused image between SPOT-6 and GF-1 based on the NNDiffuse algorithm. The overall accuracy is 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
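Entropy, used above as an information-conservation measure, is simple to compute from the grey-level histogram; a minimal sketch:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an 8-bit image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0*log 0 = 0)
    return float(-(p * np.log2(p)).sum())
```

A fused image whose entropy is close to the original's (as in the 1.849 versus 1.852 comparison above) has preserved roughly the same amount of grey-level information.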
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
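Ideal observer analysis conventionally reports efficiency as the squared ratio of human to ideal sensitivity. A minimal sketch (assuming a yes/no d' estimate from hit and false-alarm rates, which may differ from the paper's exact procedure):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' from hit and false-alarm rates (yes/no task)."""
    z = NormalDist().inv_cdf          # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

def efficiency(dprime_human, dprime_ideal):
    """Human information-processing efficiency relative to the ideal observer."""
    return (dprime_human / dprime_ideal) ** 2
```

Efficiency near 1 means the human extracts nearly all the task-relevant information the stimulus contains; comparing efficiencies across fused and single-band imagery is what the study above does.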
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process, optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem
NASA Astrophysics Data System (ADS)
Zhang, Caiyun
2015-06-01
Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, the 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved, with overall accuracies of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
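The abstract does not detail the ensemble analysis; a plain per-pixel majority vote over the three classifier maps is one common way to realize it (a sketch, not necessarily the paper's exact method):

```python
import numpy as np

def ensemble_vote(*label_maps):
    """Per-pixel majority vote across classifier label maps.

    Ties resolve to the lowest class label (argmax convention).
    """
    stack = np.stack(label_maps)                       # (n_classifiers, H, W)
    n_classes = int(stack.max()) + 1
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                       # (H, W) consensus map
```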
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher-spatial-resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher-resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types, wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. Simulated imagery was made by blurring higher-resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between the fused images and the full-resolution reference. Image examples with TM and SPOT 10-m panchromatic imagery illustrate the reduction in artifacts due to SIDWT-based fusion.
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool, of great benefit to virtual face modeling.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
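The two degradation steps can be sketched with simple block averaging standing in for the (unspecified) degradation operators; the compression ratio then follows directly from the chosen factors, which is why it can be fixed in advance. All factor values below are illustrative:

```python
import numpy as np

def compress_by_degradation(cube, spatial_factor=4, n_ms_bands=8):
    """Split a hyperspectral cube (H, W, B) into a spatially degraded HS
    image and a spectrally degraded MS image (both sent to the ground)."""
    h, w, b = cube.shape
    f = spatial_factor
    # spatial degradation: f x f block average per band
    lr_hs = cube.reshape(h // f, f, w // f, f, b).mean(axis=(1, 3))
    # spectral degradation: average groups of adjacent bands
    g = b // n_ms_bands
    hr_ms = cube.reshape(h, w, n_ms_bands, g).mean(axis=3)
    return lr_hs, hr_ms

def compression_ratio(h, w, b, f=4, m=8):
    """Ratio of original to transmitted samples, fixed by the factors alone."""
    return (h * w * b) / ((h // f) * (w // f) * b + h * w * m)
```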
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, a research hot spot nowadays in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as heavy data storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling, without prior knowledge, to accurately reconstruct the image, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients can be obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized; thus only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute-maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Eventually, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
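The regional energy weighting of low-frequency coefficients can be sketched as follows (the window radius and the exact weighting form are assumptions, not taken from the paper):

```python
import numpy as np

def regional_energy(coeff, radius=1):
    """Sum of squared coefficients in a sliding window around each position."""
    pad = np.pad(coeff.astype(float) ** 2, radius, mode="reflect")
    h, w = coeff.shape
    out = np.zeros((h, w))
    for di in range(2 * radius + 1):
        for dj in range(2 * radius + 1):
            out += pad[di:di + h, dj:dj + w]
    return out

def fuse_low_freq(c_ir, c_vis, radius=1):
    """Adaptive regional-energy weighting of two low-frequency subbands."""
    e1, e2 = regional_energy(c_ir, radius), regional_energy(c_vis, radius)
    w1 = e1 / (e1 + e2 + 1e-12)       # weight follows local energy share
    return w1 * c_ir + (1 - w1) * c_vis
```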
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require registration before fusion because two separate cameras are used, yet the performance of registration techniques still leaves room for improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: by using a beam-splitter prism, the coaxial light incident from the same lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied, along with the process of signal acquisition and fusion. A simulation experiment, covering the entire process of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging-effect model. Additionally, quality evaluation indices are adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Kang, Tae Wook; Song, Kyoung Doo; Kim, Mimi; Kim, Seung Soo; Kim, Seong Hyun; Ha, Sang Yun
2017-01-01
Objective To assess whether contrast-enhanced ultrasonography (CEUS) with Sonazoid can improve lesion conspicuity and the feasibility of percutaneous biopsies for focal hepatic lesions invisible on fusion imaging of real-time ultrasonography (US) with computed tomography/magnetic resonance images, and to evaluate its impact on clinical decision making. Materials and Methods The Institutional Review Board approved this retrospective study. Between June 2013 and January 2015, 711 US-guided percutaneous biopsies were performed for focal hepatic lesions. Biopsies were performed using CEUS for guidance if lesions were invisible on fusion imaging. We retrospectively evaluated the number of target lesions initially invisible on fusion imaging that became visible after applying CEUS, using a 4-point scale. Technical success rates of the biopsies were evaluated based on histopathological results. In addition, the occurrence of changes in clinical decision making was assessed. Results Among the 711 patients, 16 patients (2.3%) were included in the study. The median size of the target lesions was 1.1 cm (range, 0.5–1.9 cm) on pre-procedural imaging. After CEUS, 15 of 16 (93.8%) focal hepatic lesions were visualized. The conspicuity score was significantly increased after adding CEUS, as compared to that on fusion imaging (p < 0.001). The technical success rate of biopsy was 87.5% (14/16). After biopsy, there were changes in clinical decision making for 11 of 16 patients (68.8%). Conclusion The addition of CEUS could improve the conspicuity of focal hepatic lesions invisible on fusion imaging. This dual guidance using CEUS and fusion imaging may affect patient management via changes in clinical decision making. PMID:28096725
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
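As a baseline for the reconstruction-based family discussed above (not the paper's matrix-completion method), a weighted-average patch-based label fusion with simple Gaussian similarity weights might look like (names and the sigma value are illustrative):

```python
import numpy as np

def patch_weights(target_patch, atlas_patches, sigma=10.0):
    """Similarity weights from squared patch distances (Gaussian kernel)."""
    d = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-d / (2 * sigma ** 2))
    return w / w.sum()                       # normalize to a convex combination

def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=10.0):
    """Weighted average of atlas labels, thresholded to a binary decision."""
    w = patch_weights(target_patch, atlas_patches, sigma)
    return int((w * atlas_labels).sum() >= 0.5)
```

Atlases whose patches resemble the target patch dominate the vote; dissimilar atlases are effectively ignored.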
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Considering that most multi-focus image fusion algorithms are not particularly aimed at large numbers of source images, and that the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing artifacts, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing application. Generally used indices require a high spatial resolution multispectral (MS) image for reference, which is not always readily available. Meanwhile, the fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). As an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index at the highest resolution without a reference MS image using Gaussian Scale Space (GSS) technology that could simulate the HVS. The spatial details and spectral information of original and fused images are first separated in GSS, and the qualities are evaluated using the proposed spatial and spectral quality index respectively. The overall quality is determined without a reference MS image by a combination of the proposed two indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation compared with other widely used indices that may or may not require reference images.
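The separation of an image into a smooth base (carrying spectral information) and a detail residual (carrying spatial information) via Gaussian filtering can be sketched as follows (the sigma value and separable-convolution implementation are assumptions, not the paper's exact construction):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gss_split(img, sigma=2.0):
    """One scale-space level: smooth base plus detail residual.

    base + detail reconstructs the input exactly.
    """
    k = gaussian_kernel(sigma)
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                               img.astype(float))
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return blur, img - blur
```

Spectral quality can then be judged on the base components and spatial quality on the residuals, as in the index described above.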
Image Registration Workshop Proceedings
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline (Editor)
1997-01-01
Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being and will continue to be generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work which has been grouped in four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP), which provides improved spectral information, reduces data redundancy and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterative processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
Rennert, J; Georgieva, M; Schreyer, A G; Jung, W; Ross, C; Stroszczynski, C; Jung, E M
2011-01-01
To evaluate whether image fusion of contrast-enhanced ultrasound (CEUS) with CT or MRI affects the diagnosis and characterization of liver lesions, or the therapeutic strategy of surgical or interventional procedures, compared to the preliminary diagnosis. In a retrospective study, the image fusion scans of CEUS with contrast-enhanced CT or MRI of 100 patients (71 male, mean age 59 years, range 0.3-85 years) with benign or malignant liver lesions were evaluated. Fundamental B-scan, color Doppler imaging and CEUS were performed in all patients by an experienced examiner using a multifrequency convex transducer (1-5 MHz, LOGIQ 9/GE) and volume navigation (Vnav). After bolus injections of up to 2.4 ml SonoVue® (BRACCO, Italy), digital raw data were stored as cine-loops of up to 5 min. In 74 patients, CEUS was fused with a pre-existing ceCT; in 26 patients a ceMRI was used. In all 100 patients (100%) the image quality in all modalities (ceCT, ceMRI and CEUS) was excellent or had only minor diagnostic limitations. Regarding the number of lesions revealed by image fusion of CEUS/ceCT/ceMRI and by the preceding diagnostic method, concordant results were found in 84 patients. In 12 patients, additional lesions were found using fusion imaging, subsequently causing a change of the therapeutic strategy. In 15 out of 21 patients with either concordant or discordant results regarding the number of lesions, image fusion allowed a definite diagnosis due to continuous documentation of the microcirculation of the tumor and its contrast enhancement. A significant association (p < 0.05) was found between image fusion of CEUS with either ceCT or ceMRI and a subsequent change of therapeutic strategy. Image fusion with volume navigation (Vnav) of CEUS with ceCT or ceMRI frequently allows a definite localization and diagnosis of hepatic lesions in patients with primary hepatic carcinoma or metastatic disease. This may cause a change of the therapeutic strategy in many patients with hepatic lesions.
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying a single appropriate segmentation fusion criterion providing the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
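One of the two criteria combined above, the F-measure, is straightforward to compute for a pair of binary maps; a minimal sketch:

```python
import numpy as np

def f_measure(seg, truth, beta=1.0):
    """Precision/recall F-measure between two binary region or boundary maps."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    tp = (seg & truth).sum()
    if tp == 0:
        return 0.0
    precision = tp / seg.sum()        # fraction of predicted pixels that are correct
    recall = tp / truth.sum()         # fraction of true pixels that are recovered
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```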
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors, which are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
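Because the DCT is linear and orthonormal, weighting DCT coefficients and inverting is easy to sketch; here a fixed weight stands in for the PSO-optimized one (a sketch under that assumption, with illustrative names):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ C.T == I)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] /= np.sqrt(2)
    return c

def fuse_dct(a, b, w):
    """Fuse two equal-size square images with weight w on DCT coefficients."""
    C = dct_matrix(a.shape[0])
    A = C @ a @ C.T                   # forward 2D DCT of each image
    B = C @ b @ C.T
    F = w * A + (1 - w) * B           # weighted coefficient fusion
    return C.T @ F @ C                # inverse of an orthonormal transform
```

In a PSO-based scheme, w (or a per-band set of weights) would be the particle position, scored by a fusion-quality metric.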
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel
2016-03-01
Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and in assessing the efficacy of cancer treatment. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images, and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83+/-0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, the fusion of kinetic image features computed from global breast MR images can also generate a useful clinical marker for predicting the efficacy of chemotherapy.
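The AUC reported above can be computed directly from classifier scores via the rank (Mann-Whitney) formulation; a minimal sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive (e.g. CR) scores
    higher than a random negative (e.g. PR); ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```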
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
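The two fusion rules named above can be sketched in isolation, assuming the RPCA decomposition has already produced the sparse and low-rank components; the block size is an assumption, and the compressed-sensing measurement and FCLALM reconstruction steps are omitted.

```python
import numpy as np

def fuse_sparse_sd(s1, s2, block=8):
    """SD-based rule: per block, keep the sparse component (salient features)
    with the higher standard deviation."""
    out = np.empty_like(s1)
    h, w = s1.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b1 = s1[i:i + block, j:j + block]
            b2 = s2[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = b1 if b1.std() >= b2.std() else b2
    return out

def fuse_lowrank_maxabs(l1, l2):
    """Max-absolute rule: pick, per pixel, the low-rank (background)
    coefficient with the larger magnitude."""
    return np.where(np.abs(l1) >= np.abs(l2), l1, l2)
```

The fused image would then be the superposition of the two fused components, as the abstract describes.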
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. A three-level fusion strategy based on the visual attention mechanism and the driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. An experimental method for radar-vision point alignment is put forward that is easy to operate and requires neither radar reflection intensity nor special tools. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. The experimental results show that the proposed method is simple and feasible. PMID:22164117
Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.
NASA Astrophysics Data System (ADS)
Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.
2016-04-01
The segmentation of anatomical and pathological structures plays a key role in characterizing clinically relevant evidence in digital images. Recently, plenoptic imaging has emerged as a promising way to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to use those images to improve segmentation quality relative to single-image segmentation. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.
[An improved low spectral distortion PCA fusion method].
Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong
2013-10-01
Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low spectral distortion PCA fusion method. The method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory; these masks cut the hyperspectral image and the high-resolution image into sub-region objects. All corresponding sub-region objects between the hyperspectral image and the high-resolution image are fused using the PCA method, and the sub-region fusion results are spliced together to produce a new image. In the experiments, Hyperion hyperspectral data and RapidEye data were used. The results show that the proposed method matches traditional PCA fusion in enhancing spatial resolution while offering greater spectral fidelity.
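For context, the classic PCA pan-sharpening step that the proposed method applies per sub-region can be sketched as follows; matching the panchromatic band to the first principal component's mean and standard deviation before substitution is a common convention, not necessarily the paper's exact procedure.

```python
import numpy as np

def pca_fuse(ms, pan):
    """Classic PCA pan-sharpening: substitute the first principal component
    of the multispectral bands with the (statistically matched) pan band."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # eigendecomposition of the band covariance matrix, descending order
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]
    pcs = Xc @ vecs
    # match the pan band to PC1's mean/std before substituting it for PC1
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    return (pcs @ vecs.T + mean).reshape(h, w, b)
```

The improvement described in the abstract amounts to running this substitution separately inside each NCUT-derived sub-region rather than over the whole scene.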
Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi; Beppu, Takaaki; Inoue, Takashi; Takahashi, Tsutomu; Matsuda, Koichi; Takahashi, Yujiro; Fujiwara, Shunrou; Ogawa, Akira
2008-09-01
A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma.
NASA Astrophysics Data System (ADS)
Liu, Chunhui; Zhang, Duona; Zhao, Xintao
2018-03-01
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for SAR images. We extract four features of the SAR image (intensity, orientation, uniqueness, and global contrast) as the input of the MSD model. The saliency map is generated by multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of features at different scales is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to SAR and color optical image fusion. The experimental results on real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image are highlighted in the fusion results.
NASA Astrophysics Data System (ADS)
Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen
2014-02-01
High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
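The baseline GIHS fusion that the paper builds on can be sketched in a few lines; the proposed spectral-modulation and Gaussian edge-restoration steps are not reproduced here.

```python
import numpy as np

def gihs_fuse(ms, pan):
    """Generalized IHS (GIHS) fusion: inject the spatial detail (Pan - I)
    into every multispectral band, where I is the mean of the MS bands.

    ms: (H, W, B) multispectral image; pan: (H, W) panchromatic image."""
    intensity = ms.mean(axis=2)           # I component
    detail = pan - intensity              # spatial detail to inject
    return ms + detail[:, :, None]        # same detail added to each band
```

Because the same detail term is added to all bands regardless of their saturation, this baseline is exactly where the spectral distortion analyzed in the abstract originates.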
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
Henninger, B.; Putzer, D.; Kendler, D.; Uprimny, C.; Virgolini, I.; Gunsilius, E.; Bale, R.
2012-01-01
Aim. The purpose of this study was to evaluate the accuracy of 2-deoxy-2-[fluorine-18]fluoro-D-glucose (FDG) positron emission tomography (PET), computed tomography (CT), and software-based image fusion of both modalities in the imaging of non-Hodgkin's lymphoma (NHL) and Hodgkin's disease (HD). Methods. 77 patients with NHL (n = 58) or HD (n = 19) underwent a FDG PET scan, a contrast-enhanced CT, and a subsequent digital image fusion during initial staging or followup. 109 examinations of each modality were evaluated and compared to each other. Conventional staging procedures, other imaging techniques, laboratory screening, and follow-up data constituted the reference standard for comparison with image fusion. Sensitivity and specificity were calculated for CT and PET separately. Results. Sensitivity and specificity for detecting malignant lymphoma were 90% and 76% for CT and 94% and 91% for PET, respectively. A lymph node region-based analysis (comprising 14 defined anatomical regions) revealed a sensitivity of 81% and a specificity of 97% for CT and 96% and 99% for FDG PET, respectively. Only three of 109 image fusion findings needed further evaluation (false positive). Conclusion. Digital fusion of PET and CT improves the accuracy of staging, restaging, and therapy monitoring in patients with malignant lymphoma and may reduce the need for invasive diagnostic procedures. PMID:22654631
Molecular imaging of malignant tumor metabolism: whole-body image fusion of DWI/CT vs. PET/CT.
Reiner, Caecilia S; Fischer, Michael A; Hany, Thomas; Stolzmann, Paul; Nanz, Daniel; Donati, Olivio F; Weishaupt, Dominik; von Schulthess, Gustav K; Scheffel, Hans
2011-08-01
To prospectively investigate the technical feasibility and performance of image fusion for whole-body diffusion-weighted imaging (wbDWI) and computed tomography (CT) to detect metastases, using hybrid positron emission tomography/computed tomography (PET/CT) as the reference standard. Fifty-two patients (60 ± 14 years; 18 women) with different malignant tumor diseases examined by PET/CT for clinical reasons consented to undergo additional wbDWI at 1.5 Tesla. WbDWI was performed using diffusion-weighted single-shot echo-planar imaging during free breathing. Images at b = 0 s/mm(2) and b = 700 s/mm(2) were acquired and apparent diffusion coefficient (ADC) maps were generated. Image fusion of wbDWI and CT (from the PET/CT scan) was performed, yielding wbDWI/CT fused image data. One radiologist rated the success of image fusion and diagnostic image quality. The presence or absence of metastases on wbDWI/CT fused images was evaluated together with the separate wbDWI and CT images by two independent radiologists blinded to the results from PET/CT. The detection rate and positive predictive value for diagnosing metastases were calculated. PET/CT examinations were used as the reference standard. PET/CT identified 305 malignant lesions in 39 of 52 (75%) patients. WbDWI/CT image fusion was technically successful and yielded diagnostic image quality in 73% and 92% of patients, respectively. Interobserver agreement for the evaluation of wbDWI/CT images was κ = 0.78. WbDWI/CT identified 270 metastases in 43 of 52 (83%) patients. The overall detection rate and positive predictive value of wbDWI/CT were 89% (95% CI, 0.85-0.92) and 94% (95% CI, 0.92-0.97), respectively. WbDWI/CT image fusion is technically feasible in a clinical setting and allows the diagnostic assessment of metastatic tumor disease, detecting nine of 10 lesions as compared with PET/CT. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
Improving Echo-Guided Procedures Using an Ultrasound-CT Image Fusion System.
Diana, Michele; Halvax, Peter; Mertz, Damien; Legner, Andras; Brulé, Jean-Marcel; Robinet, Eric; Mutter, Didier; Pessaux, Patrick; Marescaux, Jacques
2015-06-01
Image fusion between ultrasound (US) and computed tomography (CT) or magnetic resonance imaging can increase operator accuracy in targeting liver lesions, particularly when those are undetectable with US alone. We have developed a modular gel to simulate hepatic solid lesions for educational purposes in imaging and minimally invasive ablation techniques. We aimed to assess the impact of image fusion in targeting artificial hepatic lesions during the hands-on part of 2 courses (basic and advanced) in hepatobiliary surgery. Under US guidance, 10 fake tumors of various sizes were created in the livers of 2 pigs by percutaneous injection of a biocompatible gel engineered to be hyperdense on CT scanning and barely detectable on US. A CT scan was obtained and a CT-US image fusion was performed using the ACUSON S3000 US system (Siemens Healthcare, Germany). A total of 12 blinded course attendants were asked in turn to perform a 10-minute liver scan with US alone followed by a 10-minute scan using image fusion. Using US alone, the expert managed to identify all lesions successfully. The true positive rate for course attendants with US alone was 14/36 and 2/24 in the advanced and basic courses, respectively. The total number of false positives identified was 26. With image fusion, the rate of true positives significantly increased to 31/36 (P < .001) in the advanced group and 16/24 in the basic group (P < .001). The total number of false positives, considering all participants, decreased to 4 (P < .001). Image fusion significantly increases accuracy in targeting hepatic lesions and might improve echo-guided procedures. © The Author(s) 2015.
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won the first place of the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight-Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
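A minimal sketch of the intensity-weighted voting baseline the abstract describes (not the authors' joint label fusion, which additionally models correlated errors across atlases); the Gaussian similarity kernel and its sigma are assumptions, and registration is assumed to have already been applied.

```python
import numpy as np

def weighted_vote(atlas_labels, atlas_images, target, sigma=0.1):
    """Locally weighted voting: each registered atlas votes for its label at
    each voxel, weighted by a Gaussian of its intensity difference to the
    target image. Returns the consensus label map."""
    labels = np.unique(np.concatenate([np.unique(l) for l in atlas_labels]))
    votes = np.zeros((len(labels),) + target.shape)
    for img, lab in zip(atlas_images, atlas_labels):
        # spatially varying weight: high where the atlas resembles the target
        w = np.exp(-((img - target) ** 2) / (2 * sigma ** 2))
        for k, c in enumerate(labels):
            votes[k] += w * (lab == c)
    return labels[np.argmax(votes, axis=0)]
```

The limitation the abstract points out is visible here: each atlas's weight `w` is computed independently, so atlases that make the same mistake reinforce each other.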
Goreczny, Sebastian; Dryzek, Pawel; Morgan, Gareth J; Lukaszewski, Maciej; Moll, Jadwiga A; Moszura, Tomasz
2017-08-01
We report initial experience with novel three-dimensional (3D) image fusion software for guidance of transcatheter interventions in congenital heart disease. Developments in fusion imaging have facilitated the integration of 3D roadmaps from computed tomography or magnetic resonance imaging datasets. The latest software allows live fusion of two-dimensional (2D) fluoroscopy with pre-registered 3D roadmaps. We reviewed all cardiac catheterizations guided by this software (Philips VesselNavigator). Pre-catheterization imaging and catheterization data were collected, focusing on fusion of the 3D roadmap, intervention guidance, and contrast and radiation exposure. From 09/2015 until 06/2016, VesselNavigator was applied in 34 patients for guidance (n = 28) or planning (n = 6) of cardiac catheterization. In all 28 patients successful 2D-3D registration was performed. Bony structures combined with the cardiovascular silhouette were used for fusion in 26 patients (93%), calcifications in 9 (32%), previously implanted devices in 8 (29%) and low-volume contrast injection in 7 patients (25%). Accurate initial 3D roadmap alignment was achieved in 25 patients (89%). Six patients (22%) required realignment during the procedure due to distortion of the anatomy after introduction of stiff equipment. Overall, VesselNavigator was applied successfully in 27 patients (96%) without any complications related to 3D image overlay. VesselNavigator was useful in guiding nearly all cardiac catheterizations. The combination of anatomical markers and low-volume contrast injections allowed reliable 2D-3D registration in the vast majority of patients.
Facial expression recognition under partial occlusion based on fusion of global and local features
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji
2018-04-01
Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, Principal Component Analysis (PCA) is adopted to reconstruct the occluded region of the image. A replacement strategy then reconstructs the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, after which the Pyramid Weber Local Descriptor (PWLD) feature is extracted. Finally, the outputs of an SVM are fitted to the probabilities of the target class using a sigmoid function. In the local aspect, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and Chi-square distance and similar-block summation methods are applied to obtain the class probabilities. Finally, fusion at the decision level combines the global and local results based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
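The decision-level step can be illustrated with Dempster's rule of combination, restricted here to singleton hypotheses (one mass per expression class) for simplicity; the mass values in the test are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over singleton
    hypotheses (dicts mapping class -> mass): pointwise product of agreeing
    masses, renormalized by (1 - conflict)."""
    conflict = sum(m1.get(i, 0.0) * m2.get(j, 0.0)
                   for i in m1 for j in m2 if i != j)
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    keys = set(m1) | set(m2)
    return {k: m1.get(k, 0.0) * m2.get(k, 0.0) / (1.0 - conflict)
            for k in keys}
```

In the framework above, `m1` and `m2` would come from the global (PWLD/SVM) and local (block-based WLD) classifiers, and the fused masses decide the expression.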
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
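The two component-fusion rules can be sketched as pure selection rules (the weighted-average branch of the strategy is omitted); the window size and the use of a uniform filter for the local statistics are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imf_local_energy(imf_a, imf_b, win=3):
    """High-frequency rule: choose, per pixel, the IMF coefficient from the
    source whose local regional energy is larger."""
    e_a = uniform_filter(imf_a ** 2, size=win)
    e_b = uniform_filter(imf_b ** 2, size=win)
    return np.where(e_a >= e_b, imf_a, imf_b)

def fuse_residue_gray_diff(res_a, res_b, win=3):
    """Low-frequency rule: select by local average gray-level difference,
    i.e. how much each residue deviates from its local mean."""
    d_a = np.abs(res_a - uniform_filter(res_a, size=win))
    d_b = np.abs(res_b - uniform_filter(res_b, size=win))
    pick_a = uniform_filter(d_a, size=win) >= uniform_filter(d_b, size=win)
    return np.where(pick_a, res_a, res_b)
```

The fused image is then obtained by summing the fused IMF and residue components, i.e. the inverse BEMD transform described in the abstract.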
Nakajo, Kazuya; Tatsumi, Mitsuaki; Inoue, Atsuo; Isohashi, Kayako; Higuchi, Ichiro; Kato, Hiroki; Imaizumi, Masao; Enomoto, Takayuki; Shimosegawa, Eku; Kimura, Tadashi; Hatazawa, Jun
2010-02-01
We compared the diagnostic accuracy of fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) and PET/magnetic resonance imaging (MRI) fusion images for gynecological malignancies. A total of 31 patients with gynecological malignancies were enrolled. FDG-PET images were fused to CT, T1- and T2-weighted images (T1WI, T2WI). PET-MRI fusion was performed semiautomatically. We performed three types of evaluation to demonstrate the usefulness of PET/MRI fusion images in comparison with that of inline PET/CT as follows: depiction of the uterus and the ovarian lesions on CT or MRI mapping images (first evaluation); additional information for lesion localization with PET and mapping images (second evaluation); and the image quality of fusion on interpretation (third evaluation). For the first evaluation, the score for T2WI (4.68 +/- 0.65) was significantly higher than that for CT (3.54 +/- 1.02) or T1WI (3.71 +/- 0.97) (P < 0.01). For the second evaluation, the scores for the localization of FDG accumulation showing that T2WI (2.74 +/- 0.57) provided significantly more additional information for the identification of anatomical sites of FDG accumulation than did CT (2.06 +/- 0.68) or T1WI (2.23 +/- 0.61) (P < 0.01). For the third evaluation, the three-point rating scale for the patient group as a whole demonstrated that PET/T2WI (2.72 +/- 0.54) localized the lesion significantly more convincingly than PET/CT (2.23 +/- 0.50) or PET/T1WI (2.29 +/- 0.53) (P < 0.01). PET/T2WI fusion images are superior for the detection and localization of gynecological malignancies.
Goudeketting, Seline R; Heinen, Stefan G; van den Heuvel, Daniel A; van Strijen, Marco J; de Haan, Michiel W; Slump, Cornelis H; de Vries, Jean-Paul P
2018-02-01
The effect of the insertion of guidewires and catheters on the fusion accuracy of the three-dimensional (3D) image fusion technique during iliac percutaneous transluminal angioplasty (PTA) procedures has not yet been investigated. Technical validation of the 3D fusion technique was performed in 11 patients with common and/or external iliac artery lesions. A preprocedural contrast-enhanced magnetic resonance angiogram (CE-MRA) was segmented and manually registered to a cone-beam computed tomography image created at the beginning of the procedure for each patient. The treating physician visually scored the fusion accuracy (i.e., accurate [<2 mm], mismatch [2-5 mm], or inaccurate [>5 mm]) of the entire vasculature of the overlay with respect to the digital subtraction angiography (DSA) directly after the first obtained DSA. Contours of the vasculature of the fusion images and DSAs were drawn after the procedure. The cranial-caudal, lateral-medial, and absolute displacement were calculated between the vessel centerlines. To determine the influence of the catheters, displacement of the catheterized iliac trajectories was compared with that of the noncatheterized trajectories. Electronic databases were systematically searched for available literature published between January 2010 and August 2017. The mean registration error for all iliac trajectories (N.=20) was small (4.0±2.5 mm). No significant difference in fusion displacement was observed between catheterized (N.=11) and noncatheterized (N.=9) iliac arteries. The systematic literature search yielded 2 manuscripts with a total of 22 patients. The methodological quality of these studies was poor (≤11 MINORS score), mainly due to the lack of a control group. Accurate image fusion based on preprocedural CE-MRA is possible and could potentially be of help in iliac PTA procedures.
The flexible guidewires and angiographic catheters, routinely used during endovascular procedures of iliac arteries, did not cause significant displacement that influenced the image fusion. Current literature on 3D image fusion in iliac PTA procedures is of limited methodological quality.
Steel, Morrison M.; Papsin, Blake C.; Gordon, Karen A.
2015-01-01
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., one vs. two sounds) from their bilateral implants and if this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of the auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing. PMID:25668423
NASA Astrophysics Data System (ADS)
Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel
2008-03-01
Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four and eight angle acquisitions.
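The Gaussian-filter-based content weighting proposed above as a faster alternative to entropy weighting might look roughly like this; using band-pass energy as the per-pixel sharpness measure and the sigma value are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def content_weighted_fuse(views, sigma=2.0):
    """Content-based fusion of co-registered views: weight each view per
    pixel by its local contrast (Gaussian band-pass energy), so sharp
    regions dominate and blurred regions are suppressed, then blend."""
    weights = []
    for v in views:
        detail = v - gaussian_filter(v, sigma)       # high-pass content
        weights.append(gaussian_filter(detail ** 2, sigma) + 1e-12)
    weights = np.array(weights)
    weights /= weights.sum(axis=0)                   # normalize per pixel
    return (weights * np.array(views)).sum(axis=0)
```

Because every step is a Gaussian convolution, this runs much faster than computing a local entropy window at each pixel, which is the speed advantage the abstract claims.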
Operational data fusion framework for building frequent Landsat-like imagery in a cloudy region
USDA-ARS?s Scientific Manuscript database
An operational data fusion framework is built to generate dense time-series Landsat-like images for a cloudy region by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is integrated in ...
Aperture tolerances for neutron-imaging systems in inertial confinement fusion.
Ghilea, M C; Sangster, T C; Meyerhofer, D D; Lerche, R A; Disdier, L
2008-02-01
Neutron-imaging systems are being considered as an ignition diagnostic for the National Ignition Facility (NIF) [Hogan et al., Nucl. Fusion 41, 567 (2001)]. Given the importance of these systems, a neutron-imaging design tool is being used to quantify the effects of aperture fabrication and alignment tolerances on reconstructed neutron images for inertial confinement fusion. The simulations indicate that alignment tolerances of more than 1 mrad would introduce measurable features in a reconstructed image for both pinholes and penumbral aperture systems. These simulations further show that penumbral apertures are several times less sensitive to fabrication errors than pinhole apertures.
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
Mortezavi, Ashkan; Märzendorfer, Olivia; Donati, Olivio F; Rizzi, Gianluca; Rupp, Niels J; Wettstein, Marian S; Gross, Oliver; Sulser, Tullio; Hermanns, Thomas; Eberli, Daniel
2018-02-21
We evaluated the diagnostic accuracy of multiparametric magnetic resonance imaging and multiparametric magnetic resonance imaging/transrectal ultrasound fusion guided targeted biopsy against that of transperineal template saturation prostate biopsy to detect prostate cancer. We retrospectively analyzed the records of 415 men who consecutively presented for prostate biopsy between November 2014 and September 2016 at our tertiary care center. Multiparametric magnetic resonance imaging was performed using a 3 Tesla device without an endorectal coil, followed by transperineal template saturation prostate biopsy with the BiopSee® fusion system. Additional fusion guided targeted biopsy was done in men with a suspicious lesion on multiparametric magnetic resonance imaging, defined as Likert score 3 to 5. Any Gleason pattern 4 or greater was defined as clinically significant prostate cancer. The detection rates of multiparametric magnetic resonance imaging and fusion guided targeted biopsy were compared with the detection rate of transperineal template saturation prostate biopsy using the McNemar test. We obtained a median of 40 (range 30 to 55) and 3 (range 2 to 4) transperineal template saturation prostate biopsy and fusion guided targeted biopsy cores, respectively. Of the 124 patients (29.9%) without a suspicious lesion on multiparametric magnetic resonance imaging, 32 (25.8%) were found to have clinically significant prostate cancer on transperineal template saturation prostate biopsy. Of the 291 patients (70.1%) with a Likert score of 3 to 5, clinically significant prostate cancer was detected in 129 (44.3%) by multiparametric magnetic resonance imaging fusion guided targeted biopsy, in 176 (60.5%) by transperineal template saturation prostate biopsy and in 187 (64.3%) by the combined approach. Overall, 58 cases (19.9%) of clinically significant prostate cancer would have been missed if fusion guided targeted biopsy had been performed exclusively.
The sensitivity of multiparametric magnetic resonance imaging and fusion guided targeted biopsy for clinically significant prostate cancer was 84.6% and 56.7% with a negative likelihood ratio of 0.35 and 0.46, respectively. Multiparametric magnetic resonance imaging alone should not be performed as a triage test due to a substantial number of false-negative cases with clinically significant prostate cancer. Systematic biopsy outperformed fusion guided targeted biopsy. Therefore, it will remain crucial in the diagnostic pathway of prostate cancer. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Xia, Jun; He, Pin; Cai, Xiaodong; Zhang, Doudou; Xie, Ni
2017-10-15
Electrode position after deep brain stimulation (DBS) for Parkinson's disease (PD) needs to be confirmed, but there are concerns about the risk of postoperative magnetic resonance imaging (MRI) after DBS. These issues could be avoided by fusion images obtained from preoperative MRI and postoperative computed tomography (CT). This study aimed to investigate image fusion technology for displaying the position of the electrodes compared with postoperative MRI. This was a retrospective study of 32 patients with PD treated with bilateral subthalamic nucleus (STN) DBS between April 2015 and March 2016. The postoperative (same day) CT and preoperative MRI were fused using the Elekta Leksell 10.1 planning workstation (Elekta Instruments, Stockholm, Sweden). The position of the electrodes was compared between the fusion images and postoperative 1-2-week MRI. The position of the electrodes was highly correlated between the fusion and postoperative MRI (all r between 0.865 and 0.996; all P<0.001). The differences in the left electrode position in the lateral and vertical planes were significant between the two methods (0.30 and 0.24 mm, respectively; both P<0.05), but there were no significant differences for the other electrode and planes (all P>0.05). The position of the electrodes was highly correlated between the fusion and postoperative MRI. The CT-MRI fusion images could be used to avoid the potential risks of MRI after DBS in patients with PD. Copyright © 2017. Published by Elsevier B.V.
A multi-focus image fusion method via region mosaicking on Laplacian pyramids
Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong
2018-01-01
In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images that are captured by a microscope. First, the Sum-Modified-Laplacian is applied to measure the focus of the multi-focus images. Then the density-based region growing algorithm is utilized to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid to supervise region mosaicking on a Laplacian pyramid. The region level pyramid keeps more original information than the pixel level. The experiment results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and can reduce the color distortion of the fused images on two datasets. PMID:29771912
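The RMLP pipeline rests on the Sum-Modified-Laplacian (SML) focus measure. A minimal numpy sketch of SML, paired with a naive per-pixel selection baseline for comparison (the paper's region-growing and pyramid-mosaicking stages are omitted, and the function names are illustrative):

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Modified Laplacian focus measure:
    |2*I(x,y) - I(x-s,y) - I(x+s,y)| + |2*I(x,y) - I(x,y-s) - I(x,y+s)|."""
    s = step
    p = np.pad(np.asarray(img, dtype=float), s, mode="edge")
    c = p[s:-s, s:-s]                       # the original image
    return (np.abs(2 * c - p[:-2 * s, s:-s] - p[2 * s:, s:-s]) +
            np.abs(2 * c - p[s:-s, :-2 * s] - p[s:-s, 2 * s:]))

def naive_focus_fusion(img_a, img_b):
    """Pick, per pixel, the source with the higher focus measure.
    This is the pixel-level baseline; RMLP instead grows focused regions
    and mosaics them on Laplacian pyramids to avoid seam artifacts."""
    mask = sum_modified_laplacian(img_a) >= sum_modified_laplacian(img_b)
    return np.where(mask, img_a, img_b)
```

A sharp texture yields a large SML response while a defocused (flat) area yields a small one, which is what drives the focused-region segmentation step.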
Multilevel depth and image fusion for human activity detection.
Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng
2013-10-01
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
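The single-layer patch-based voting that this framework improves upon can be sketched in a few lines of numpy; the Gaussian similarity kernel and the function name are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def weighted_vote(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Single-layer nonlocal weighted voting: weight each atlas patch by
    exp(-||target - atlas||^2 / h), then accumulate the normalized weights
    per label and return the label with the largest total weight."""
    target = np.asarray(target_patch, dtype=float)
    dists = np.array([np.sum((target - np.asarray(a, dtype=float)) ** 2)
                      for a in atlas_patches])
    weights = np.exp(-dists / h)
    weights /= weights.sum()
    labels = np.asarray(atlas_labels)
    return max(np.unique(labels), key=lambda l: weights[labels == l].sum())
```

The paper's point is precisely that these image-domain weights are not optimal for the label domain; the dictionary-evolution layers refine them progressively.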
NASA Astrophysics Data System (ADS)
Zhang, Rui; Xin, Binjie
2016-08-01
Yarn density is always considered as the fundamental structural parameter used for the quality evaluation of woven fabrics. The conventional yarn density measurement method is based on one-side analysis. In this paper, a novel density measurement method is developed for yarn-dyed woven fabrics based on a dual-side fusion technique. Firstly, a lab-used dual-side imaging system is established to acquire both face-side and back-side images of woven fabric and the affine transform is used for the alignment and fusion of the dual-side images. Then, the color images of the woven fabrics are transferred from the RGB to the CIE-Lab color space, and the intensity information of the image extracted from the L component is used for texture fusion and analysis. Subsequently, three image fusion methods are developed and utilized to merge the dual-side images: the weighted average method, wavelet transform method and Laplacian pyramid blending method. The fusion efficacy of each method is evaluated by three evaluation indicators and the best of them is selected to do the reconstruction of the complete fabric texture. Finally, the yarn density of the fused image is measured based on the fast Fourier transform, and the yarn alignment image could be reconstructed using the inverse fast Fourier transform. Our experimental results show that the accuracy of density measurement by using the proposed method is close to 99.44% compared with the traditional method and the robustness of this new proposed method is better than that of conventional analysis methods.
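The final FFT step can be illustrated with a toy sketch: given a 1-D brightness profile of the fused fabric image (e.g. its column-wise mean), the dominant spectral peak gives the yarn period. The function name and the profile construction are assumptions for illustration:

```python
import numpy as np

def yarn_spacing_fft(profile, pixels_per_mm):
    """Estimate the dominant yarn period from a 1-D brightness profile
    via the largest non-DC peak of the FFT magnitude spectrum."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    k = 1 + int(np.argmax(spectrum[1:]))   # skip the DC bin
    period_px = len(x) / k                 # pixels per yarn
    yarns_per_cm = 10.0 * pixels_per_mm / period_px
    return period_px, yarns_per_cm
```

Applying the inverse FFT to only the peak frequency (and its harmonics) is what reconstructs the yarn alignment image mentioned in the abstract.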
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. The present GGO recognition methods employ traditional low-level features and system performance improves slowly. Considering the high performance of CNN models in the computer vision field, we proposed an automatic recognition method of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling is performed on multi-views and multi-receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN-model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach to applying deep learning methods to computer-aided analysis of specific CT imaging signs with insufficient labeled images. Graphical abstract We proposed an automatic recognition method of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models in this paper. Our hybrid resampling reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model.
Our method is a promising approach to applying deep learning methods to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
Introduction to clinical and laboratory (small-animal) image registration and fusion.
Zanzonico, Pat B; Nehmeh, Sadek A
2006-01-01
Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.
Pfister, Karin; Schierling, Wilma; Jung, Ernst Michael; Apfelbeck, Hanna; Hennersperger, Christoph; Kasprzak, Piotr M
2016-01-01
To compare standardised 2D ultrasound (US) to the novel ultrasonographic imaging techniques 3D/4D US and image fusion (combined real-time display of B mode and CT scan) for routine measurement of aortic diameter in follow-up after endovascular aortic aneurysm repair (EVAR). 300 measurements were performed on 20 patients after EVAR by one experienced sonographer (3rd degree of the German society of ultrasound (DEGUM)) with a high-end ultrasound machine and a convex probe (1-5 MHz). An internally standardized scanning protocol of the aortic aneurysm diameter in B mode used a so-called leading-edge method. In summary, five different US methods (2D, 3D free-hand, magnetic field tracked 3D - Curefab™, 4D volume sweep, image fusion), each including contrast-enhanced ultrasound (CEUS), were used for measurement of the maximum aortic aneurysm diameter. Standardized 2D sonography was the defined reference standard for statistical analysis. CEUS was used for endoleak detection. Technical success was 100%. In augmented transverse imaging the mean aortic anteroposterior (AP) diameter was 4.0±1.3 cm for 2D US, 4.0±1.2 cm for 3D Curefab™, 3.9±1.3 cm for 4D US and 4.0±1.2 cm for image fusion. The mean differences were below 1 mm (0.2-0.9 mm). Concerning estimation of aneurysm growth, agreement was found between 2D, 3D and 4D US in 19 of the 20 patients (95%). A definitive decision could always be made by image fusion. CEUS was combined with all methods and detected two out of the 20 patients (10%) with an endoleak type II. In one case, endoleak feeding arteries remained unclear with 2D CEUS but could be clearly localized by 3D CEUS and image fusion. Standardized 2D US allows adequate routine follow-up of maximum aortic aneurysm diameter after EVAR. Image fusion enables a definitive statement about aneurysm growth without the need for new CT imaging by combining the postoperative CT scan with real-time B mode in a dual image display.
3D/4D CEUS and image fusion can improve endoleak characterization in selected cases but are not mandatory for routine practice.
NASA Astrophysics Data System (ADS)
Poobalasubramanian, Mangalraj; Agrawal, Anupam
2016-10-01
The presented work proposes fusion of panchromatic and multispectral images in the shearlet domain. The proposed fusion rules rely on regional considerations, which makes the system efficient in terms of spatial enhancement. The luminance-hue-saturation-based color conversion system is utilized to avoid spectral distortions. The proposed fusion method is tested on WorldView-2 and IKONOS datasets and compared against other methodologies; it performs well in terms of both subjective and objective evaluations.
Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A
2014-01-01
To evaluate the ultrasound tissue elasticity imaging by comparison to multimodality imaging using image fusion with Magnetic Resonance Imaging (MRI) and conventional grey scale imaging with additional elasticity-ultrasound in an experimental small-animal squamous-cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey scale including elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3T MR. For image fusion the contrast-enhanced MRI DICOM data set was uploaded in the ultrasonic device which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package (GE Logic E9), that can detect transducers by means of a positioning system. Conventional grey scale and elasticity imaging were integrated in the image fusion examination. After successful registration and image fusion the registered MR images were simultaneously shown with the respective ultrasound sectional plane. Data evaluation was performed using the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba Elasticity score. The colors "red and green" are assigned for an area of soft tissue, "blue" indicates hard tissue. In all cases a successful image fusion and plan registration with MRI and ultrasound imaging including grey scale and elasticity imaging was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm³. 4/12 rats were evaluated with Score I, 5/12 rats were evaluated with Score II, 3/12 rats were evaluated with Score III. There was a close correlation in the fused MRI with existing small necrosis in the tumor. None of the Score II or III lesions was visible by conventional grey scale.
In comparison to image fusion with MRI, ultrasound tissue elasticity imaging enabled a secure differentiation between different tumor tissue areas in our small study group. Therefore, ultrasound tissue elasticity imaging might be used for fast detection of tumor response in the future, whereas conventional grey scale imaging alone could not provide the additional information. By using standard, contrast-enhanced MRI images for reliable and reproducible slice positioning, the strongly user-dependent limitation of ultrasound tissue elasticity imaging may be overcome, especially for a comparison between baseline and follow-up measurements.
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process basically produces a high-resolution image by combining the superior features of a low-resolution spatial image and a high-resolution panchromatic image. Despite its common usage due to its fast computing capability and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause some color distortions, especially when a large number of gray value differences exist among the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause the spectral distortions by assigning weaker weights to them and avoiding a large number of redundancies on the fused image. The experimental database consists of IKONOS images, and the experimental results both visually and statistically prove the enhancement of the proposed algorithm when compared with the several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
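For context, the classic fast-IHS baseline that SA-IHS refines can be sketched as follows (a simplified band-agnostic version; the adaptive per-pixel weighting of SA-IHS is not shown, and the function name is illustrative):

```python
import numpy as np

def fast_ihs_fusion(ms, pan):
    """Classic fast-IHS pan-sharpening: compute the intensity I as the
    per-pixel mean of the multispectral bands and inject the detail
    (Pan - I) into every band.
    ms: (H, W, B) float array, pan: (H, W), co-registered and upsampled."""
    ms = np.asarray(ms, dtype=float)
    intensity = ms.mean(axis=2)
    detail = np.asarray(pan, dtype=float) - intensity
    return ms + detail[..., None]
```

The color distortion the abstract describes comes precisely from injecting the same detail into every band regardless of local gray-value differences; SA-IHS instead down-weights the detail at pixels that would cause spectral distortion.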
Implementing and validating of pan-sharpening algorithms in open-source software
NASA Astrophysics Data System (ADS)
Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco
2017-10-01
Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused enhanced images. The objective of this research is three-fold: to implement in R three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt); to apply these techniques to merging multispectral and panchromatic images from five different images with different spatial resolutions; and finally, to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with the three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion; whereas in the case of the Landsat-8 and Natmur-08 images, the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm. Only for the Landsat-8 image did the GS fusion present the best result. In the evaluation of spectral components, HPF results tended to be better and PCA results worse; the opposite was the case with the spatial components. Better quantitative results are obtained in Landsat-7 and Landsat-8 images with the three fusion methods than with the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative). Significant disagreement may arise when different methodologies are used to assess the quality of an image fusion.
Moreover, it is not possible to designate, a priori, a given algorithm as the best, not only because of the different characteristics of the sensors, but also because of the different atmospheric conditions or peculiarities of the different study areas, among other reasons.
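The ERGAS index used above has a standard closed form, 100·(h/l)·sqrt(mean_k (RMSE_k/μ_k)²), where h/l is the panchromatic-to-multispectral pixel-size ratio. A small numpy sketch, assuming band-last float arrays:

```python
import numpy as np

def ergas(fused, reference, scale_ratio):
    """ERGAS = 100 * (h/l) * sqrt(mean over bands of (RMSE_k / mu_k)^2),
    where mu_k is the mean of reference band k and scale_ratio = h/l
    (e.g. 0.25 for a 1:4 pan-to-MS pixel-size ratio). Lower is better."""
    fused = np.asarray(fused, dtype=float)
    reference = np.asarray(reference, dtype=float)
    bands = reference.shape[-1]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((fused[..., k] - reference[..., k]) ** 2))
        acc += (rmse / reference[..., k].mean()) ** 2
    return 100.0 * scale_ratio * np.sqrt(acc / bands)
```

A perfect fusion gives ERGAS = 0; normalizing each band's RMSE by its mean is what makes the index comparable across sensors with different radiometric ranges.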
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that utilizes some peculiarities of human color perception and is based on grey-scale structural visualization. Benefits of the presented algorithm are exemplified by satellite imagery.
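As a point of reference, the naive channel-mapping baseline that such perception-oriented methods improve upon might look like this (this is NOT the authors' algorithm, just a common false-color starting point; the channel assignment is an arbitrary illustrative choice):

```python
import numpy as np

def naive_channel_fusion(visible, ir, sar):
    """Naive false-color baseline: normalize each sensor image to [0, 1]
    and map IR -> red, visible -> green, SAR -> blue."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return np.dstack([norm(ir), norm(visible), norm(sar)])
```

Such a direct mapping ignores how unevenly the human eye weights hue and luminance, which is exactly the gap a perception-based fusion algorithm targets.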
Image Fusion for Radiosurgery, Neurosurgery and Hypofractionated Radiotherapy.
Inoue, Hiroshi K; Nakajima, Atsushi; Sato, Hiro; Noda, Shin-Ei; Saitoh, Jun-Ichi; Suzuki, Yoshiyuki
2015-03-01
Precise target detection is essential for radiosurgery, neurosurgery and hypofractionated radiotherapy because treatment results and complication rates are related to accuracy of the target definition. In skull base tumors and tumors around the optic pathways, exact anatomical evaluation of cranial nerves is important to avoid adverse effects on these structures close to lesions. Three-dimensional analyses of structures obtained with heavily T2-weighted MR images and image fusion with thin-slice CT sections are desirable to evaluate fine structures during radiosurgery and microsurgery. In vascular lesions, angiography is most important for evaluations of whole structures from feeder to drainer, shunt, blood flow and risk factors of bleeding. However, exact sites and surrounding structures in the brain are not shown on angiography. True image fusions of angiography, MR images and CT on axial planes are ideal for precise target definition. In malignant tumors, especially recurrent head and neck tumors, biologically active areas of recurrent tumors are the main targets of radiosurgery. PET scan is useful for quantitative evaluation of recurrences. However, the examination is not always available at the time of radiosurgery. Image fusion of MR diffusion images with CT is always available during radiosurgery and useful for the detection of recurrent lesions. All images are fused and registered on thin-slice CT sections and exactly demarcated targets are planned for treatment. Follow-up images can also be registered on this CT. Exact assessment of target changes, including volume, is possible in this fusion system. The purpose of this review is to describe the usefulness of image fusion for 1) skull base, 2) vascular, 3) recurrent target detection, and 4) follow-up analyses in radiosurgery, neurosurgery and hypofractionated radiotherapy.
Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian
2014-03-21
This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
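Step (i), the locally adaptive Wiener filter, can be sketched with windowed statistics in numpy (the window size and the noise-variance heuristic are illustrative assumptions; the paper's exact parameters are not given here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_wiener(img, win=3, noise_var=None):
    """Locally adaptive Wiener filter:
    out = mu + max(var - nv, 0) / var * (x - mu),
    with mu and var computed over a win x win neighbourhood, so that flat
    regions collapse to their local mean while strong edges pass through."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(p, (win, win))
    mu = windows.mean(axis=(-1, -2))
    var = windows.var(axis=(-1, -2))
    if noise_var is None:
        noise_var = var.mean()   # common heuristic when noise is unknown
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)
```

The same local-statistics idea underlies `scipy.signal.wiener`; the denoised AC, DPC and DFC images then feed the wavelet-domain fusion of step (ii).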
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to Digital Imaging and Communications in Medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, image fusion results, and information generated by the gateway itself to constitute new DICOM objects. All the mandatory tags defined in the standard DICOM object model were generated in the gateway system. The gateway system generates two series of SOP (Service-Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances. Every newly generated MR image exactly fits with one of the reconstructed PET images. Those DICOM images are stored to the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When those images are retrieved and viewed by standard DICOM viewing systems, both image sets can be viewed at the same anatomic location. This system is useful for precise diagnosis and therapy.
Predicting individual fusional range from optometric data
NASA Astrophysics Data System (ADS)
Endrikhovski, Serguei; Jin, Elaine; Miller, Michael E.; Ford, Robert W.
2005-03-01
A model was developed to predict the range of disparities that can be fused by an individual user from optometric measurements. This model uses parameters, such as dissociated phoria and fusional reserves, to calculate an individual user's fusional range (i.e., the disparities that can be fused on stereoscopic displays) when the user views a stereoscopic stimulus from various distances. This model is validated by comparing its output with data from a study in which the individual fusional range of a group of users was quantified while they viewed a stereoscopic display from distances of 0.5, 1.0, and 2.0 meters. Overall, the model provides good data predictions for the majority of the subjects and can be generalized for other viewing conditions. The model may, therefore, be used within a customized stereoscopic system, which would render stereoscopic information in a way that accounts for the individual differences in fusional range. Because the comfort of an individual user also depends on the user's ability to fuse stereo images, such a system may, consequently, improve the comfort level and viewing experience for people with different stereoscopic fusional capabilities.
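The linear geometry behind such models can be illustrated: by definition, one prism diopter corresponds to 1 cm of lateral deviation at 1 m, so a vergence limit expressed in prism diopters maps to an on-screen disparity proportional to viewing distance. The 8-prism-diopter limit below is purely illustrative, not a value from the study:

```python
def screen_disparity_cm(limit_prism_diopters, viewing_distance_m):
    """Convert a fusional vergence limit (prism diopters) to the largest
    on-screen disparity (cm) at a given viewing distance, using the
    definition 1 prism diopter = 1 cm of deviation at 1 m."""
    return limit_prism_diopters * viewing_distance_m

# Illustrative limit of 8 prism diopters at the study's three distances.
fusable = {d: screen_disparity_cm(8.0, d) for d in (0.5, 1.0, 2.0)}
```

This is why the same stereoscopic content can be comfortable at one viewing distance and fatiguing at another; the study's model additionally folds in each user's phoria and reserves.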
Schwein, Adeline; Chinnadurai, Ponraj; Behler, Greg; Lumsden, Alan B; Bismuth, Jean; Bechara, Carlos F
2018-07-01
Fenestrated endovascular aneurysm repair (FEVAR) is an evolving technique to treat juxtarenal abdominal aortic aneurysms (AAAs). Catheterization of visceral and renal vessels after the deployment of the fenestrated main body device is often challenging, usually requiring additional fluoroscopy and multiple digital subtraction angiograms. The aim of this study was to assess the clinical utility and accuracy of a computed tomography angiography (CTA)-fluoroscopy image fusion technique in guiding visceral vessel cannulation during FEVAR. Between August 2014 and September 2016, all consecutive patients who underwent FEVAR at our institution using image fusion guidance were included. Preoperative CTA images were fused with intraoperative fluoroscopy after coregistering with non-contrast-enhanced cone beam computed tomography (syngo 3D3D image fusion; Siemens Healthcare, Forchheim, Germany). The ostia of the visceral vessels were electronically marked on CTA images (syngo iGuide Toolbox) and overlaid on live fluoroscopy to guide vessel cannulation after fenestrated device deployment. Clinical utility of image fusion was evaluated by assessing the number of dedicated angiograms required for each visceral or renal vessel cannulation and the use of optimized C-arm angulation. Accuracy of image fusion was evaluated from video recordings by three raters using a binary qualitative assessment scale. A total of 26 patients (17 men; mean age, 73.8 years) underwent FEVAR during the study period for juxtarenal AAA (17), pararenal AAA (6), and thoracoabdominal aortic aneurysm (3). Video recordings of fluoroscopy from 19 cases were available for review and assessment. A total of 46 vessels were cannulated; 38 of the 46 vessels (83%) were cannulated without angiography, based only on image fusion guidance: 9 of 11 superior mesenteric artery cannulations and 29 of 35 renal artery cannulations.
Binary qualitative assessment showed that 90% (36/40) of the virtual ostia overlaid on live fluoroscopy were accurate. Optimized C-arm angulations were achieved in 35% of vessel cannulations (0/9 for superior mesenteric artery cannulation, 12/25 for renal arteries). Preoperative CTA-fluoroscopy image fusion guidance during FEVAR is a valuable and accurate tool that allows visceral and renal vessel cannulation without the need for dedicated angiograms, thus avoiding additional contrast material injection and radiation exposure. Further refinements, such as accounting for device-induced aortic deformation and automating the image fusion workflow, will bolster this technology toward optimal routine clinical use. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Daily monitoring of 30 m crop condition over complex agricultural landscapes
NASA Astrophysics Data System (ADS)
Sun, L.; Gao, F.; Xie, D.; Anderson, M. C.; Yang, Y.
2017-12-01
Crop progress provides the information needed to schedule irrigation, fertilization and harvesting at the optimal times for achieving higher yields. In the United States, crop progress reports are released online weekly by the US Department of Agriculture (USDA) National Agricultural Statistics Service (NASS). However, the ground data collection is time consuming and subjective, and these reports are provided only at the district (multiple counties) or state level. Remote sensing technologies have been widely used to map crop conditions, extract crop phenology, and predict crop yield. However, it is difficult for current satellite-based sensors to provide both high spatial resolution and frequent coverage. For example, Landsat satellites capture 30 m resolution images, but their long revisit cycle and cloud contamination limit their use in detecting rapid surface changes. MODIS, on the other hand, provides daily observations, but at coarse spatial resolutions ranging from 250 to 1000 m. In recent years, multi-satellite data fusion approaches such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) have been used to combine the spatial resolution of Landsat with the temporal frequency of MODIS. Such synthetic datasets can provide more valuable information than images acquired from a single sensor. However, the accuracy of STARFM depends on the heterogeneity of the landscape and the availability of clear MODIS-Landsat image pairs. In this study, a new fusion method was developed that uses crop vegetation index (VI) time series extracted from "pure" MODIS pixels together with Landsat overpass images to generate daily 30 m VI for crops. The fusion accuracy was validated by comparison with the original Landsat images. Results show relative errors of around 3-5% outside the rapid growing period and around 6-8% during it.
This is considerably better than STARFM, whose corresponding errors are 4-9% outside the rapid growing period and 10-16% during it, based on 13 image pairs. The VI predicted by this approach is also consistent and smooth across the SLC-off gap stripes of the Landsat 7 ETM+ images. The new fusion results will be used to map crop phenology and predict crop yield at field scale in complex agricultural landscapes.
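The core of the approach can be sketched for a single 30 m pixel, assuming a linear gain/offset relation between the pure-pixel MODIS VI series and the Landsat VI at overpass dates (an illustrative simplification, not the paper's exact algorithm; function and variable names are assumptions):

```python
import numpy as np

def predict_daily_vi(modis_vi_daily, landsat_days, landsat_vi):
    """Scale a daily 'pure-pixel' MODIS VI time series to one 30 m
    Landsat pixel: fit a gain/offset on the Landsat overpass dates,
    then apply it to every day of the MODIS series."""
    m = modis_vi_daily[landsat_days]            # MODIS VI on overpass dates
    gain, offset = np.polyfit(m, landsat_vi, 1) # least-squares linear fit
    return gain * modis_vi_daily + offset       # daily 30 m VI prediction
```

The appeal of the shape-matching idea is visible here: the temporal shape comes entirely from MODIS, while the handful of cloud-free Landsat observations anchor the magnitude at 30 m.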
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imagery sensors are an indispensable part of detection and recognition systems, and are widely used in surveillance, navigation, control and guidance. However, different imagery sensors rely on different imaging mechanisms, operate in different spectral ranges, perform different functions and have different environmental requirements, so it is impractical to accomplish detection or recognition with a single imagery sensor across varying circumstances, backgrounds and targets. The multi-sensor image fusion technique emerged as an important route to solving this problem, and image fusion has become one of the main technical routes for detecting and recognizing objects in images. Loss of information is, however, unavoidable during the fusion process, so a central concern of image fusion is how to preserve useful information to the utmost; that is, before designing a fusion scheme one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aimed at recognizing battlefield targets in complicated backgrounds. In this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models, which imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and target features are accentuated.
The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis; the wavelet decomposition of an image can in fact be regarded as a special pyramid decomposition. According to wavelet decomposition theory, the approximation of an image at resolution 2j+1 equals its orthogonal projection onto the corresponding subspace (formula available in paper); that is, Aj+1f is composed of Ajf together with the three detail images D1jf, D2jf and D3jf, where Ajf is the low-frequency approximation of the image f(x, y) at resolution 2j and D1jf, D2jf and D3jf represent the vertical, horizontal and diagonal wavelet coefficients respectively at resolution 2j. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions respectively. Ajf, D1jf, D2jf and D3jf are independent and can each be treated as an image. In this paper J is set to 1, so the source image is decomposed into the sub-images Af, D1f, D2f and D3f. To support the detection of man-made objects, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2 and diagonal fractal dimension FD3 are proposed in this paper: FD1 is computed from the vertical wavelet-coefficient image obtained from the wavelet decomposition of the source image, FD2 from the horizontal coefficients and FD3 from the diagonal ones. These definitions enrich the description of the source images and are therefore helpful for classifying targets. The detection of man-made objects in the decomposed images then becomes a pattern-recognition problem in 4-D space: FD0, FD1, FD2 and FD3 form a united feature vector (FD0, FD1, FD2, FD3) of the studied image region, and all parts of the images are classified in the 4-D pattern space created by this vector so that the areas containing man-made objects can be detected.
This detection can be considered a coarse recognition; the significant areas in each sub-image are then marked so that they can be handled by special rules. Various fusion rules have been developed, each aimed at a particular problem, and their performance differs, so selecting an appropriate rule is very important when designing an image fusion system. Recent research indicates that the rule should be adjustable so that it remains suitable for accentuating target features and preserving the pixels carrying useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image with the minimal fractal dimension are kept as the pixels of the fused image; otherwise, a weighted-average operator is adopted to avoid loss of information. Because the main idea of this rule is to keep the pixels with low fractal dimensions, it is named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment of the two fusion results is performed using the criteria of entropy, cross-entropy, peak signal-to-noise ratio (PSNR) and standard gray-scale difference, as defined in this paper. Instead of constructing an ideal image as the assessment reference, the source images themselves are used as the reference; the assessment thus measures how much the image quality has been enhanced, and the quantity of information increased, in the fused image relative to the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast.
The algorithm preserves the features of military targets well because battlefield targets are mostly man-made objects whose images deviate markedly from fractal models. Furthermore, the fractal features are not sensitive to imaging conditions or to target motion, so this fractal-based algorithm may prove very practical.
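The MFD rule lends itself to a compact sketch. The following is a simplified stand-in for the authors' method, not their implementation: it estimates a per-block fractal dimension by differential box counting (the block size, box sizes and FD threshold are illustrative assumptions) and keeps the lower-FD source's pixels in blocks that look man-made, averaging elsewhere.

```python
import numpy as np

def box_count_fd(block, sizes=(2, 4, 8)):
    """Differential box-counting estimate of the fractal dimension of a
    square grayscale block (values in [0, 255]). A flat block yields
    exactly 2.0; rough natural texture approaches 3.0."""
    M = block.shape[0]
    counts = []
    for s in sizes:
        h = max(1.0, s * 256.0 / M)              # box height in gray levels
        n = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                cell = block[i:i + s, j:j + s]
                n += int(np.ceil((cell.max() - cell.min() + 1) / h))
        counts.append(n)
    r = np.array(sizes, dtype=float) / M
    slope, _ = np.polyfit(np.log(1.0 / r), np.log(counts), 1)
    return slope

def mfd_fuse(a, b, block=16, fd_thresh=2.5):
    """Minimal-Fractal-Dimension fusion rule (sketch): where a block looks
    man-made (low FD in either source), copy the lower-FD source's pixels;
    elsewhere fall back to a plain average."""
    out = np.empty(a.shape, dtype=float)
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            pa, pb = a[i:i + block, j:j + block], b[i:i + block, j:j + block]
            fa, fb = box_count_fd(pa), box_count_fd(pb)
            if min(fa, fb) < fd_thresh:
                out[i:i + block, j:j + block] = pa if fa <= fb else pb
            else:
                out[i:i + block, j:j + block] = 0.5 * (pa + pb)
    return out
```

In the paper the region decision comes from classification in the 4-D (FD0, FD1, FD2, FD3) space over the wavelet sub-images; the single-FD threshold above compresses that step for brevity.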
Bagrosky, Brian M; Hayes, Kari L; Koo, Phillip J; Fenton, Laura Z
2013-08-01
Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact on standard imaging (CT and MRI) and a difficult physical examination. Studies using (18)F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. We discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection who underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information, including laboratory values of inflammatory markers, postoperative notes and pathology results, to evaluate the diagnostic accuracy of FDG PET/CT. Exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in the evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed.
FDG PET/CT should be the first-line cross-sectional imaging study in patients with suspected spinal hardware infection. Because pneumonia was diagnosed as often as spinal hardware infection, initial chest radiography should also be performed.
Live imaging of mouse secondary palate fusion
Kim, Seungil; Prochazka, Jan; Bush, Jeffrey O.
2017-01-01
The fusion of the secondary palatal shelves to form the intact secondary palate is a key process in mammalian development, and its disruption can lead to cleft secondary palate, a common congenital anomaly in humans. Secondary palate fusion has been extensively studied, leading to several proposed cellular mechanisms that may mediate this process. However, these studies have mostly been performed on fixed embryonic tissues at progressive timepoints during development or on fixed explant cultures analyzed at static timepoints. Static analysis is limited for studying dynamic morphogenetic processes such as palate fusion, and the dynamic cellular behaviors that mediate palatal fusion remain incompletely understood. Here we describe a protocol for live imaging of ex vivo secondary palate fusion in mouse embryos. To examine cellular behaviors of palate fusion, epithelial-specific Keratin14-cre was used to label palate epithelial cells in ROSA26-mTmGflox reporter embryos. To visualize filamentous actin, Lifeact-mRFPruby reporter mice were used. Live imaging of secondary palate fusion was performed by dissecting recently adhered secondary palatal shelves of embryonic day (E) 14.5 embryos and culturing them in agarose-containing media on a glass-bottom dish to enable imaging with an inverted confocal microscope. Using this method, we have detected a variety of novel cellular behaviors during secondary palate fusion. An appreciation of how distinct cell behaviors are coordinated in space and time greatly contributes to our understanding of this dynamic morphogenetic process. This protocol can be applied to mutant mouse lines or to cultures treated with pharmacological inhibitors to further advance understanding of how secondary palate fusion is controlled. PMID:28784960
Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk
2017-12-21
The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. 
Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.
Pee, Yong Hun; Park, Jong Dae; Choi, Young-Geun; Lee, Sang-Ho
2008-05-01
An anterior approach for debridement and fusion with autologous bone graft has been recommended as the gold standard for surgical treatment of pyogenic spondylodiscitis. The use of anterior foreign body implants at the site of active infection is still a challenging procedure for spine surgeons. Several authors have recently introduced anterior grafting with titanium mesh cages instead of autologous bone strut in the treatment of spondylodiscitis. The authors present their experience of anterior fusion with 3 types of cages followed by posterior pedicle screw fixation. They also compare their results with the use of autologous iliac bone strut. The authors retrospectively reviewed the cases of 60 patients with pyogenic spondylodiscitis treated by anterior debridement between January 2003 and April 2005. Fusion using either cages or iliac bone struts was performed during the same course of anesthesia followed by posterior fixation. Twenty-three patients underwent fusion with autologous iliac bone strut, and 37 patients underwent fusion with 1 of the 3 types of cages. The infections resolved in all patients, as noted by normalization of their erythrocyte sedimentation rates and C-reactive protein levels. Patients in both groups were evaluated in terms of their preoperative and postoperative clinical and imaging findings. Single-stage anterior debridement and cage fusion followed by posterior pedicle screw fixation can be effective in the treatment of pyogenic spondylodiscitis. There was no difference in clinical and imaging outcomes between the strut group and cage group except for the subsidence rate. The subsidence rate was higher in the strut group than in the cage group. The duration until subsidence was also shorter in the strut group than in the cage group.
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang
2016-01-01
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by morphological operations to reduce a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized among 200 overall samples. This indicates that the proposed method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
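The first three steps can be sketched compactly, with stated substitutions: the true a* channel requires the nonlinear L*a*b* conversion, so a crude red-green opponent channel stands in for it here; the YIQ I-channel matrix and Otsu's adaptive threshold are standard; the one-level Haar fusion rule (average approximations, keep the larger-magnitude detail) is one common pixel-level choice, not necessarily the paper's.

```python
import numpy as np

def i_component(rgb):
    """In-phase (I) channel of the NTSC YIQ color space."""
    return 0.596 * rgb[..., 0] - 0.274 * rgb[..., 1] - 0.322 * rgb[..., 2]

def red_green_opponent(rgb):
    """Crude red-green opponent channel, standing in for a* (the real
    L*a*b* conversion is nonlinear)."""
    return rgb[..., 0] - rgb[..., 1]

def haar_fuse(x, y):
    """One-level Haar wavelet fusion: average the approximation bands,
    keep the larger-magnitude detail coefficients, then invert."""
    def fwd(z):
        a, b, c, d = z[0::2, 0::2], z[0::2, 1::2], z[1::2, 0::2], z[1::2, 1::2]
        return (a+b+c+d)/4, (a+b-c-d)/4, (a-b+c-d)/4, (a-b-c+d)/4
    def pick(u, v):
        return np.where(np.abs(u) >= np.abs(v), u, v)
    (llx, lhx, hlx, hhx), (lly, lhy, hly, hhy) = fwd(x), fwd(y)
    ll, lh, hl, hh = (llx+lly)/2, pick(lhx, lhy), pick(hlx, hly), pick(hhx, hhy)
    out = np.empty(x.shape, dtype=float)
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def otsu_threshold(img, bins=256):
    """Otsu's adaptive threshold: maximize between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0, mu = np.cumsum(p), np.cumsum(p * centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]
```

Both feature channels respond strongly to red fruit and weakly to green foliage, which is why their fusion followed by a global adaptive threshold separates the tomato cleanly.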
Estimation of forest biomass using remote sensing
NASA Astrophysics Data System (ADS)
Sarker, Md. Latifur Rahman
Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (C-band RADARSAT-2 and L-band PALSAR) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r2) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939).
These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or their ratios. A further contribution arises from the fusion of SAR and optical images, which produced accuracies (r2) of 0.706 for simple fusion and 0.77 for texture processing of the fused image. Although these performances were not as attractive as those obtained from the other processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation
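The plot-level workflow (texture measure per band, regression against field biomass, r2) can be sketched as follows; sliding-window variance stands in for the GLCM texture parameters actually used in the study, and all names are illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def texture_variance(band, win=5):
    """Simple texture measure: variance over a sliding window, a crude
    stand-in for GLCM-derived texture parameters."""
    return sliding_window_view(band, (win, win)).var(axis=(-1, -2))

def r_squared(x, y):
    """Coefficient of determination of a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

A texture-ratio predictor, the step that gave the study its best r2 values, would simply divide the texture images of two bands (or two polarizations) before averaging over each field plot and regressing.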
Neutron imaging with bubble chambers for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Ghilea, Marian C.
One of the main methods to obtain energy from controlled thermonuclear fusion is inertial confinement fusion (ICF), a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet that contains deuterium and tritium, relying on the inertia of the fuel mass to provide confinement. In inertial confinement fusion experiments, it is important to distinguish failure mechanisms of the imploding capsule and unambiguously diagnose compression and hot spot formation in the fuel. Neutron imaging provides such a technique, and bubble chambers are capable of generating higher resolution images than other types of neutron detectors. This thesis explores the use of a liquid bubble chamber to record high yield 14.1 MeV neutrons resulting from deuterium-tritium fusion reactions in ICF experiments. A design tool to deconvolve and reconstruct penumbral and pinhole neutron images was created, using an original ray tracing concept to simulate the neutron images. The design tool proved that misalignment and aperture fabrication errors can significantly decrease the resolution of the reconstructed neutron image. A theoretical model to describe the mechanism of bubble formation was developed. A bubble chamber for neutron imaging with Freon 115 as active medium was designed and implemented for the OMEGA laser system. High neutron yields resulting from deuterium-tritium capsule implosions were recorded. The bubble density was too low for neutron imaging on OMEGA but agreed with the model of bubble formation. The research presented here shows that bubble detectors are a promising technology for the higher neutron yields expected at the National Ignition Facility (NIF).
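The penumbral-image reconstruction step can be illustrated with a generic Wiener deconvolution, a common approach for coded-aperture images; this is not the thesis's ray-tracing design tool, and the disk point-spread function and noise constant below are assumptions for illustration only.

```python
import numpy as np

def disk_aperture(n, radius):
    """Normalized disk kernel modeling a penumbral aperture's
    point-spread function on an n x n grid."""
    y, x = np.mgrid[:n, :n] - n // 2
    k = ((x ** 2 + y ** 2) <= radius ** 2).astype(float)
    return k / k.sum()

def wiener_reconstruct(penumbral, kernel, noise_power=1e-3):
    """Wiener-filter deconvolution of a penumbral image:
    F = H* G / (|H|^2 + k) applied in the Fourier domain, where the
    small constant k regularizes the zeros of the aperture transform."""
    H = np.fft.fft2(np.fft.ifftshift(kernel))
    G = np.fft.fft2(penumbral)
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F))
```

The thesis's observation about misalignment and aperture fabrication errors maps directly onto this picture: if the assumed kernel differs from the true aperture response, the quotient above amplifies the mismatch and resolution degrades.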
Thaden, Jeremy J; Sanon, Saurabh; Geske, Jeffrey B; Eleid, Mackram F; Nijhof, Niels; Malouf, Joseph F; Rihal, Charanjit S; Bruce, Charles J
2016-06-01
There has been significant growth in the volume and complexity of percutaneous structural heart procedures in the past decade. Increasing procedural complexity and accompanying reliance on multimodality imaging have fueled the development of fusion imaging to facilitate procedural guidance. The first clinically available system capable of echocardiographic and fluoroscopic fusion for real-time guidance of structural heart procedures was approved by the US Food and Drug Administration in 2012. Echocardiographic-fluoroscopic fusion imaging combines the precise catheter and device visualization of fluoroscopy with the soft tissue anatomy and color flow Doppler information afforded by echocardiography in a single image. This allows the interventionalist to perform precise catheter manipulations under fluoroscopy guidance while visualizing critical tissue anatomy provided by echocardiography. However, there are few data available addressing this technology's strengths and limitations in routine clinical practice. The authors provide a critical review of currently available echocardiographic-fluoroscopic fusion imaging for guidance of structural heart interventions to highlight its strengths, limitations, and potential clinical applications and to guide further research into value of this emerging technology. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and newer technologies such as optical imaging for obtaining functional images. The fusion process requires precisely extracted structural information to which the functional image can be registered. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), from 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images by deep learning. This approach greatly reduces processing time compared with manual and semi-automatic segmentation, and improves in speed and accuracy as more samples are learned. The contours of the borders of the different tissues in all images were accurately extracted and visualized in 3D. The results can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, offering doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique will bring convenience to medical visualization and personalized medicine.
Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images
NASA Astrophysics Data System (ADS)
Zhu, Chaozhe; Jiang, Tianzi
2003-05-01
In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, which provide information on tissue properties from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge-guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. The framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. Information provided by the different spectral images is extracted and modeled separately for each spectrum using fuzzy sets, to manage the uncertainty and ambiguity present in the images due to noise and the partial volume effect. In the second part, a possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of expert knowledge, and the final fuzzy map of MS lesions is then constructed by fusing the fuzzy maps obtained from the different spectra. Finally, the 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
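A toy version of the extract-then-fuse scheme, assuming a smoothstep membership function and the conjunctive minimum operator (one common choice in possibility-based fusion; the paper's actual membership models and fusion operator may differ):

```python
import numpy as np

def s_membership(x, a, b):
    """Smooth S-shaped membership function: 0 below a, 1 above b,
    smoothstep in between. Models 'possibly lesion' per modality."""
    t = np.clip((x - a) / float(b - a), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def fuse_lesion_maps(maps):
    """Conjunctive (minimum) fusion: a voxel keeps a high lesion
    possibility only if every spectral image supports it."""
    return np.minimum.reduce(list(maps))
```

The minimum operator is what suppresses single-modality false positives: CSF is bright on T2 but not on PD-weighted images, so its fused possibility collapses, while a true lesion, bright in both, survives.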
Live cell imaging of in vitro human trophoblast syncytialization.
Wang, Rui; Dang, Yan-Li; Zheng, Ru; Li, Yue; Li, Weiwei; Lu, Xiaoyin; Wang, Li-Juan; Zhu, Cheng; Lin, Hai-Yan; Wang, Hongmei
2014-06-01
Human trophoblast syncytialization, a process of cell-cell fusion, is one of the most important yet least understood events during placental development. Investigating the fusion process in a placenta in vivo is very challenging given the complexity of this process. The combined use of primary cultured cytotrophoblast cells isolated from term placentas and BeWo cells derived from human choriocarcinoma provides a biphasic strategy for dissecting the mechanism of trophoblast cell fusion, as the former spontaneously fuse to form the multinucleated syncytium and the latter are capable of fusing upon treatment with forskolin (FSK). Live-cell imaging is a powerful tool that is widely used to investigate many physiological and pathological processes in various animal models and in humans; however, to our knowledge, trophoblast cell fusion has not previously been examined by live-cell imaging. In this study, a live-cell imaging system was used to delineate the fusion process of primary term cytotrophoblast cells and BeWo cells. By using live staining with Hoechst 33342 or cytoplasmic dyes, or by stably transfecting enhanced green fluorescent protein (EGFP) and DsRed2-Nuc reporter plasmids, we observed finger-like protrusions on the cell membranes of fusion partners before fusion and the exchange of cytoplasmic contents during fusion. In summary, this study provides the first video recording of the process of trophoblast syncytialization. Furthermore, the various live-cell imaging systems used in this study will help to yield molecular insights into the syncytialization process during placental development. © 2014 by the Society for the Study of Reproduction, Inc.
SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.
Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou
2015-11-01
In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former is to preserve accurate spectral information of the Ms image, while the latter is to keep sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusion process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity in the size of the output image in each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
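The structure of this kind of fused objective can be written down in a toy form. The sketch below keeps a least-squares fitting term but replaces the paper's dynamic gradient sparsity regularizer with a simple quadratic gradient-matching term, and omits the registration step entirely; both substitutions are assumptions made purely to illustrate the shape of the optimization.

```python
import numpy as np

def laplacian(u):
    # periodic 5-point Laplacian (a simplification for this sketch)
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def fuse(pan, ms_up, lam=1.0, steps=500, lr=0.1):
    """Gradient descent on a simplified SIRF-style objective:
    0.5*||F - ms_up||^2 + 0.5*lam*||grad F - grad pan||^2.
    The gradient of the second term is -lam * laplacian(F - pan)."""
    f = ms_up.astype(float).copy()
    for _ in range(steps):
        g = (f - ms_up) - lam * laplacian(f - pan)
        f -= lr * g
    return f

ms_up = np.full((16, 16), 0.5)                 # smooth spectral content
pan = np.zeros((16, 16)); pan[:, 8:] = 1.0     # sharp panchromatic edge
fused = fuse(pan, ms_up)                       # keeps MS mean, gains PAN edge
```

The data term anchors the fused image to the multispectral content while the gradient term imports the panchromatic edges, mirroring the division of labor the abstract describes.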
NASA Astrophysics Data System (ADS)
Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng
2017-10-01
A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. Firstly, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is used to compute the filtering output of the base layer and the saliency maps. Next, a novel weighted average technique is used to make full use of scene consistency for fusion and to obtain the coefficient map. Finally, the fused image is acquired by taking the inverse NSST of the fused coefficient map. Experiments show that the proposed approach achieves better performance than other methods in terms of subjective visual effect and objective assessment.
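The two-scale base/detail pipeline can be illustrated with a deliberately crude stand-in: a box filter plays the role of both the NSST base-layer decomposition and the guided filter, and smoothed absolute detail plays the role of phase congruency. All of these substitutions are assumptions for illustration only.

```python
import numpy as np

def box(img, r=2):
    """Box filter — a crude stand-in for the NSST base-layer
    decomposition and for the guided filtering used in the paper."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1].mean()
    return out

def two_scale_fuse(a, b):
    ba, bb = box(a), box(b)                    # base layers
    da, db = a - ba, b - bb                    # detail layers
    sa, sb = box(np.abs(da)), box(np.abs(db))  # smoothed saliency maps
    w = sa / (sa + sb + 1e-12)                 # per-pixel weight map
    base = 0.5 * (ba + bb)                     # average the base layers
    detail = w * da + (1 - w) * db             # saliency-weighted details
    return base + detail

a = np.zeros((10, 10)); a[4:7, 4:7] = 1.0   # "infrared" image with a target
b = np.zeros((10, 10))                      # featureless "visible" image
fused = two_scale_fuse(a, b)
```

Even this toy version shows the key behavior: detail is taken from whichever source is locally more salient, while the base layers are blended.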
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique’s robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
Improving cerebellar segmentation with statistical fusion
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.
2016-03-01
The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved with motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.
Fusion of PET and MRI for Hybrid Imaging
NASA Astrophysics Data System (ADS)
Cho, Zang-Hee; Son, Young-Don; Kim, Young-Bo; Yoo, Seung-Schik
Recently, the development of the fusion PET-MRI system has been actively studied to meet the increasing demand for integrated molecular and anatomical imaging. MRI can provide detailed anatomical information on the brain, such as the locations of gray and white matter, blood vessels, and axonal tracts, with high resolution, while PET can measure molecular and genetic information, such as glucose metabolism, neurotransmitter-neuroreceptor binding and affinity, protein-protein interactions, and gene trafficking among biological tissues. State-of-the-art MRI systems, such as the 7.0 T whole-body MRI, can now visualize super-fine structures including neuronal bundles in the pons, fine blood vessels (such as lenticulostriate arteries) without invasive contrast agents, in vivo hippocampal substructures, and the substantia nigra with excellent image contrast. High-resolution PET, known as the High-Resolution Research Tomograph (HRRT), is a brain-dedicated system capable of imaging minute changes of chemicals, such as neurotransmitters and receptors, with high spatial resolution and sensitivity. The synergistic power of the two, i.e., the ultra-high-resolution anatomical information offered by a 7.0 T MRI system combined with the high-sensitivity molecular information offered by HRRT-PET, will significantly elevate the level of our current understanding of the human brain, one of the most delicate, complex, and mysterious biological organs. This chapter introduces the MRI, PET, and PET-MRI fusion systems, and their algorithms are discussed in detail.
Touma, Joseph; Kobeiter, Hicham; Majewski, Marek; Tacher, Vania; Desgranges, Pascal
2018-03-01
The present report describes the management of a massive proximal type 1 endoleak with an enlarged symptomatic aneurysmal sac. Urgent treatment was performed using in situ laser fenestration of an aortic proximal extension facing the renovisceral ostia. Image fusion was obtained intraoperatively. For each target vessel (superior mesenteric and two renal arteries), an Aptus HeliFX steerable sheath (Medtronic) inserted through femoral access was curved to face the vessel's ostium marker. A laser catheter (Spectranetics) was used to traverse the stent-graft and insert a 0.014" guidewire in the vessel. The fenestration was enlarged using a 2.5-mm-diameter cutting balloon, followed by a 4-mm-diameter balloon angioplasty and a V12 I Cast/Advanta covered stent implantation. The final angiogram demonstrated aneurysm exclusion and patent target vessels. The postoperative course and 7-month follow-up were uneventful. Level of evidence: Level 4.
Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L
2017-07-01
Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data and even anatomical atlases to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during the treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and enable adaptive radiotherapy using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy to provide up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input of another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and the result of a specific registration. Unfortunately, there is no standard mathematical formalism to perform this for real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of the software system's performance is also complicated by the lack of documentation available from commercial systems, leading to use of these systems in an undesirable 'black-box' fashion.
In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.
Rampersaud, Y. Raja; Gray, Randolph; Lewis, Steven J.; Massicotte, Eric M.; Fehlings, Michael G.
2011-01-01
Background: The utility and cost of minimally invasive surgical (MIS) fusion remain controversial. The primary objective of this study was to compare the direct economic impact of 1- and 2-level fusion for grade I or II degenerative or isthmic spondylolisthesis via an MIS technique compared with conventional open posterior decompression and fusion. Methods: A retrospective cohort study was performed by use of prospective data from 78 consecutive patients (37 with MIS technique by 1 surgeon and 41 with open technique by 3 surgeons). Independent review of demographic, intraoperative, and acute postoperative data was performed. Oswestry disability index (ODI) and Short Form 36 (SF-36) values were prospectively collected preoperatively and at 1 year postoperatively. Cost-utility analysis was performed by use of in-hospital micro-costing data (operating room, nursing, imaging, laboratories, pharmacy, and allied health cost) and change in health utility index (SF-6D) at 1 year. Results: The groups were comparable in terms of age, sex, preoperative hemoglobin, comorbidities, and body mass index. Groups significantly differed (P < .01) regarding baseline ODI and SF-6D scores, as well as number of 2-level fusions (MIS, 12; open, 20) and number of interbody cages (MIS, 45; open, 14). Blood loss (200 mL vs 798 mL), transfusions (0% vs 17%), and length of stay (LOS) (6.1 days vs 8.4 days) were significantly (P < .01) lower in the MIS group. Complications were also fewer in the MIS group (4 vs 12, P < .02). The mean cost of an open fusion was 1.28 times greater than that of an MIS fusion (P = .001). Both groups had significant improvement in 1-year outcome. The changes in ODI and SF-6D scores were not statistically different between groups. Multivariate regression analysis showed that LOS and number of levels fused were independent predictors of cost. Age and MIS were the only predictors of LOS. Baseline outcomes and MIS were predictors of 1-year outcome.
Conclusion: MIS posterior fusion for spondylolisthesis does reduce blood loss, transfusion requirements, and LOS. Both techniques provided substantial clinical improvements at 1 year. The cost utility of the MIS technique was considered comparable to that of the open technique. Level of Evidence: Level III. PMID:25802665
Research on segmentation based on multi-atlas in brain MR image
NASA Astrophysics Data System (ADS)
Qian, Yuejing
2018-03-01
Accurate segmentation of specific tissues in brain MR images can be effectively achieved with the multi-atlas-based segmentation method, and the accuracy mainly depends on the image registration accuracy and the fusion scheme. This paper proposes an automatic segmentation method based on the multi-atlas approach for brain MR images. Firstly, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion, we propose a new algorithm that detects abnormal sparse patches and simultaneously discards the corresponding abnormal sparse coefficients; label estimation is then based on the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM) and the majority voting method (MV). Based on our experimental results, the proposed method is efficient in brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
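For reference, the majority voting (MV) baseline that the paper compares against is straightforward to implement; the sketch below assumes integer label maps already propagated from each atlas into the target space.

```python
import numpy as np

def majority_vote(labelmaps):
    """Fuse per-atlas propagated label maps by per-voxel majority vote
    (the MV baseline; more elaborate schemes weight each atlas)."""
    stack = np.stack(labelmaps)            # (n_atlases, *vol_shape)
    n_labels = stack.max() + 1
    counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)           # most frequent label per voxel

atlas_labels = [np.array([[0, 1], [1, 2]]),
                np.array([[0, 1], [2, 2]]),
                np.array([[0, 0], [1, 2]])]
fused = majority_vote(atlas_labels)   # → [[0, 1], [1, 2]]
```

Patch-based and sparse-coding schemes such as those in the paper replace the flat vote with similarity- or reconstruction-weighted contributions, but the fusion skeleton is the same.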
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the range of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
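The shape of such a regularized cost can be sketched as follows, assuming the source images are already warped into alignment; the Gaussian-residual data term and the `beta` weight are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def gradient_strength(img):
    """Mean gradient magnitude — one simple reading of 'GS'."""
    g0, g1 = np.gradient(img)          # derivatives along each axis
    return np.mean(np.hypot(g0, g1))

def cost(fused, sources, target_gs, beta=1.0):
    """Illustrative ML-style cost: squared residuals between the fused
    image and each (already registered) source — a Gaussian-noise
    likelihood — plus a regularizer pulling GS(fused) toward target_gs."""
    data = sum(np.mean((fused - s) ** 2) for s in sources)
    return data + beta * (gradient_strength(fused) - target_gs) ** 2

sources = [np.zeros((8, 8)), np.ones((8, 8))]
c = cost(np.full((8, 8), 0.5), sources, target_gs=0.0)
```

In the joint scheme the registration parameters enter through the warps applied to `sources`, so minimizing one cost updates both the alignment and the fused image, and raising `target_gs` sharpens the result as the abstract describes.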
Layer-Based Approach for Image Pair Fusion.
Son, Chang-Hwan; Zhang, Xiao-Ping
2016-04-20
Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
Color image fusion for concealed weapon detection
NASA Astrophysics Data System (ADS)
Toet, Alexander
2003-09-01
Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g. "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
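A simplified stand-in for this kind of color-preserving detail transfer is shown below: zero-mean detail from the non-literal sensor is added to the luminance channel, and all three color channels are rescaled by the same factor so hue and saturation survive. The BT.601 luma weights and the gain `k` are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fuse_color(rgb, sensor, k=0.5):
    """Inject zero-mean sensor detail into luminance only; scaling all
    channels by the same ratio leaves hue/saturation untouched."""
    y = rgb @ np.array([0.299, 0.587, 0.114])   # BT.601 luminance
    detail = sensor - sensor.mean()             # zero-mean sensor detail
    y_new = np.clip(y + k * detail, 1e-6, 1.0)
    ratio = y_new / np.maximum(y, 1e-6)         # same factor per channel
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

rgb = np.full((4, 4, 3), [0.3, 0.2, 0.1])       # dim colored scene
sensor = np.zeros((4, 4)); sensor[1, 1] = 1.0   # concealed-weapon signature
fused = fuse_color(rgb, sensor)
```

The weapon location becomes brighter in the fused result while the channel ratios, and hence the apparent colors used to describe people in the scene, are preserved.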
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment is done by different criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective image fusion quality evaluation. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
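As an example of the quantitative side, one widely used global index for fused satellite imagery is ERGAS. The small implementation below (with an assumed `scale` convention: the high/low resolution pixel-size ratio) shows how such indices reduce a fused/reference image pair to a single comparable number.

```python
import numpy as np

def ergas(fused, reference, scale=0.25):
    """ERGAS global quality index (lower is better). Arrays are
    (bands, height, width); `scale` is d_high/d_low, e.g. 0.25 for
    4x pan-sharpening."""
    f = fused.reshape(fused.shape[0], -1)
    r = reference.reshape(reference.shape[0], -1)
    rmse = np.sqrt(np.mean((f - r) ** 2, axis=1))   # per-band RMSE
    mu = np.mean(r, axis=1)                         # per-band reference mean
    return 100.0 * scale * np.sqrt(np.mean((rmse / mu) ** 2))

ref = np.ones((3, 8, 8))
score = ergas(ref * 1.1, ref)   # a uniform 10% radiometric error
```

Standardizing which such indices are reported, and with which conventions, is exactly the kind of protocol question the study addresses.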
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
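The permutation-diffusion stage of such a scheme can be illustrated on already-quantized measurements. The sketch below uses a keyed permutation plus XOR diffusion and omits the DCT/SRM sampling stage entirely; the XOR operation and NumPy's `default_rng` as the key stream are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def encrypt(measurements, key=42):
    """Keyed permutation then XOR diffusion of 8-bit measurements."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(measurements.size)
    stream = rng.integers(0, 256, measurements.size, dtype=np.uint8)
    return measurements[perm] ^ stream

def decrypt(cipher, key=42):
    """Replay the same key stream, undo diffusion, then un-permute."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(cipher.size)
    stream = rng.integers(0, 256, cipher.size, dtype=np.uint8)
    out = np.empty_like(cipher)
    out[perm] = cipher ^ stream
    return out

data = np.arange(16, dtype=np.uint8)   # stand-in for quantized measurements
cipher = encrypt(data)
restored = decrypt(cipher)
```

Because both stages are driven by the same key, the receiver can invert them exactly before running the compressive-sensing reconstruction.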
Image-fusion of MR spectroscopic images for treatment planning of gliomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Thakur, Sunitha; Perera, Gerard
2006-01-15
¹H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8x8x10 mm³ to 12x12x10 mm³. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the poorer-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that of the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs.
The image-fusion protocol was tested on 12 (six high-grade and six low-grade) glioma patients. The average agreement of the MRSI volume position on the screen-dumped MRSI images and the merged MRSI images was 0.29 mm with a standard deviation of 0.07 mm. Of all the voxels with Cho/Cr grade one or above, the distribution of Cho/Cr grade was found to correlate with the glioma grade from pathologic finding and is consistent with literature results indicating Cho/Cr elevation as a marker for malignancy. In conclusion, an image-fusion protocol was developed that successfully incorporates MRSI information into the IMRT treatment plan for glioma.
Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil
2017-02-01
Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of such symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here is a technique of combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied on both source modalities separately to extract the complementary and edge-related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation on this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating fusion parameters like entropy, fusion factor, image quality index, edge quality measure, mean structural similarity index measure, etc. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms.
The proposed algorithm can be a part of computer-aided detection and diagnosis (CADD) system which assists the radiologists in clinical practices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Live-cell imaging of conidial anastomosis tube fusion during colony initiation in Fusarium oxysporum
Kurian, Smija M.; Di Pietro, Antonio
2018-01-01
Fusarium oxysporum exhibits conidial anastomosis tube (CAT) fusion during colony initiation to form networks of conidial germlings. Here we determined the optimal culture conditions for this fungus to undergo CAT fusion between microconidia in liquid medium. Extensive high resolution, confocal live-cell imaging was performed to characterise the different stages of CAT fusion, using genetically encoded fluorescent labelling and vital fluorescent organelle stains. CAT homing and fusion were found to be dependent on adhesion to the surface, in contrast to germ tube development which occurs in the absence of adhesion. Staining with fluorescently labelled concanavalin A indicated that the cell wall composition of CATs differs from that of microconidia and germ tubes. The movement of nuclei, mitochondria, vacuoles and lipid droplets through fused germlings was observed by live-cell imaging. PMID:29734342
Information Fusion and Visualisation in Anti Asymmetric Warfare
2006-12-01
thermal detectors (e.g. bolometers). They used the MWIR and LWIR part of the IR spectrum. Quantum detectors realize an image rate of over 100 Hz while... panorama image by image fusion of several sensor components. EO designators are distinguished through their agility and increased resolution
Joint interpretation of geophysical data using Image Fusion techniques
NASA Astrophysics Data System (ADS)
Karamitrou, A.; Tsokas, G.; Petrou, M.
2013-12-01
Joint interpretation of geophysical data produced by different methods is a challenging area of research in a wide range of applications. In this work we apply several image fusion approaches to combine maps of electrical resistivity, electromagnetic conductivity, vertical gradient of the magnetic field, magnetic susceptibility, and ground penetrating radar reflections, in order to detect archaeological relics. We utilize data gathered by Arkansas University, with the support of the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP-CS1263). The area of investigation is Army City, situated in Riley County, Kansas, USA. The depth of the relics is estimated at about 30 cm from the surface, yet the surface indications of their existence are limited. We initially register the images from the different methods to correct for random offsets due to the use of hand-held devices during the measurement procedure. Next, we apply four different image fusion approaches to create combined images, using fusion with mean values, wavelet decomposition, the curvelet transform, and the curvelet transform enhancing the images along specific angles. We create seven combinations of pairs from the available geophysical datasets. The combinations are such that for every pair at least one high-resolution method (resistivity or magnetic gradiometry) is included. Our results indicate that in almost every case the method of mean values produces satisfactory fused images that incorporate the majority of the features of the initial images. However, the contrast of the final image is reduced, and in some cases the averaging process nearly eliminated features that are faint in the original images. Wavelet-based fusion also produces good results, providing additional control in selecting the feature wavelength. Curvelet-based fusion proved to be the most effective method in most cases.
The ability of curvelet domain to unfold the image in terms of space, wavenumber, and orientation, provides important advantages compared with the rest of the methods by allowing the incorporation of a-priori information about the orientation of the potential targets.
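The wavelet-domain fusion rule used in such comparisons (average the approximation band, keep the larger-magnitude detail coefficients) can be sketched with a one-level Haar transform standing in for the full wavelet or curvelet machinery; the Haar filters and single decomposition level are simplifications for illustration.

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform → approximation + 3 detail bands."""
    L = (x[:, 0::2] + x[:, 1::2]) / 2
    H = (x[:, 0::2] - x[:, 1::2]) / 2
    split = lambda s: ((s[0::2] + s[1::2]) / 2, (s[0::2] - s[1::2]) / 2)
    (LL, LH), (HL, HH) = split(L), split(H)
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    def up_rows(a, d):
        y = np.empty((2 * a.shape[0], a.shape[1]))
        y[0::2], y[1::2] = a + d, a - d
        return y
    L, H = up_rows(LL, LH), up_rows(HL, HH)
    x = np.empty((L.shape[0], 2 * L.shape[1]))
    x[:, 0::2], x[:, 1::2] = L + H, L - H
    return x

def wavelet_fuse(a, b):
    """Average approximations; max-abs rule on detail coefficients."""
    A, B = haar2(a), haar2(b)
    LL = (A[0] + B[0]) / 2
    det = [np.where(np.abs(da) >= np.abs(db), da, db)
           for da, db in zip(A[1:], B[1:])]
    return ihaar2(LL, *det)

rng = np.random.default_rng(1)
x = rng.random((8, 8))
same = wavelet_fuse(x, x)   # fusing an image with itself reconstructs it
```

Curvelet-based fusion follows the same select-the-stronger-coefficient logic but in a transform whose atoms are additionally localized in orientation, which is what enables the angle-selective enhancement described above.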
Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong
2015-01-01
Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change feature and the difficulty of change decision in utilizing the multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., in what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes changed), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of the classification robust to the registration noise and the multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on sparse change descriptor and robust discriminative dictionary learning. Sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by the superpixel-level cosparse representation with robust discriminative dictionary and the conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
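A common way to realize such a scaling-factor trade-off is high-pass detail injection: the PAN high frequencies are added to the resampled TIR image with a weight that balances spatial detail against thermal fidelity. The sketch below is an assumed, generic formulation, not the exact Landsat 8 method of the study; `fuse_pan_tir` and its box-blur low-pass are illustrative:

```python
import numpy as np

def fuse_pan_tir(tir_up, pan, s=0.3):
    """High-pass detail injection with a scaling factor (sketch).

    `tir_up` is the TIR image resampled to the PAN grid. The PAN
    high-frequency detail is added, weighted by `s`: s=0 keeps pure
    thermal values, larger s adds spatial detail at the cost of
    perturbing the radiometry.
    """
    # 3x3 box blur as a simple low-pass filter.
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(pan.astype(float), 1, mode="edge")
    low = sum(pad[i:i + pan.shape[0], j:j + pan.shape[1]] * k[i, j]
              for i in range(3) for j in range(3))
    return tir_up + s * (pan - low)
```

With a featureless PAN image the high-pass term vanishes and the thermal values pass through unchanged, which makes the role of `s` easy to verify.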
High-Level Data Fusion Software for SHOALS-1000TH FY07 Annual Report
2007-01-01
This survey covered the lakeside town of Alpena, Michigan, and the shoreline of Lake Huron. Additionally, a small set of ground reflectance...Figure 2. SHOALS green laser reflectance image of the eastern part of Alpena, Michigan, and the shoreline of Thunder Bay
Apfelbeck, M; Clevert, D-A; Ricke, J; Stief, C; Schlenker, B
2018-01-01
Reduced acceptance of radical prostatectomy in patients with low-risk or intermediate-risk prostate cancer has significantly changed treatment strategies in prostate cancer (PCa) during the last years. Focal therapy of the prostate with high intensity focused ultrasound (HIFU) is an organ-preserving treatment for prostate cancer with less impairment of health-related quality of life. Follow-up after HIFU therapy by imaging modalities remains a major problem, as, for example, MRI performs poorly. Contrast-enhanced ultrasound (CEUS) allows non-invasive monitoring of the vascular architecture of organs. However, only limited data are available on using CEUS to define successful and complete HIFU treatment of the prostate. In this study, we aimed to evaluate short-term imaging findings using CEUS and image fusion before and after HIFU treatment. Prospective single-arm study in patients with uni- or bilateral, low- or intermediate-risk prostate cancer, or recurrent cancer after radiotherapy, treated with HIFU at our institution between October 2016 and November 2017. HIFU hemiablation or whole-gland treatment was performed using the Focal One® device. PCa was diagnosed either by multiparametric magnetic resonance imaging (mpMRI) followed by MRI fusion-based targeted biopsy combined with 12-core transrectal ultrasound (TRUS) guided biopsy, or by 12-core random biopsy only. Monitoring of the target region before, immediately after, and 24 hours after the ablation was done by CEUS in combination with image fusion using an axial T2-weighted MRI sequence. Six consecutive patients with Gleason score (GS) 6 prostate cancer, 5 patients with GS 7a prostate cancer, and one patient with biochemical recurrence after radiotherapy were included in the study. Three patients underwent whole-gland treatment due to histologically proven bilateral PCa or recurrent PCa after radiotherapy. Hemiablation was performed in 9 patients with a unilateral tumor and no PIRADS 4 or 5 lesion in the contralateral lobe.
Median patient age was 69.8 years and median PSA (prostate-specific antigen) level was 8.4 ng/ml. CEUS showed markedly reduced microbubbles in the ablated area, while the prostate capsule still showed signs of perfusion. The study is limited by the short follow-up and the small number of patients. CEUS examination showed a reduction of microcirculation in the treated area immediately after the treatment and 24 hours later. The combination of CEUS and image fusion appears helpful for detecting the PCa target lesion and monitoring the success of HIFU ablation treatment. Evidence on imaging findings after HIFU therapy is rare. Further studies on this topic are needed.
Value of Image Fusion in Coronary Angiography for the Detection of Coronary Artery Bypass Grafts.
Plessis, Julien; Warin Fresse, Karine; Cahouch, Zachary; Manigold, Thibaut; Letocart, Vincent; Le Gloan, Laurianne; Guyomarch, Béatrice; Guerin, Patrice
2016-06-10
Coronary angiography is more complex in patients with coronary artery bypass grafts (CABG). Image fusion is a new technology that allows the overlay of a computed tomography (CT) three-dimensional (3D) model onto fluoroscopic images in real time. This single-center prospective study included 66 previous CABG patients undergoing coronary and bypass graft angiography. Image fusion coronary angiographies (fusion group, 20 patients) were compared to conventional coronary angiographies (control group, 46 patients). The fusion group included patients for whom a previous chest CT scan with contrast was available. For patients in this group, the aorta and CABG were reconstructed in 3D from CT acquisitions and merged in real time with fluoroscopic images. The following parameters were compared: time needed to localize the CABG; procedure duration; air kerma (AK); dose area product (DAP); and volume of contrast media injected. Results are expressed as medians. There were no significant differences between the 2 groups in patient demographics and procedure characteristics (access site, number of bypasses to be found, and interventional cardiologist's experience). The time to localize CABG was significantly shorter in the fusion group (7.3 versus 12.4 minutes; P=0.002), as was the procedure duration (20.6 versus 25.6 minutes; P=0.002), AK (610 versus 814 mGy; P=0.02), DAP (4390 versus 5922.5 cGy·cm²; P=0.02), and volume of iodinated contrast media (85 versus 116 cc; P=0.002). 3D image fusion improves CABG detection in coronary angiography and reduces the time necessary to localize CABG, total procedure duration, radiation exposure, and volume of contrast media. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Dombrowski, Malcolm E; Rynearson, Bryan; LeVasseur, Clarissa; Adgate, Zach; Donaldson, William F; Lee, Joon Y; Aiyangar, Ameet; Anderst, William J
2018-04-01
Degenerative spondylolisthesis (DS) in the setting of symptomatic lumbar spinal stenosis is commonly treated with spinal fusion in addition to decompression with laminectomy. However, recent studies have shown similar clinical outcomes after decompression alone, suggesting that a subset of DS patients may not require spinal fusion. Identification of dynamic instability could prove useful for predicting which patients are at higher risk of post-laminectomy destabilization necessitating fusion. The goal of this study was to determine if static clinical radiographs adequately characterize dynamic instability in patients with lumbar degenerative spondylolisthesis (DS) and to compare the rotational and translational kinematics in vivo during continuous dynamic flexion activity in DS versus asymptomatic age-matched controls. Seven patients with symptomatic single-level lumbar DS (6 M, 1 F; 66 ± 5.0 years) and seven age-matched asymptomatic controls (5 M, 2 F; age 63.9 ± 6.4 years) underwent biplane radiographic imaging during continuous torso flexion. A volumetric model-based tracking system was used to track each vertebra in the radiographic images using subject-specific 3D bone models from high-resolution computed tomography (CT). In vivo continuous dynamic sagittal rotation (flexion/extension) and AP translation (slip) were calculated and compared to clinical measures of intervertebral flexion/extension and AP translation obtained from standard lateral flexion/extension radiographs. Static clinical radiographs underestimate the degree of AP translation seen on dynamic in vivo imaging (1.0 vs 3.1 mm; p = 0.03). DS patients demonstrated three primary motion patterns compared to a single kinematic pattern in asymptomatic controls when analyzing continuous dynamic in vivo imaging. 3/7 (43%) of patients with DS demonstrated aberrant mid-range motion.
Continuous in vivo dynamic imaging in DS reveals a spectrum of aberrant motion with significantly greater kinematic heterogeneity than previously realized that is not readily seen on current clinical imaging. Level V data. These slides can be retrieved under Electronic Supplementary Material.
NASA Astrophysics Data System (ADS)
Wang, Y.; Tobias, B.; Chang, Y.-T.; Yu, J.-H.; Li, M.; Hu, F.; Chen, M.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Gu, J.; Liu, X.; Zhu, Y.; Domier, C. W.; Shi, L.; Valeo, E.; Kramer, G. J.; Kuwahara, D.; Nagayama, Y.; Mase, A.; Luhmann, N. C., Jr.
2017-07-01
Electron cyclotron emission (ECE) imaging is a passive radiometric technique that measures electron temperature fluctuations; and microwave imaging reflectometry (MIR) is an active radar imaging technique that measures electron density fluctuations. Microwave imaging diagnostic instruments employing these techniques have made important contributions to fusion science and have been adopted at major fusion facilities worldwide including DIII-D, EAST, ASDEX Upgrade, HL-2A, KSTAR, LHD, and J-TEXT. In this paper, we describe the development status of three major technological advancements: custom mm-wave integrated circuits (ICs), digital beamforming (DBF), and synthetic diagnostic modeling (SDM). These have the potential to greatly advance microwave fusion plasma imaging, enabling compact and low-noise transceiver systems with real-time, fast tracking ability to address critical fusion physics issues, including ELM suppression and disruptions in the ITER baseline scenario, naturally ELM-free states such as QH-mode, and energetic particle confinement (i.e. Alfvén eigenmode stability) in high-performance regimes that include steady-state and advanced tokamak scenarios. Furthermore, these systems are fully compatible with today’s most challenging non-inductive heating and current drive systems and capable of operating in harsh environments, making them the ideal approach for diagnosing long-pulse and steady-state tokamaks.
Research on polarization imaging information parsing method
NASA Astrophysics Data System (ADS)
Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong
2016-11-01
Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. Then the research achievements on polarization information parsing are presented. In terms of polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. In terms of calculating multiple polarization parameters, an omnidirectional polarization inversion model is built, a variety of polarization parameter images are obtained, and the inversion precision is noticeably improved. In terms of polarization image fusion, using fuzzy integrals and sparse representation, an adaptive optimal fusion method for multiple polarization parameters is given, and target detection in complex scenes is accomplished using a clustering image segmentation algorithm based on fractal characteristics. In terms of polarization image tracking, a fusion tracking algorithm combining mean-shift polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflage targets, fog, and latent fingerprints.
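The "multiple polarization parameters calculation" step usually starts from the Stokes formalism. The following is the textbook computation of the degree and angle of linear polarization from four polarizer-angle images, not the omnidirectional inversion model of the paper:

```python
import numpy as np

def polarization_parameters(i0, i45, i90, i135):
    """Standard Stokes-based polarization parameter images (sketch).

    From intensity images taken behind a linear polarizer at 0, 45, 90,
    and 135 degrees, compute the linear Stokes components and derive the
    degree of linear polarization (DoLP) and angle of polarization (AoP).
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # diagonal preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)
    return dolp, aop

# Fully horizontally polarized light: all intensity passes at 0 degrees.
dolp, aop = polarization_parameters(np.array([1.0]), np.array([0.5]),
                                    np.array([0.0]), np.array([0.5]))
```

For fully horizontally polarized light the computation gives DoLP = 1 and AoP = 0, a quick sanity check for any implementation of this step.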
Strain on intervertebral discs after anterior cervical decompression and fusion.
Matsunaga, S; Kabayama, S; Yamamoto, T; Yone, K; Sakou, T; Nakanishi, K
1999-04-01
An analysis of the change in strain distribution of intervertebral discs present after anterior cervical decompression and fusion, using an original method. The analytical results were compared with the occurrence of herniation of the intervertebral disc on magnetic resonance imaging. To elucidate the influence of anterior cervical decompression and fusion on the unfused segments of the spine. There is no consensus regarding the exact significance of the biomechanical change in the unfused segment present after surgery. Ninety-six patients subjected to anterior cervical decompression and fusion for herniation of intervertebral discs were examined. Shear strain and longitudinal strain of intervertebral discs were analyzed on pre- and postoperative lateral dynamic routine radiography of the cervical spine. Thirty of the 96 patients were examined by magnetic resonance imaging before and after surgery, and the relation between alteration in strains and postsurgical occurrence of disc herniation was examined. In cases of double- or triple-level fusion, shear strain of adjacent segments had increased by 20% on average 1 year after surgery. Thirteen intervertebral discs that had an abnormally high degree of strain showed an increase in longitudinal strain after surgery. Eleven (85%) of the 13 discs that showed an abnormal increase in longitudinal strain had herniation in the same intervertebral discs with compression of the spinal cord during the follow-up period. Relief of symptoms was significantly poorer in the patients with recent herniation. Close attention should be paid to long-term biomechanical changes in the unfused segment.
Moche, M; Busse, H; Dannenberg, C; Schulz, T; Schmitgen, A; Trantakis, C; Winkler, D; Schmidt, F; Kahn, T
2001-11-01
The aim of this work was to realize and clinically evaluate an image fusion platform for the integration of preoperative MRI and fMRI data into the intraoperative images of an interventional MRI system, with a focus on neurosurgical procedures. A vertically open 0.5 T MRI scanner was equipped with a dedicated navigation system enabling the registration of additional imaging modalities (MRI, fMRI, CT) with the intraoperatively acquired data sets. These merged image data served as the basis for interventional planning and multimodal navigation. So far, the system has been used in 70 neurosurgical interventions (13 of which involved image data fusion, requiring 15 minutes of extra time). The augmented navigation system is characterized by a higher frame rate and higher image quality compared with the system-integrated navigation based on continuously acquired (near) real-time images. Patient movement and tissue shifts can be immediately detected by monitoring the morphological differences between the two navigation scenes. The multimodal image fusion allowed refined navigation planning, especially for the resection of deeply seated brain lesions or pathologies close to eloquent areas. Augmented intraoperative orientation and instrument guidance improve the safety and accuracy of neurosurgical interventions.
A novel false color mapping model-based fusion method of visual and infrared images
NASA Astrophysics Data System (ADS)
Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu
2013-12-01
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric mean of the input visual and infrared image gray levels and a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image, enhancing color contrasts and highlighting bright infrared objects compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to realize in real-time processing, making it well suited for nighttime imaging apparatus.
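To make the ingredients concrete, here is a hypothetical toy mapping that combines the geometric-mean and weighted-average terms named in the abstract; the paper's actual model and boundary conditions are not reproduced, and `false_color_fuse` and its channel assignments are purely illustrative:

```python
import numpy as np

def false_color_fuse(vis, ir, w=0.5):
    """Toy nonlinear false-color mapping (assumption-laden sketch).

    The green channel is built from the geometric mean of the visual and
    IR gray levels combined with a weighted average, while the red and
    blue channels carry the signed IR-visual difference so that contrast
    between the two bands is enhanced. Inputs are assumed in [0, 1].
    """
    vis = vis.astype(float); ir = ir.astype(float)
    g = np.sqrt(vis * ir)                           # geometric mean term
    base = w * vis + (1.0 - w) * ir                 # weighted average term
    r = np.clip(base + 0.5 * (ir - vis), 0.0, 1.0)  # IR-dominant regions pushed red
    b = np.clip(base + 0.5 * (vis - ir), 0.0, 1.0)  # visual-dominant regions pushed blue
    green = np.clip(0.5 * (g + base), 0.0, 1.0)
    return np.stack([r, green, b], axis=-1)
```

With this choice, a hot IR-only target comes out strongly red while a visually bright, thermally cold region trends blue, illustrating the "highlight the infrared objects" behavior described in the abstract.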
A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.
Guo, Lu; Shen, Shuming; Harris, Eleanor; Wang, Zheng; Jiang, Wei; Guo, Yu; Feng, Yuanming
2014-01-01
To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new tri-modality image fusion method was developed that can fuse and display all image sets in one panel and one operation. A feasibility study of gross tumor volume (GTV) delineation was then conducted using data from three patients with brain tumors, including simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) examinations before radiotherapy. Tri-modality image fusion was implemented after image registration of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion, respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (± 0.09) and 0.07 (± 0.01) for dual-modality and tri-modality, respectively; the standard deviation of ADSC was significantly reduced (p < 0.05) with tri-modality; and SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, which improves the consistency and accuracy of target delineation in individualized radiotherapy.
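The adjustable per-modality transparency described above amounts to a normalized weighted blend of co-registered images. A minimal sketch, with the function name and weights chosen for illustration only:

```python
import numpy as np

def blend_modalities(layers, weights):
    """Transparency-weighted overlay of co-registered modalities (sketch).

    `layers` are grayscale CT/MRI/PET images on a common grid; `weights`
    are user-adjustable transparency weights, normalized so the blended
    display stays within the input intensity range.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize user weights
    stack = np.stack([im.astype(float) for im in layers])
    return np.tensordot(w, stack, axes=1)  # weighted sum over modalities

ct  = np.full((4, 4), 0.2)
mri = np.full((4, 4), 0.6)
pet = np.full((4, 4), 1.0)
# Emphasize PET twice as strongly as CT or MRI.
panel = blend_modalities([ct, mri, pet], [1.0, 1.0, 2.0])
```

Raising one weight smoothly pulls the displayed panel toward that modality, which is the interactive adjustment the abstract describes.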
Ohtori, Seiji; Orita, Sumihisa; Yamauchi, Kazuyo; Eguchi, Yawara; Aoki, Yasuchika; Nakamura, Junichi; Miyagi, Masayuki; Suzuki, Miyako; Kubota, Gou; Inage, Kazuhide; Sainoh, Takeshi; Sato, Jun; Fujimoto, Kazuki; Shiga, Yasuhiro; Abe, Koki; Kanamoto, Hiroto; Inoue, Gen; Takahashi, Kazuhisa; Furuya, Takeo; Koda, Masao
2017-02-01
Retrospective case series. The purpose of this study was to examine changes in the ligamentum flavum thickness and remodeling of the spinal canal after anterior fusion during a 10-year follow-up. Extreme lateral interbody fusion provides minimally invasive treatment of the lumbar spine; this anterior fusion without direct posterior decompression, so-called indirect decompression, can achieve pain relief. Anterior fusion may restore disc height, stretch the flexure of the ligamentum flavum, and increase the spinal canal diameter. However, changes in the ligamentum flavum thickness and remodeling of the spinal canal after anterior fusion during a long follow-up have not yet been reported. We evaluated 10 patients with L4 spondylolisthesis who underwent stand-alone anterior interbody fusion using the iliac crest bone. Magnetic resonance imaging was performed 10 years after surgery. The cross-sectional area (CSA) of the dural sac and the ligamentum flavum at L1-2 to L5-S1 was calculated using a Picture Archiving and Communication System. Spinal fusion with correction loss (average, 4.75 mm anterior slip) was achieved in all patients 10 years postsurgery. The average CSAs of the dural sac and the ligamentum flavum at L1-2 to L5-S1 were 150 mm² and 78 mm², respectively. The average CSA of the ligamentum flavum at L4-5 (30 mm²) (fusion level) was significantly less than that at L1-2 to L3-4 or L5-S1. Although patients had an average anterior slip of 4.75 mm, the average CSA of the dural sac at L4-5 was significantly larger than at the other levels. Spinal stability induced a lumbar ligamentum flavum change and a sustained remodeling of the spinal canal, which may explain the long-term pain relief after indirect decompression fusion surgery.
Schwein, Adeline; Lu, Tony; Chinnadurai, Ponraj; Kitkungvan, Danai; Shah, Dipan J; Chakfe, Nabil; Lumsden, Alan B; Bismuth, Jean
2017-01-01
Endovascular recanalization is considered first-line therapy for chronic central venous occlusion (CVO). Unlike arteries, in which landmarks such as wall calcifications provide indirect guidance for endovascular navigation, sclerotic veins without known vascular branching patterns impose significant challenges. Therefore, safe wire access through such chronic lesions mostly relies on intuition and experience. Studies have shown that magnetic resonance venography (MRV) can be performed safely in these patients, and the boundaries of occluded veins may be visualized on specific MRV sequences. Intraoperative image fusion techniques have become more common to guide complex arterial endovascular procedures. The aim of this study was to assess the feasibility and utility of MRV and intraoperative cone-beam computed tomography (CBCT) image fusion technique during endovascular CVO recanalization. During the study period, patients with symptomatic CVO and failed standard endovascular recanalization underwent further recanalization attempts with use of intraoperative MRV image fusion guidance. After preoperative MRV and intraoperative CBCT image coregistration, a virtual centerline path of the occluded segment was electronically marked in MRV and overlaid on real-time two-dimensional fluoroscopy images. Technical success, fluoroscopy times, radiation doses, number of venograms before recanalization, and accuracy of the virtual centerline overlay were evaluated. Four patients underwent endovascular CVO recanalization with use of intraoperative MRV image fusion guidance. Mean (± standard deviation) time for image fusion was 6:36 ± 00:51 mm:ss. The lesion was successfully crossed in all patients without complications. Mean fluoroscopy time for lesion crossing was 12.5 ± 3.4 minutes. Mean total fluoroscopy time was 28.8 ± 6.5 minutes. 
Mean total radiation dose was 15,185 ± 7747 μGy/m², and mean radiation dose from CBCT acquisition was 2788 ± 458 μGy/m² (18% of mean total radiation dose). Mean number of venograms before recanalization was 1.6 ± 0.9, whereas two lesions were crossed without any prior venography. On qualitative analysis, virtual centerlines from MRV were aligned with the actual guidewire trajectory on fluoroscopy in all four cases. MRV image fusion is feasible and may improve success, safety, and the surgeon's confidence during CVO recanalization. Similar to arterial interventions, three-dimensional MRV imaging and image fusion techniques could foster innovative solutions for such complex venous interventions and have the potential to affect a great number of patients. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Practical three color live cell imaging by widefield microscopy
Xia, Jianrun; Kim, Song Hon H.; Macmillan, Susan
2006-01-01
Live cell fluorescence microscopy using fluorescent protein tags derived from jellyfish and coral species has been a successful tool to image proteins and dynamics in many species. Multi-colored Aequorea fluorescent protein (AFP) derivatives allow investigators to observe multiple proteins simultaneously, but overlapping spectral properties sometimes require the use of sophisticated and expensive microscopes. Here, we show that the Aequorea coerulescens fluorescent protein derivative PS-CFP2 has excellent practical properties as a blue fluorophore, distinct from green or red fluorescent proteins, and can be imaged with standard filter sets on a widefield microscope. We also find that, under widefield illumination in live cells, PS-CFP2 is very photostable. When fused to proteins that form concentrated puncta in either the cytoplasm or nucleus, PS-CFP2 fusions do not artifactually interact with other AFP fusion proteins, even at very high levels of over-expression. PS-CFP2 is therefore a good blue fluorophore for distinct three-color imaging along with eGFP and mRFP using a relatively simple and inexpensive microscope. PMID:16909160
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
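One simple way to consume n correspondence hypotheses, shown here as an assumption rather than the network's actual selection mechanism, is winner-take-all selection by photometric error against the target image:

```python
import numpy as np

def best_hypothesis(target, predictions):
    """Winner-take-all selection over n correspondence hypotheses (sketch).

    Given the target image and a list of n predicted warps of the source
    image, keep the prediction with the lowest mean squared photometric
    error and return its index and error.
    """
    errors = [float(np.mean((target.astype(float) - p) ** 2))
              for p in predictions]
    k = int(np.argmin(errors))
    return k, errors[k]
```

The cheap per-hypothesis scoring is what makes generating many hypotheses affordable at the high rates quoted in the abstract.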
Rational Design of a Triple Reporter Gene for Multimodality Molecular Imaging
Hsieh, Ya-Ju; Ke, Chien-Chih; Yeh, Skye Hsin-Hsien; Lin, Chien-Feng; Chen, Fu-Du; Lin, Kang-Ping; Chen, Ran-Chou; Liu, Ren-Shyan
2014-01-01
Multimodality imaging using noncytotoxic triple fusion (TF) reporter genes is an important application for cell-based tracking, drug screening, and therapy. The firefly luciferase (fl), monomeric red fluorescence protein (mrfp), and truncated herpes simplex virus type 1 thymidine kinase SR39 mutant (ttksr39) genes were fused together to create TF reporter constructs in different orders. The enzymatic activities of the TF protein in vitro and in vivo were determined by luciferase reporter assay, H-FEAU cellular uptake experiments, bioluminescence imaging, and micro-positron emission tomography (microPET). The TF construct expressed in H1299 cells possesses luciferase activity and red fluorescence. The tTKSR39 activity is preserved in the TF protein and mediates high levels of H-FEAU accumulation and significant cell death from ganciclovir (GCV) prodrug activation. In living animals, the luciferase and tTKSR39 activities of the TF protein have also been successfully validated by multimodality imaging systems. The red fluorescence signal is relatively weak for in vivo imaging but may expedite FACS-based selection of TF reporter-expressing cells. We have developed an optimized triple fusion reporter construct, DsRedm-fl-ttksr39, for more effective and sensitive in vivo animal imaging using fluorescence, bioluminescence, and PET imaging modalities, which may facilitate different fields of biomedical research and applications. PMID:24809057
Faust, James J.; Christenson, Wayne; Doudrick, Kyle; Ros, Robert
2017-01-01
Implantation of synthetic material, including vascular grafts, pacemakers, etc., results in the foreign body reaction and the formation of multinucleated giant cells (MGCs) at the exterior surface of the implant. Despite the long-standing premise that fusion of mononucleated macrophages results in the formation of MGCs, to date, no published study has shown fusion in the context of living specimens. This is due to the fact that optical-quality glass, which is required for the majority of live imaging techniques, does not promote macrophage fusion. Consequently, the morphological changes that macrophages undergo during fusion as well as the mechanisms that govern this process remain ill-defined. In this study, we serendipitously identified a highly fusogenic glass surface and discovered that the capacity to promote fusion was due to oleamide contamination. When adsorbed on glass, oleamide and other molecules that contain long-chain hydrocarbons promoted high levels of macrophage fusion. Adhesion, an essential step for macrophage fusion, was apparently mediated by Mac-1 integrin (CD11b/CD18, αMβ2) as determined by single cell force spectroscopy and adhesion assays. Micropatterned glass further increased fusion and enabled a remarkable degree of spatiotemporal control over MGC formation. Using these surfaces, we reveal the kinetics that govern MGC formation in vitro. We anticipate that the spatiotemporal control afforded by these surfaces will expedite studies designed to identify the mechanism(s) of macrophage fusion and MGC formation, with implications for the design of novel biomaterials. PMID:28340410
Quantitative multi-modal NDT data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heideklang, René; Shokouhi, Parisa
2014-02-18
A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics, and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR, and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
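A high-level (decision-level) fusion scheme of the kind described can be sketched as score normalization, averaging, and thresholding. This is a generic illustration under assumed inputs, not the authors' exact pipeline:

```python
import numpy as np

def fuse_detections(score_maps, threshold=0.5):
    """High-level fusion of single-sensor defect scores (sketch).

    Each sensor's detection map is min-max normalized so the modalities
    are comparable, the normalized scores are averaged, and a single
    threshold yields the fused defect map. Averaging exploits redundancy:
    a defect indicated by several sensors survives, while an isolated
    false alarm in one sensor is suppressed, improving specificity.
    """
    norm = []
    for m in score_maps:
        m = m.astype(float)
        rng = m.max() - m.min()
        norm.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    combined = np.mean(norm, axis=0)
    return combined > threshold

# Toy 1D scores: position 0 is a real groove seen by all three sensors,
# position 1 is a false alarm in the eddy current channel only.
eddy   = np.array([1.0, 1.0, 0.0])
gmr    = np.array([1.0, 0.0, 0.0])
thermo = np.array([1.0, 0.0, 0.0])
fused = fuse_detections([eddy, gmr, thermo])
```

The single-sensor false alarm falls below the fused threshold while the redundant detection survives, which is the specificity gain the abstract reports.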
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological images (microPET) of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the rigid body alignment problem of Procrustes to obtain rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between different structures, showing a calculated target-registration-error of 0.32 mm.
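The rigid-body Procrustes alignment mentioned in the abstract above reduces, for matched fiducial-marker coordinates, to the Kabsch algorithm. A minimal sketch (illustrative only, not the authors' implementation; the function name is hypothetical):

```python
import numpy as np

def rigid_register(src, dst):
    """Solve the rigid-body Procrustes problem: find rotation R and
    translation t minimizing sum ||R @ src_i + t - dst_i||^2 for
    corresponding fiducial points src, dst of shape (n, 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Kabsch algorithm: SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the recovered R and t to the moving image's marker coordinates aligns the two modalities; the residual distances at the markers give a fiducial registration error analogous to the target-registration-error reported above.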
NASA Astrophysics Data System (ADS)
Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.
1994-09-01
A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.
Yu, Yao; Zhang, Wen-Bo; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin
2017-06-01
The purpose of this study was to describe new technology assisted by 3-dimensional (3D) image fusion of 18 F-fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) and contrast-enhanced CT (CECT) for computer planning of a maxillectomy of recurrent maxillary squamous cell carcinoma and defect reconstruction. Treatment of recurrent maxillary squamous cell carcinoma usually includes tumor resection and free flap reconstruction. FDG-PET/CT provided images of regions of abnormal glucose uptake and thus showed metabolic tumor volume to guide tumor resection. CECT data were used to create 3D reconstructed images of vessels to show the vascular diameters and locations, so that the most suitable vein and artery could be selected during anastomosis of the free flap. The data from preoperative maxillofacial CECT scans and FDG-PET/CT imaging were imported into the navigation system (iPlan 3.0; Brainlab, Feldkirchen, Germany). Three-dimensional image fusion between FDG-PET/CT and CECT was accomplished using Brainlab software according to the position of the 2 skulls simulated in the CECT image and PET/CT image, respectively. After verification of the image fusion accuracy, the 3D reconstruction images of the metabolic tumor, vessels, and other critical structures could be visualized within the same coordinate system. These sagittal, coronal, axial, and 3D reconstruction images were used to determine the virtual osteotomy sites and reconstruction plan, which was provided to the surgeon and used for surgical navigation. The average shift of the 3D image fusion between FDG-PET/CT and CECT was less than 1 mm. This technique, by clearly showing the metabolic tumor volume and the most suitable vessels for anastomosis, facilitated resection and reconstruction of recurrent maxillary squamous cell carcinoma. We used 3D image fusion of FDG-PET/CT and CECT to successfully accomplish resection and reconstruction of recurrent maxillary squamous cell carcinoma. 
This method has the potential to improve the clinical outcomes of these challenging procedures. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Oneil, William F.
1993-01-01
The fusion of radar and electro-optic (E-O) sensor images presents unique challenges. The two sensors measure different properties of the real three-dimensional (3-D) world. Forming the sensor outputs into a common format does not mask these differences. In this paper, the conditions under which fusion of the two sensor signals is possible are explored. The program currently planned to investigate this problem is briefly discussed.
Garcia-Reyes, Kirema; Nguyen, Hao G; Zagoria, Ronald J; Shinohara, Katsuto; Carroll, Peter R; Behr, Spencer C; Westphalen, Antonio C
2017-09-20
The purpose of this study was to estimate the impact of lesion visibility with transrectal ultrasound on the prediction of clinically significant prostate cancer with transrectal ultrasound-magnetic resonance imaging fusion biopsy. This HIPAA (Health Insurance Portability and Accountability Act) compliant, institutional review board approved, retrospective study was performed in 178 men who were 64.7 years old with prostate specific antigen 8.9 ng/ml. They underwent transrectal ultrasound-magnetic resonance imaging fusion biopsy from January 2013 to September 2016. Visible lesions on magnetic resonance imaging were assigned a PI-RADS™ (Prostate Imaging Reporting and Data System), version 2 score of 3 or greater. Transrectal ultrasound was positive when a hypoechoic lesion was identified. We used a 3-level, mixed effects logistic regression model to determine how transrectal ultrasound-magnetic resonance imaging concordance predicted the presence of clinically significant prostate cancer. The diagnostic performance of the 2 methods was estimated using ROC curves. A total of 1,331 sextants were targeted by transrectal ultrasound-magnetic resonance imaging fusion or systematic biopsies, of which 1,037 were negative, 183 were Gleason score 3 + 3 and 111 were Gleason score 3 + 4 or greater. Clinically significant prostate cancer was diagnosed by transrectal ultrasound and magnetic resonance imaging alone at 20.5% and 19.7% of these locations, respectively. Men with positive imaging had higher odds of clinically significant prostate cancer than men without visible lesions regardless of modality (transrectal ultrasound OR 14.75, 95% CI 5.22-41.69, magnetic resonance imaging OR 12.27, 95% CI 6.39-23.58 and the 2 modalities OR 28.68, 95% CI 14.45-56.89, all p <0.001). 
The ROC AUC to detect clinically significant prostate cancer using the 2 methods (0.85, 95% CI 0.81-0.89) was statistically greater than that of transrectal ultrasound alone (0.80, 95% CI 0.76-0.85, p = 0.001) and magnetic resonance imaging alone (0.83, 95% CI 0.79-0.87, p = 0.04). The sensitivity and specificity of transrectal ultrasound were 42.3% and 91.6%, and the sensitivity and specificity of magnetic resonance imaging were 62.2% and 84.1%, respectively. Lesion visibility on magnetic resonance imaging or transrectal ultrasound denotes a similar probability of clinically significant prostate cancer. This probability is greater when each examination is positive. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To address the difficulty of preserving scene details while maintaining good visual effects in high dynamic range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space, with the luminance and chrominance components weighted and blended separately. The experimental results show that the method retains the color of the fused image while balancing the details of the bright and dark regions of the high dynamic range image.
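The abstract does not give the exact weighting rule; a minimal sketch of per-pixel weighted blending of YCbCr components, assuming a Mertens-style "well-exposedness" Gaussian weight centered on mid-gray (the sigma value is a placeholder):

```python
import numpy as np

def fuse_exposures_ycbcr(y_stack, cb_stack, cr_stack, sigma=0.2):
    """Per-pixel fusion of an exposure stack in YCbCr: each exposure is
    weighted by a Gaussian 'well-exposedness' score of its luminance
    (peaking at mid-gray 0.5), and Y, Cb, Cr are blended with the same
    normalized weights. Stacks have shape (n_exposures, H, W), values in [0, 1]."""
    w = np.exp(-((y_stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)          # weights sum to 1 per pixel
    fuse = lambda s: (w * s).sum(axis=0)
    return fuse(y_stack), fuse(cb_stack), fuse(cr_stack)
```

Using the same weights for luminance and chrominance keeps the blended color consistent with the exposure that dominates each pixel.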
NASA Astrophysics Data System (ADS)
Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang
2017-12-01
In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
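The robust Hausdorff variant used in the paper is not specified in the abstract; the Dubuisson-Jain "modified" Hausdorff distance is one common robust choice and could be sketched as:

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point (or descriptor)
    sets A (n, d) and B (m, d): the mean nearest-neighbour distance in
    each direction, symmetrized by max. Averaging instead of taking the
    worst-case point makes it far less sensitive to outliers than the
    classical max-min Hausdorff distance."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    d_ab = D.min(axis=1).mean()   # directed distance A -> B
    d_ba = D.min(axis=0).mean()   # directed distance B -> A
    return max(d_ab, d_ba)
```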
Fourier domain image fusion for differential X-ray phase-contrast breast imaging.
Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-04-01
X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method presents complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features contained in the XPC images was present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
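The abstract does not detail the Fourier-domain combination rule; one hedged sketch is a radial split that keeps the low frequencies of the attenuation image (preserving the conventional radiographic appearance) and takes the higher frequencies from the phase and dark-field channels (which carry fine detail). The cutoff and the equal weighting of the two detail channels are assumptions:

```python
import numpy as np

def fourier_fuse(att, phase, dark, cutoff=0.1):
    """Fuse three registered, equal-size XPC channels in the Fourier
    domain: low spatial frequencies from attenuation, high frequencies
    as the average of the phase and dark-field spectra."""
    F = [np.fft.fftshift(np.fft.fft2(im)) for im in (att, phase, dark)]
    h, w = att.shape
    yy = (np.arange(h) - h // 2)[:, None] / (h / 2)
    xx = (np.arange(w) - w // 2)[None, :] / (w / 2)
    low = np.hypot(yy, xx) <= cutoff          # normalized radial mask
    fused = np.where(low, F[0], 0.5 * (F[1] + F[2]))
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))
```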
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
An imaging method of wavefront coding system based on phase plate rotation
NASA Astrophysics Data System (ADS)
Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2018-01-01
Wave-front coding has great promise for extending the depth of field of optical imaging systems and reducing optical aberrations, but image quality and noise performance are inevitably degraded. Based on theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper exploits the fact that the phase function expression is invariant in the new coordinate system when the phase plate is rotated around the z-axis, and proposes a method based on phase plate rotation and image fusion. First, the phase plate is rotated by a certain angle around the z-axis; the shape and distribution of the PSF obtained at the image plane remain unchanged, and its rotation angle and direction are consistent with those of the phase plate. Then, the intermediate blurred image is filtered with the point spread function of the corresponding rotation. Finally, the reconstructed images are fused by the Laplacian pyramid image fusion method and by the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. We used Matlab to simulate the images. With the Laplacian pyramid fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity by 11%-15%, and the average gradient by 4%-9%. With the Fourier transform spectrum fusion method, the signal-to-noise ratio is increased by 14%-23%, the clarity by 6%-11%, and the average gradient by 2%-6%. The experimental results show that the proposed method improves the quality and clarity of the restored image while effectively preserving the image information.
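The Laplacian pyramid fusion step can be sketched as follows, using plain decimation and nearest-neighbour expansion in place of the usual Gaussian reduce/expand filters to stay dependency-free (an illustration of the technique, not the authors' code):

```python
import numpy as np

def _expand(im, shape):
    """Nearest-neighbour 2x upsampling, cropped to a target shape."""
    up = np.repeat(np.repeat(im, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(im, levels):
    """Detail images at each scale plus a coarse residual."""
    pyr, cur = [], im
    for _ in range(levels):
        small = cur[::2, ::2]
        pyr.append(cur - _expand(small, cur.shape))
        cur = small
    pyr.append(cur)
    return pyr

def fuse_laplacian(im1, im2, levels=3):
    """Fuse two registered images: keep the larger-magnitude detail
    coefficient at each level, average the coarse residuals, then
    collapse the fused pyramid back to an image."""
    p1, p2 = laplacian_pyramid(im1, levels), laplacian_pyramid(im2, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(p1[:-1], p2[:-1])]
    out = 0.5 * (p1[-1] + p2[-1])
    for lap in reversed(fused):
        out = _expand(out, lap.shape) + lap
    return out
```

With proper Gaussian filtering in reduce/expand, this is the classical Burt-Adelson pyramid fusion; the max-magnitude rule keeps the sharpest detail from either input.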
A new method of Quickbird own image fusion
NASA Astrophysics Data System (ADS)
Han, Ying; Jiang, Hong; Zhang, Xiuying
2009-10-01
With the rapid development of remote sensing technology, the means of acquiring remote sensing data have become increasingly abundant, so that the same area can yield a large sequence of multi-temporal images at different resolutions. The main fusion methods at present are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, wavelet transforms, and so on. The IHS transform introduces serious spectral distortion; the Mallat algorithm omits the low-frequency information of the high spatial resolution image, and its fusion results show obvious blocking artifacts. Wavelet multi-scale decomposition handles different scales, directions, details and edges well, but different fusion rules and algorithms achieve different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on the wavelet transform and HVS with fusion based on the wavelet transform and IHS. The results show that the former performs better. This paper uses the correlation coefficient, the relative average spectral error index, and other common indices to evaluate image quality.
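A minimal version of the correlation coefficient used as a spectral-fidelity index in fusion evaluations of this kind:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between a fused band and a reference band:
    values near 1 indicate the fusion preserved the spectral content."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```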
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
Correa, Nicolle M; Li, Yi-Ou; Adalı, Tülay; Calhoun, Vince D
2008-12-01
Typically data acquired through imaging techniques such as functional magnetic resonance imaging (fMRI), structural MRI (sMRI), and electroencephalography (EEG) are analyzed separately. However, fusing information from such complementary modalities promises to provide additional insight into connectivity across brain networks and changes due to disease. We propose a data fusion scheme at the feature level using canonical correlation analysis (CCA) to determine inter-subject covariations across modalities. As we show both with simulation results and application to real data, multimodal CCA (mCCA) proves to be a flexible and powerful method for discovering associations among various data types. We demonstrate the versatility of the method with application to two datasets, an fMRI and EEG, and an fMRI and sMRI dataset, both collected from patients diagnosed with schizophrenia and healthy controls. CCA results for fMRI and EEG data collected for an auditory oddball task reveal associations of the temporal and motor areas with the N2 and P3 peaks. For the application to fMRI and sMRI data collected for an auditory sensorimotor task, CCA results show an interesting joint relationship between fMRI and gray matter, with patients with schizophrenia showing more functional activity in motor areas and less activity in temporal areas associated with less gray matter as compared to healthy controls. Additionally, we compare our scheme with an independent component analysis based fusion method, joint-ICA that has proven useful for such a study and note that the two methods provide complementary perspectives on data fusion.
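The core of CCA at the feature level is finding one projection per modality whose projected features correlate maximally across subjects. A sketch of the first canonical pair via SVD of the whitened cross-covariance (illustrative only; mCCA's multi-set extensions and the feature-extraction preprocessing are omitted):

```python
import numpy as np

def cca_first_pair(X, Y):
    """First canonical pair for feature matrices X (subjects x p) and
    Y (subjects x q): directions wx, wy maximizing corr(X wx, Y wy).
    Each modality is whitened via its thin SVD, then the canonical
    correlations are the singular values of the cross-product of the
    orthonormal bases."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Ux, Sx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Yc, full_matrices=False)
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    wx = Vxt.T @ (U[:, 0] / Sx)    # un-whiten the leading directions
    wy = Vyt.T @ (Vt[0] / Sy)
    return wx, wy, float(S[0])     # S[0] is the first canonical correlation
```

The canonical variates Xc @ wx and Yc @ wy are the "inter-subject covariations" that the fusion scheme then inspects per modality.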
Guler, N; Volegov, P; Danly, C R; Grim, G P; Merrill, F E; Wilde, C H
2012-10-01
Inertial confinement fusion experiments at the National Ignition Facility are designed to understand the basic principles of creating self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic capsules. The neutron imaging diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by observing neutron images in two different energy bands for primary (13-17 MeV) and down-scattered (6-12 MeV) neutrons. From this, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. These experiments provide small sources with high yield neutron flux. An aperture design that includes an array of pinholes and penumbral apertures has provided the opportunity to image the same source with two different techniques. This allows for an evaluation of these different aperture designs and reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Guler, Nevzat; Aragonez, Robert J.; Archuleta, Thomas N.; Batha, Steven H.; Clark, David D.; Clark, Deborah J.; Danly, Chris R.; Day, Robert D.; Fatherley, Valerie E.; Finch, Joshua P.; Gallegos, Robert A.; Garcia, Felix P.; Grim, Gary; Hsu, Albert H.; Jaramillo, Steven A.; Loomis, Eric N.; Mares, Danielle; Martinson, Drew D.; Merrill, Frank E.; Morgan, George L.; Munson, Carter; Murphy, Thomas J.; Oertel, John A.; Polk, Paul J.; Schmidt, Derek W.; Tregillis, Ian L.; Valdez, Adelaida C.; Volegov, Petr L.; Wang, Tai-Sen F.; Wilde, Carl H.; Wilke, Mark D.; Wilson, Douglas C.; Atkinson, Dennis P.; Bower, Dan E.; Drury, Owen B.; Dzenitis, John M.; Felker, Brian; Fittinghoff, David N.; Frank, Matthias; Liddick, Sean N.; Moran, Michael J.; Roberson, George P.; Weiss, Paul; Buckles, Robert A.; Cradick, Jerry R.; Kaufman, Morris I.; Lutz, Steve S.; Malone, Robert M.; Traille, Albert
2013-11-01
Inertial Confinement Fusion experiments at the National Ignition Facility (NIF) are designed to understand and test the basic principles of self-sustaining fusion reactions by laser driven compression of deuterium-tritium (DT) filled cryogenic plastic (CH) capsules. The experimental campaign is ongoing to tune the implosions and characterize the burning plasma conditions. Nuclear diagnostics play an important role in measuring the characteristics of these burning plasmas, providing feedback to improve the implosion dynamics. The Neutron Imaging (NI) diagnostic provides information on the distribution of the central fusion reaction region and the surrounding DT fuel by collecting images at two different energy bands for primary (13-15 MeV) and downscattered (10-12 MeV) neutrons. From these distributions, the final shape and size of the compressed capsule can be estimated and the symmetry of the compression can be inferred. The first downscattered neutron images from imploding ICF capsules are shown in this paper.
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is fitted by a generalized Gaussian density (GGD) and the similarity of two subbands is computed as the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed for the subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
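The subband similarity measure, the Jensen-Shannon divergence between two fitted GGDs, has no simple closed form; a hedged numerical sketch (the integration grid is an assumption, not the paper's computation):

```python
import numpy as np
from math import gamma

def ggd_pdf(x, alpha, beta):
    """Zero-mean generalized Gaussian density with scale alpha and
    shape beta (beta = 2 gives a Gaussian, beta = 1 a Laplacian)."""
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def js_divergence_ggd(a1, b1, a2, b2):
    """Jensen-Shannon divergence between two GGDs via numerical
    integration on a fixed grid; symmetric and bounded by ln 2."""
    x = np.linspace(-10, 10, 20001)
    dx = x[1] - x[0]
    p, q = ggd_pdf(x, a1, b1), ggd_pdf(x, a2, b2)
    m = 0.5 * (p + q)

    def kl(u, v):
        # integrate u * log(u / v), guarding the u -> 0 limit
        safe = np.where(u > 0, u, 1.0) / np.where(v > 0, v, 1.0)
        return float(np.sum(np.where(u > 0, u * np.log(safe), 0.0)) * dx)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```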
High resolution isotopic analysis of U-bearing particles via fusion of SIMS and EDS images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarolli, Jay G.; Naes, Benjamin E.; Garcia, Benjamin J.
Image fusion of secondary ion mass spectrometry (SIMS) images and X-ray elemental maps from energy-dispersive spectroscopy (EDS) was performed to facilitate the isolation and re-analysis of isotopically unique U-bearing particles where the highest precision SIMS measurements are required. Image registration, image fusion and particle micromanipulation were performed on a subset of SIMS images obtained from a large area pre-screen of a particle distribution from a sample containing several certified reference materials (CRM) U129A, U015, U150, U500 and U850, as well as a standard reference material (SRM) 8704 (Buffalo River Sediment) to simulate particles collected on swipes during routine inspections of declared uranium enrichment facilities by the International Atomic Energy Agency (IAEA). In total, fourteen particles, ranging in size from 5 – 15 µm, were isolated and re-analyzed by SIMS in multi-collector mode, identifying nine particles of CRM U129A, one of U150, one of U500 and three of U850. These identifications were made to within a few percent of the National Institute of Standards and Technology (NIST) certified atom percent values for 234U, 235U and 238U for the corresponding CRMs. This work represents the first use of image fusion to enhance the accuracy and precision of isotope ratio measurements for isotopically unique U-bearing particles for nuclear safeguards applications. Implementation of image fusion is essential for the identification of particles of interest that fall below the spatial resolution of the SIMS images.
Facial recognition using multisensor images based on localized kernel eigen spaces.
Gundimada, Satyanadh; Asari, Vijayan K
2009-06-01
A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
NASA Astrophysics Data System (ADS)
Zhou, Yi; Li, Qi
2017-01-01
A dual-axis reflective continuous-wave terahertz (THz) confocal scanning polarization imaging system was adopted. THz polarization imaging experiments on gaps in film and on the metallic letters "BeLLE" were carried out. The imaging results indicate that THz polarization imaging is sensitive to tilted or wide flat gaps, suggesting that it is able to detect edges and stains. An image fusion method based on digital image processing was proposed to ameliorate the imaging quality of the metallic letters "BeLLE." Both objective and subjective evaluations show that this method can improve the imaging quality.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery from satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the simulation of panchromatic bands from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and the multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
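The two processing steps, simulating a panchromatic band as a linear combination of the RGB channels and then sharpening the multispectral bands, can be sketched as follows. The luma-style RGB weights and the simple component-substitution rule are stand-ins for the sensor-derived combination and the Gram-Schmidt method actually used in the paper:

```python
import numpy as np

def simulate_pan(rgb, weights=(0.299, 0.587, 0.114)):
    """Simulate a panchromatic band as a weighted sum of the RGB
    channels of an (H, W, 3) image; the weights here are the standard
    luma coefficients, assumed for illustration."""
    return np.tensordot(rgb, np.asarray(weights), axes=([2], [0]))

def cs_pansharpen(ms, pan):
    """Generic component-substitution sharpening: synthesize an
    intensity I from the (upsampled) multispectral bands and inject
    the detail pan - I into every band."""
    I = ms.mean(axis=2)
    return ms + (pan - I)[..., None]
```

When the simulated pan band closely matches the synthesized intensity, the injected detail is small and the spectral properties of the multispectral bands are preserved, which is the trade-off the paper's accuracy analysis measures.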
Aldossari, M; Alfalou, A; Brosseau, C
2014-09-22
This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett.35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed in several independent areas which are assigned according a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then, we make use of several random phase keys. An in-depth analysis at the spectral fusion methodology is done in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our new proposal spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.
Kamogawa, Junji; Kato, Osamu; Morizane, Tatsunori; Hato, Taizo
2015-01-01
There have been several imaging studies of cervical radiculopathy, but no three-dimensional (3D) images have shown the path, position, and pathological changes of the cervical nerve roots and spinal root ganglion relative to the cervical bony structure. The objective of this study was to introduce a technique that enables the virtual pathology of the nerve root to be assessed using 3D magnetic resonance (MR)/computed tomography (CT) fusion images that show the compression of the proximal portion of the cervical nerve root by both the herniated disc and the preforaminal or foraminal bony spur in patients with cervical radiculopathy. MR and CT images were obtained from three patients with cervical radiculopathy. 3D MR images were placed onto 3D CT images using a computer workstation. The entire nerve root could be visualized in 3D with or without the vertebrae. The most important characteristic evident on the images was flattening of the nerve root by a bony spur. The affected root was constricted at a pre-ganglion site. In cases of severe deformity, the flattened portion of the root seemed to change the angle of its path, resulting in twisted condition. The 3D MR/CT fusion imaging technique enhances visualization of pathoanatomy in cervical hidden area that is composed of the root and intervertebral foramen. This technique provides two distinct advantages for diagnosis of cervical radiculopathy. First, the isolation of individual vertebra clarifies the deformities of the whole root groove, including both the uncinate process and superior articular process in the cervical spine. Second, the tortuous or twisted condition of a compressed root can be visualized. The surgeon can identify the narrowest face of the root if they view the MR/CT fusion image from the posterolateral-inferior direction. Surgeons use MR/CT fusion images as a pre-operative map and for intraoperative navigation. 
The MR/CT fusion images can also be used as educational materials for all hospital staff and for patients and patients' families who provide informed consent for treatments.
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets with different features from the background, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the usual weighted-average rule is applied. The high-frequency part is processed as follows: first, the high-frequency information of each source image is extracted by wavelet transform; then the signal strength over a 3*3 window is calculated, and the ratio of regional signal strengths between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion effect is closely related to the threshold set in this module. Instead of the commonly used experimental approach, a quadratic-interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval serve as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and comparing these minima yields the best threshold. A series of image-quality evaluations shows that this method improves the fusion effect, and that it is effective not only for individual images but also for large sets of images.
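The matching rule in the abstract above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch (not the authors' code): a single-level Haar transform stands in for the wavelet decomposition, the low-frequency subbands are averaged, and each high-frequency coefficient is taken from the source whose 3x3 regional signal strength is larger; window size and fusion rules follow the abstract, everything else is an assumption.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns (low, (h, v, d)) subbands."""
    a, b = img[0::2, 0::2], img[1::2, 0::2]
    c, d = img[0::2, 1::2], img[1::2, 1::2]
    return (a + b + c + d) / 4, ((a - b + c - d) / 4,
                                 (a + b - c - d) / 4,
                                 (a - b - c + d) / 4)

def regional_energy(coef, win=3):
    """Sum of squared coefficients over a sliding win x win neighborhood."""
    pad = win // 2
    sq = np.pad(coef * coef, pad, mode="edge")
    out = np.zeros_like(coef)
    for dy in range(win):
        for dx in range(win):
            out += sq[dy:dy + coef.shape[0], dx:dx + coef.shape[1]]
    return out

def fuse(img_a, img_b):
    low_a, highs_a = haar2d(img_a)
    low_b, highs_b = haar2d(img_b)
    low = (low_a + low_b) / 2  # weighted average for the low-frequency part
    highs = [np.where(regional_energy(ca) >= regional_energy(cb), ca, cb)
             for ca, cb in zip(highs_a, highs_b)]
    return low, highs

rng = np.random.default_rng(0)
a, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
low, highs = fuse(a, b)
```

The decision rule here is a hard max-energy selection; the abstract's threshold-based decision module would soften this choice around the matching-measure threshold.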
Putzer, Daniel; Henninger, Benjamin; Kovacs, Peter; Uprimny, Christian; Kendler, Dorota; Jaschke, Werner; Bale, Reto J
2016-06-01
Although PET/CT provides valuable diagnostic information in a great number of clinical indications, the availability of hybrid PET/CT scanners is mainly limited to clinical centers. Software-based image fusion would facilitate combined reading of CT and PET data sets when hardware image fusion is not available. To analyze the relevance of retrospective image fusion of separately acquired PET and CT data sets, we studied the accuracy, practicability and reproducibility of three different image registration techniques. We evaluated whole-body 18F-FDG-PET and CT data sets of 71 oncologic patients. Images were fused retrospectively using the Stealth Station System, Treon (Medtronic Inc., Louisville, CO, USA) equipped with Cranial4 Software. External markers fixed to a vacuum mattress were used as reference for exact repositioning. Registration was repeated using internal anatomic landmarks and Automerge software, assessing accuracy for all three methods by measuring distances of the liver representation in CT and PET with reference to a common coordinate system. On the first measurement of image fusions with external markers, 53 were successful, 16 feasible and 2 not successful. Using anatomic landmarks, 42 were successful, 26 feasible and 3 not successful. Using Automerge Software only 13 were successful. The mean distance between center points in PET and CT was 7.69±4.96 mm on the first, and 7.65±4.2 mm on the second measurement. Results with external markers correlate very well, and inaccuracies are significantly lower (P<0.001) than with anatomical landmarks (10.38±6.13 mm and 10.83±6.23 mm). Analysis revealed a significantly faster alignment using external markers (P<0.001). External fiducials in combination with immobilization devices and breathing protocols allow for highly accurate image fusion cost-effectively and in significantly less time, making this an attractive alternative for PET/CT interpretation when a hybrid scanner is not available.
Robust multi-atlas label propagation by deep sparse representation
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong
2016-01-01
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted by noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate the label fusion result over minority patterns. Violation of these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch.
Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods. PMID:27942077
Morana, Giovanni; Piccardo, Arnoldo; Milanaccio, Claudia; Puntoni, Matteo; Nozza, Paolo; Cama, Armando; Zefiro, Daniele; Cabria, Massimo; Rossi, Andrea; Garrè, Maria Luisa
2014-05-01
Infiltrative astrocytomas (IAs) represent a group of astrocytic gliomas ranging from low-grade to highly malignant, characterized by diffuse invasion of the brain parenchyma. When compared with their adult counterpart, pediatric IAs may be considered biologically distinct entities; nevertheless, similarly to those in adults they represent a complex oncologic challenge. The aim of this study was to investigate the diagnostic role, clinical contribution, and prognostic value of fused (18)F-3,4-dihydroxyphenylalanine ((18)F-DOPA) PET/MR images in pediatric supratentorial IAs. Pediatric patients with supratentorial IAs involving at least 2 cerebral lobes, either newly diagnosed or with suspected disease progression, prospectively underwent (18)F-DOPA PET and conventional MR imaging, performed within 10 d of each other. (18)F-DOPA PET data were interpreted qualitatively and semiquantitatively, fusing images with MR images. PET scans were classified as positive if tumors identified on MR imaging exhibited tracer uptake above the level of the corresponding contralateral normal brain. Maximum standardized uptake values, tumor-to-normal contralateral tissue ratios, and tumor-to-normal striatum ratios were calculated for all tumors. Correlations between the degree and extent of (18)F-DOPA uptake, MR imaging tumor characteristics, and histologic results were investigated. The contribution of (18)F-DOPA PET/MR image fusion was considered relevant if it enabled one to select the most appropriate biopsy site, discriminate between disease progression and treatment-related changes, or influence treatment strategy. The patient's outcome was finally correlated with (18)F-DOPA uptake. Thirteen patients (8 boys and 5 girls) were included (5 diffuse astrocytomas, 2 anaplastic astrocytomas, 5 gliomatosis cerebri, and 1 glioblastoma multiforme). The (18)F-DOPA uptake pattern was heterogeneous in all positive scans (9/13), revealing metabolic heterogeneities within each tumor. 
Significant differences in terms of (18)F-DOPA uptake were found between low- and high-grade lesions (P < 0.05). The diagnostic and therapeutic contribution of (18)F-DOPA PET/MR image fusion was relevant in 9 of 13 patients (69%). (18)F-DOPA uptake correlated significantly with progression-free survival (P = 0.004). Our results indicate that (18)F-DOPA PET/MR image fusion may be a reliable imaging biomarker of pediatric IAs. Information gathered by this combined imaging approach can be readily transferred to the everyday practice and may help clinicians to better stratify patients with IAs, especially diffuse astrocytomas and gliomatosis cerebri, for diagnostic, therapeutic, and prognostic purposes.
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
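The downsampling step suggested above for very small resolution ratios can be illustrated with simple block averaging. This is a hedged sketch, not the paper's procedure; the image and factor are invented (a 1 m panchromatic image coarsened to 3 m, which would turn a 1:30 ratio against a 30 m multispectral image into 1:10).

```python
import numpy as np

def block_average(img, factor):
    """Downsample by an integer factor via block averaging."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))  # mean over each factor x factor block

pan = np.arange(36.0).reshape(6, 6)  # pretend 1 m panchromatic image
pan_coarse = block_average(pan, 3)   # now 3 m: e.g. turns a 1:30 ratio into 1:10
```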
NASA Astrophysics Data System (ADS)
Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng
2018-01-01
Diverse image fusion methods perform differently, and each method has advantages and disadvantages compared with the others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, a feature-similarity index vector is proposed to define the degree of complementarity and synergy; this index vector is a reliable indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the differing degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, and that the proposed method retains the advantages of the individual fusion algorithms.
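The NMF integration step can be sketched as follows: a generic multiplicative-update NMF in NumPy with a fixed, caller-supplied initialization standing in for the feature-derived weights. This is an assumption-laden illustration of deterministic initialization, not the authors' implementation.

```python
import numpy as np

def nmf(V, W, H, iters=200, eps=1e-9):
    """Multiplicative updates minimizing ||V - W H||_F, keeping the given init."""
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = np.abs(rng.standard_normal((8, 6)))          # stacked nonnegative image data
W0 = np.abs(rng.standard_normal((8, 2))) + 0.1   # stand-in for feature-derived weights
H0 = np.abs(rng.standard_normal((2, 6))) + 0.1
err0 = np.linalg.norm(V - W0 @ H0)               # reconstruction error at init
W, H = nmf(V, W0.copy(), H0.copy())
err = np.linalg.norm(V - W @ H)                  # error after the updates
```

Because the updates start from the supplied W0/H0 rather than a random draw, repeated runs give the same factorization, which is the point of the feature-based initialization described above.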
Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro
2017-05-01
We developed a method of projecting bone SPECT image data into 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding location of the volume-rendered CT data after a semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, allowing the viewer to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreement in the diagnosis of bone metastasis was 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses of the diagnostic accuracy for bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had areas under the curve of 0.800, 0.983, and 0.983 for reader 1 and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1 and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. Thus, reading 3D SPECT/CT images took less time than reading 2D SPECT/CT (p < 0.0001) or WB + SPECT images (p < 0.0001).
3D SPECT/CT fusion offers comparable diagnostic accuracy to 2D SPECT/CT fusion. The visual effect of 3D SPECT/CT fusion facilitates reduction of reading time compared to 2D SPECT/CT fusion.
Prostate seed implant quality assessment using MR and CT image fusion.
Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D
1999-01-01
After a seed implant of the prostate, computerized tomography (CT) is ideal for determining seed distribution but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both of these modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three ortho-normal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).
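The fusion-accuracy check described above (maximum deviation of the urethra center between MR and CT) reduces to a small computation. A minimal sketch with invented coordinates:

```python
import numpy as np

# urethra center (x, y) in mm on three matching axial slices (invented numbers)
mr_centers = np.array([[10.0, 20.0], [10.5, 20.2], [11.0, 20.8]])
ct_centers = np.array([[10.3, 20.4], [10.5, 21.0], [12.0, 20.8]])

deviation = np.linalg.norm(mr_centers - ct_centers, axis=1)  # mm, per slice
max_dev = float(deviation.max())    # the paper's accuracy metric
mean_dev = float(deviation.mean())  # the averaged figure also reported
```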
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
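The discretization idea behind JSDLF can be illustrated with a generic sparse solver: each discretized target-state cell contributes one dictionary column, and a sparse recovery routine selects the few occupied cells. The sketch below uses plain orthogonal matching pursuit rather than the authors' joint-sparse recovery, and all dimensions and values are invented.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A explaining y."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support), residual

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))   # one column per discretized target-state cell
A /= np.linalg.norm(A, axis=0)       # unit-norm dictionary columns
true_cells = [17, 63]                # two targets occupy two grid cells
y = A[:, true_cells] @ np.array([1.0, 0.8])
found, resid = omp(A, y, 2)          # recover which grid cells are occupied
```

In the paper's multi-sensor setting the measurements from RF sensors and the camera would be stacked, with a joint-sparsity constraint tying their supports together; here a single measurement vector keeps the sketch short.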
Faizan, Ahmad; Goel, Vijay K; Biyani, Ashok; Garfin, Steven R; Bono, Christopher M
2012-03-01
Studies delineating the adjacent-level effects of single-level disc replacement systems have been reported in the literature. The aim of this study was to compare the adjacent-level biomechanics of bi-level disc replacement, bi-level fusion, and a construct with adjoining-level disc replacement and fusion. In total, the biomechanics of four models (intact, bi-level disc replacement, bi-level fusion, and fusion plus disc replacement at adjoining levels) was studied to gain insight into the effects of the various instrumentation systems on the cranial and caudal adjacent levels, using finite element analysis (73.6 N + varying moment). The bi-level fusion models were more than twice as stiff as the intact model in flexion-extension, lateral bending, and axial rotation. The bi-level disc replacement model required moments lower than the intact model (1.5 Nm). The fusion plus disc replacement model required moments 10-25% higher than the intact model, except in extension. Adjacent-level motions, facet loads, and endplate stresses increased substantially in the bi-level fusion model; in the bi-level disc replacement model, on the other hand, they remained similar to intact. For the fusion plus disc replacement model, adjacent-level motions, facet loads, and endplate stresses were closer to the intact model than to the bi-level fusion model, except in extension. Based on our finite element analysis, the fusion plus disc replacement procedure has less severe biomechanical effects on adjacent levels than the bi-level fusion procedure, and the bi-level disc replacement procedure did not have any adverse mechanical effects on adjacent levels. Copyright © 2011 Elsevier Ltd. All rights reserved.
Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition.
Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria
2014-08-30
Standardized techniques to detect HIV-neutralizing antibody responses are of great importance in the search for an HIV vaccine. Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay. Neutralization of virus particles is measured as a reduction in the number of fluorescent plaques, and inhibition of cell-cell fusion as a reduction in plaque area. We found neutralization strength to be a significant factor in the ability of the virus to form syncytia. Further, we introduce the inhibitory concentration of plaque area reduction (ICpar) as an additional measure of antiviral activity, i.e., fusion inhibition. This image-based assay allows, for the first time, simultaneous evaluation of neutralization and inhibition of cell-cell fusion within the same assay, by quantifying the reduction in the number of plaques and the mean plaque area, respectively. Inhibition of cell-to-cell fusion requires higher quantities of inhibitory reagent than virus neutralization.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
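The base/detail decomposition and weighted base-layer fusion described above can be sketched as follows. Assumptions to note: a plain Gaussian filter stands in for the rolling guidance filter, "saliency" is approximated by local contrast, and the detail layers are merged with a simple max-absolute rule in place of the WLS optimization; this is an illustration of the pipeline's shape, not the authors' method.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur with edge padding (stand-in for RGF)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2 * sigma * sigma))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def decompose(img):
    """Split an image into a smooth base layer and a detail residual."""
    base = gaussian_blur(img)
    return base, img - base

def saliency(img):
    """Crude visual-saliency stand-in: absolute local contrast."""
    return np.abs(img - gaussian_blur(img)) + 1e-6

ir = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))   # toy "infrared" image
vis = ir.T.copy()                                  # toy "visible" image
base_ir, det_ir = decompose(ir)
base_vis, det_vis = decompose(vis)
w = saliency(ir) / (saliency(ir) + saliency(vis))  # saliency-derived base weights
fused = (w * base_ir + (1 - w) * base_vis          # weighted base fusion
         + np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis))
```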
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors, including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical data and with either SAR or passive optical data alone.
This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because doing so reduced forest AGB sampling errors by 15-38%. Furthermore, spaceborne global-scale accuracy requirements were achieved. At least 80% of the grid cells at the 100 m, 250 m, 500 m, and 1 km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR data along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR data alone, accuracy requirements were met at the 500 m and 250 m grid levels, respectively.
Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Seun; Lin, Guang; Sun, Xin
2013-01-01
Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation to create high-resolution images over large domains. Stochastic methods have been widely used in image reconstruction from the two-point correlation function; the main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
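The annealing loop for correlation-function matching can be sketched as follows: a toy NumPy version in which a binary microstructure is reconstructed by pixel swaps that drive its two-point correlation toward a target, under a very fast exponential cooling schedule. All parameters are invented, and the paper's dynamic adaptation and parallelization are omitted.

```python
import numpy as np

def s2(img, max_r=4):
    """Two-point probability along x: P(two solid pixels at distance r)."""
    return np.array([(img & np.roll(img, r, axis=1)).mean()
                     for r in range(1, max_r + 1)])

rng = np.random.default_rng(3)
target = rng.random((16, 16)) < 0.4            # "reference" microstructure
target_s2 = s2(target)

# start from a random arrangement with the same volume fraction
state = rng.permuted(target.ravel()).reshape(target.shape)
energy = float(np.sum((s2(state) - target_s2) ** 2))
T = 1e-3                                       # initial temperature (invented)
for _ in range(2000):
    ones, zeros = np.flatnonzero(state), np.flatnonzero(~state)
    i, j = rng.choice(ones), rng.choice(zeros)
    trial = state.ravel().copy()
    trial[i], trial[j] = False, True           # swap one solid and one void pixel
    trial = trial.reshape(state.shape)
    e_new = float(np.sum((s2(trial) - target_s2) ** 2))
    # Metropolis acceptance: always downhill, occasionally uphill
    if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
        state, energy = trial, e_new
    T *= 0.995                                 # very fast exponential cooling
```

Swapping a solid/void pair (rather than flipping single pixels) keeps the volume fraction fixed throughout the anneal, which is the usual constraint in this class of reconstructions.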
NASA Astrophysics Data System (ADS)
Sukawattanavijit, Chanika; Srestasathiern, Panu
2017-10-01
Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification from high-resolution imagery. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. For the per-pixel method, support vector machines (SVM) were applied to the fused image based on principal component analysis (PCA). For the object-based method, a nearest neighbor (NN) classifier was applied to the fused images to separate the land cover classes. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. Object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. Thus, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
Covariance descriptor fusion for target detection
NASA Astrophysics Data System (ADS)
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection in hyperspectral images involves various challenging scenarios, and the covariance descriptor presents many advantages for overcoming them. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing the covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
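The covariance descriptor itself is compact to compute: a patch is summarized by the covariance of its per-pixel feature vectors. In the minimal sketch below the feature choice (raw band values at each pixel) is our assumption; in practice spatial coordinates and derivatives are often appended.

```python
import numpy as np

def covariance_descriptor(patch):
    """patch: (H, W, B) hyperspectral cube -> (B, B) covariance descriptor."""
    feats = patch.reshape(-1, patch.shape[2])  # one feature row per pixel
    return np.cov(feats, rowvar=False)

rng = np.random.default_rng(4)
cube = rng.random((5, 5, 8))                   # toy 5x5 patch with 8 bands
C = covariance_descriptor(cube)
```

A band-cluster variant, as in the abstract, would compute one such descriptor per cluster of correlated bands and fuse the per-cluster detection scores.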
[Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].
Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T
2003-10-01
Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore, we evaluated the performance of, and the time needed for, fusing MRI and SPECT images using semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients, regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D T1-weighted MPRAGE sequence, and 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation by an experienced user of the software, using an entropy-minimizing algorithm, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the time for manual realignment after automated but insufficient fusion. The mean time for the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets, an optimal fit was reached using the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (range 50-886 s); the fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool thus delivered an optimal fit in 20% of cases, while in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
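The entropy-minimizing registration criterion mentioned above can be illustrated with a generic joint-entropy metric (not the Syngo software's actual algorithm; images and noise levels are invented): when two images are well aligned, their joint intensity histogram is compact and its entropy is low, so registration searches over transforms for the minimum.

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    """Shannon entropy (bits) of the joint intensity histogram of a and b."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(5)
img = rng.random((32, 32))
aligned = img + 0.05 * rng.standard_normal((32, 32))      # same scene, mild noise
misaligned = np.roll(img, 7, axis=1) + 0.05 * rng.standard_normal((32, 32))
e_aligned = joint_entropy(img, aligned)       # low: histogram hugs the diagonal
e_misaligned = joint_entropy(img, misaligned)  # higher: intensities decorrelate
```

A full registration would wrap this metric in an optimizer over the translation/rotation parameters; the sketch only shows why the metric discriminates aligned from misaligned poses.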
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies the redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using a 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M
2017-08-01
To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair for its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that included a control group describing 3D fusion imaging in endovascular aortic procedures. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. Contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney) was pooled using a random-effects model; outcomes are reported as the mean difference with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and of 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast employed for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
In mathematics, optical molecular imaging modalities including bioluminescence tomography (BLT), fluorescence molecular tomography (FMT) and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve this, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Unlike conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse data, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
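The ICP step named above is a standard technique; a compact brute-force version can be sketched as follows. This is illustrative only — the paper's pipeline also involves the octree structure and visual-hull extraction, which are omitted here, and a practical implementation would use a spatial index for the nearest-neighbor search.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=15):
    """Minimal iterated-closest-point loop with brute-force correspondences."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every current source point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```

With noiseless data and a small initial misalignment, the correspondences are correct and a single iteration already recovers the rigid transform.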
Abulizi, Yakefu; Liang, Wei-Dong; Muheremu, Aikeremujiang; Maimaiti, Maierdan; Sheng, Wei-Bin
2017-07-14
Spinal brucellosis is a less commonly reported infectious spinal pathology, and there are few reports regarding its surgical treatment in the existing literature. This retrospective study was conducted to determine the effectiveness of single-stage transforaminal decompression, debridement, interbody fusion, and posterior instrumentation for lumbosacral spinal brucellosis. From February 2012 to April 2015, 32 consecutive patients (19 males and 13 females, mean age 53.7 ± 8.7 years) with lumbosacral brucellosis treated by transforaminal decompression, debridement, interbody fusion, and posterior instrumentation were enrolled. Medical records, imaging studies and laboratory data were collected and summarized. Surgical outcomes were evaluated based on the visual analogue scale (VAS), Oswestry Disability Index (ODI) and Japanese Orthopaedic Association (JOA) scale. The changes in C-reactive protein (CRP) levels, erythrocyte sedimentation rate (ESR), clinical symptoms and complications were investigated. Graft fusion was evaluated using the Bridwell grading criteria. The mean follow-up period was 24.9 ± 8.2 months. Back pain and radiating leg pain were relieved significantly in all patients after the operation. No implant failures were observed in any patient. Wound infection was observed in two patients and sinus formation in one patient. Solid bony fusion was achieved in 30 patients, for a fusion rate of 93.8%. The levels of ESR and CRP had returned to normal by the end of the three-month follow-up. VAS and ODI scores were significantly improved (P < 0.05). According to the JOA score, surgical improvement at the last follow-up was excellent in 22 cases (68.8%), good in 9 cases (28.1%) and moderate in 1 case (3.1%). Single-stage transforaminal decompression, debridement, interbody fusion, and posterior instrumentation is an effective and safe approach for lumbosacral brucellosis.
Chinnadurai, Ponraj; Duran, Cassidy; Al-Jabbari, Odeaa; Abu Saleh, Walid K; Lumsden, Alan; Bismuth, Jean
2016-01-01
To report our initial experience and highlight the value of using intraoperative C-arm cone beam computed tomography (CT; DynaCT(®)) image fusion guidance along with steerable robotic endovascular catheter navigation to optimize vessel cannulation. Between May 2013 and January 2015, all patients who underwent endovascular procedures using the DynaCT image fusion technique along with the Hansen Magellan vascular robotic catheter were included in this study. As part of preoperative planning, relevant vessel landmarks were electronically marked in contrast-enhanced multislice computed tomography images and stored. At the beginning of the procedure, an intraoperative noncontrast C-arm cone beam CT (syngo DynaCT(®), Siemens Medical Solutions USA Inc.) was acquired in the hybrid suite. Preoperative images were then coregistered to the intraoperative DynaCT images using aortic wall calcifications and bone landmarks. The stored landmarks were then overlaid on 2-dimensional (2D) live fluoroscopic images as virtual markers that are updated in real time with C-arm and table movements and image zoom. Vascular access and the robotic catheter (Magellan(®), Hansen Medical) were set up per standard protocol. Vessel cannulation was performed with the robotic catheter based on the electronic virtual markers on live fluoroscopy. The impact of 3-dimensional (3D) image fusion guidance on robotic vessel cannulation was evaluated retrospectively by assessing quantitative parameters, such as the number of angiograms acquired before vessel cannulation, and qualitative parameters, such as the accuracy of vessel ostium and centerline markers. All 17 vessels attempted were cannulated successfully in 14 patients using the robotic catheter and image fusion guidance. Median vessel diameter at origin was 5.4 mm (range, 2.3-13 mm), whereas 12 of 17 (70.6%) vessels had a calcified and/or stenosed origin from the parent vessel. Nine of 17 vessels (52.9%) were cannulated without any contrast injection.
Median number of angiograms required before cannulation was 0 (range, 0-2). On qualitative assessment, 14 of 15 vessels (93.3%) had grade 1 accuracy (guidewire inside the virtual ostial marker). Fourteen of 14 vessels had grade 1 accuracy for virtual centerlines (centerlines matched the actual vessel trajectory during cannulation). In this small series, the experience of using DynaCT image fusion guidance together with a steerable endovascular robotic catheter indicates that such image fusion strategies can enhance intraoperative 2D fluoroscopy by bringing in preoperative 3D information about vascular stenosis and/or calcification, angulation, and takeoff from the main vessel, thereby facilitating vessel cannulation. Copyright © 2016 Elsevier Inc. All rights reserved.
Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion.
Gao, Guangwei; Zhang, Lei; Yang, Jian; Zhang, Lin; Zhang, David
2013-12-01
Recently, a new biometric identifier, the finger knuckle print (FKP), has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user's flexibility in positioning fingers also leads to a certain degree of pose variation in the collected query FKP images. The widely used Gabor-filtering-based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can greatly reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score-level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce false rejections without greatly increasing false acceptances. Experimental results on the benchmark PolyU FKP database show that the proposed method significantly improves FKP verification accuracy.
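The adaptive binary rule itself is not spelled out in the abstract. One hypothetical reading — switch to the reconstruction-based distance only when reconstruction changed the match substantially (suggesting pose variation), and otherwise keep the original distance so that reduced inter-class distances do not inflate false acceptances — could be sketched as:

```python
def adaptive_binary_fusion(d_before, d_after, tau):
    """Hypothetical score-level rule: if dictionary reconstruction moved the
    matching distance by more than tau, pose variation is suspected and the
    reconstructed distance is trusted; otherwise the original distance is kept."""
    if abs(d_before - d_after) > tau:
        return d_after
    return d_before
```

The threshold `tau` would be tuned on a validation set to balance false rejections against false acceptances.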
Magnetic resonance imaging-ultrasound fusion biopsy for prediction of final prostate pathology.
Le, Jesse D; Stephenson, Samuel; Brugger, Michelle; Lu, David Y; Lieu, Patricia; Sonn, Geoffrey A; Natarajan, Shyam; Dorey, Frederick J; Huang, Jiaoti; Margolis, Daniel J A; Reiter, Robert E; Marks, Leonard S
2014-11-01
We explored the impact of magnetic resonance imaging-ultrasound fusion prostate biopsy on the prediction of final surgical pathology. A total of 54 consecutive men undergoing radical prostatectomy at UCLA after fusion biopsy were included in this prospective, institutional review board approved pilot study. Using magnetic resonance imaging-ultrasound fusion, tissue was obtained from a 12-point systematic grid (mapping biopsy) and from regions of interest detected by multiparametric magnetic resonance imaging (targeted biopsy). A single radiologist read all magnetic resonance imaging, and a single pathologist independently rereviewed all biopsy and whole mount pathology, blinded to prior interpretation and matched specimen. Gleason score concordance between biopsy and prostatectomy was the primary end point. Mean patient age was 62 years and median prostate specific antigen was 6.2 ng/ml. Final Gleason score at prostatectomy was 6 (13%), 7 (70%) and 8-9 (17%). A tertiary pattern was detected in 17 (31%) men. Of 45 high suspicion (image grade 4-5) magnetic resonance imaging targets 32 (71%) contained prostate cancer. The per core cancer detection rate was 20% by systematic mapping biopsy and 42% by targeted biopsy. The highest Gleason pattern at prostatectomy was detected by systematic mapping biopsy in 54%, targeted biopsy in 54% and a combination in 81% of cases. Overall 17% of cases were upgraded from fusion biopsy to final pathology and 1 (2%) was downgraded. The combination of targeted biopsy and systematic mapping biopsy was needed to obtain the best predictive accuracy. In this pilot study magnetic resonance imaging-ultrasound fusion biopsy allowed for the prediction of final prostate pathology with greater accuracy than that reported previously using conventional methods (81% vs 40% to 65%). If confirmed, these results will have important clinical implications. Copyright © 2014 American Urological Association Education and Research, Inc. 
Published by Elsevier Inc. All rights reserved.
Natarajan, Shyam; Jones, Tonye A; Priester, Alan M; Geoghegan, Rory; Lieu, Patricia; Delfin, Merdie; Felker, Ely; Margolis, Daniel J A; Sisk, Anthony; Pantuck, Allan; Grundfest, Warren; Marks, Leonard S
2017-10-01
Focal laser ablation is a potential treatment in some men with prostate cancer. Currently focal laser ablation is performed by radiologists in a magnetic resonance imaging unit (in bore). We evaluated the safety and feasibility of performing focal laser ablation in a urology clinic (out of bore) using magnetic resonance imaging-ultrasound fusion for guidance. A total of 11 men with intermediate risk prostate cancer were enrolled in this prospective, institutional review board approved pilot study. Magnetic resonance imaging-ultrasound fusion was used to guide laser fibers transrectally into regions of interest harboring intermediate risk prostate cancer. Thermal probes were inserted for real-time monitoring of intraprostatic temperatures during laser activation. Multiparametric magnetic resonance imaging (3 Tesla) was done immediately after treatment and at 6 months along with comprehensive fusion biopsy. Ten of 11 patients were successfully treated while under local anesthesia. Mean procedure time was 95 minutes (range 71 to 105). Posttreatment magnetic resonance imaging revealed a confined zone of nonperfusion in all 10 men. Mean zone volume was 4.3 cc (range 2.1 to 6.0). No CTCAE grade 3 or greater adverse events developed and no changes were observed in urinary or sexual function. At 6 months magnetic resonance imaging-ultrasound fusion biopsy of the treatment site showed no cancer in 3 patients, microfocal Gleason 3 + 3 in another 3 and persistent intermediate risk prostate cancer in 4. Focal laser ablation of prostate cancer appears safe and feasible with the patient under local anesthesia in a urology clinic using magnetic resonance imaging-ultrasound fusion for guidance and thermal probes for monitoring. Further development is necessary to refine out of bore focal laser ablation and additional studies are needed to determine appropriate treatment margins and oncologic efficacy. Copyright © 2017 American Urological Association Education and Research, Inc. 
Published by Elsevier Inc. All rights reserved.
18F-FDG PET/MRI fusion in characterizing pancreatic tumors: comparison to PET/CT.
Tatsumi, Mitsuaki; Isohashi, Kayako; Onishi, Hiromitsu; Hori, Masatoshi; Kim, Tonsok; Higuchi, Ichiro; Inoue, Atsuo; Shimosegawa, Eku; Takeda, Yutaka; Hatazawa, Jun
2011-08-01
To demonstrate that positron emission tomography (PET)/magnetic resonance imaging (MRI) fusion was feasible in characterizing pancreatic tumors (PTs), comparing MRI and computed tomography (CT) as mapping images for fusion with PET as well as fused PET/MRI and PET/CT. We retrospectively reviewed 47 sets of (18)F-fluorodeoxyglucose ((18)F -FDG) PET/CT and MRI examinations to evaluate suspected or known pancreatic cancer. To assess the ability of mapping images for fusion with PET, CT (of PET/CT), T1- and T2-weighted (w) MR images (all non-contrast) were graded regarding the visibility of PT (5-point confidence scale). Fused PET/CT, PET/T1-w or T2-w MR images of the upper abdomen were evaluated to determine whether mapping images provided additional diagnostic information to PET alone (3-point scale). The overall quality of PET/CT or PET/MRI sets in diagnosis was also assessed (3-point scale). These PET/MRI-related scores were compared to PET/CT-related scores and the accuracy in characterizing PTs was compared. Forty-three PTs were visualized on CT or MRI, including 30 with abnormal FDG uptake and 13 without. The confidence score for the visibility of PT was significantly higher on T1-w MRI than CT. The scores for additional diagnostic information to PET and overall quality of each image set in diagnosis were significantly higher on the PET/T1-w MRI set than the PET/CT set. The diagnostic accuracy was higher on PET/T1-w or PET/T2-w MRI (93.0 and 90.7%, respectively) than PET/CT (88.4%), but statistical significance was not obtained. PET/MRI fusion, especially PET with T1-w MRI, was demonstrated to be superior to PET/CT in characterizing PTs, offering better mapping and fusion image quality.
Augustine, Daniel; Yaqub, Mohammad; Szmigielski, Cezary; Lima, Eduardo; Petersen, Steffen E; Becher, Harald; Noble, J Alison; Leeson, Paul
2015-02-01
Three-dimensional fusion echocardiography (3DFE) is a novel postprocessing approach that utilizes imaging data acquired from multiple 3D acquisitions. We assessed image quality, endocardial border definition, and cardiac wall motion in patients using 3DFE compared to standard 3D images (3D) and results obtained with contrast echocardiography (2DC). Twenty-four patients (mean age 66.9 ± 13 years, 17 males, 7 females) undergoing 2DC had three noncontrast 3D apical volumes acquired at rest. Images were fused using an automated image fusion approach. Quality of the 3DFE was compared to both 3D and 2DC based on contrast-to-noise ratio (CNR) and endocardial border definition. We then compared the clinical wall-motion score index (WMSI) calculated from 3DFE and 3D to that obtained from 2DC images. Fused 3D volumes had significantly improved CNR (8.92 ± 1.35 vs. 6.59 ± 1.19, P < 0.0005) and segmental image quality (2.42 ± 0.99 vs. 1.93 ± 1.18, P < 0.005) compared to unfused 3D acquisitions. Levels achieved were closer to scores for 2D contrast images (CNR: 9.04 ± 2.21, P = 0.6; segmental image quality: 2.91 ± 0.37, P < 0.005). WMSI calculated from fused 3D volumes did not differ significantly from that obtained from 2D contrast echocardiography (1.06 ± 0.09 vs. 1.07 ± 0.15, P = 0.69), whereas unfused images produced significantly more variable results (1.19 ± 0.30). This was confirmed by a better intraclass correlation coefficient (ICC 0.72; 95% CI 0.32-0.88) relative to comparisons with unfused images (ICC 0.56; 95% CI 0.02-0.81). 3DFE significantly improves left ventricular image quality compared to unfused 3D in a patient population and allows noncontrast assessment of wall motion that approaches that achieved with 2D contrast echocardiography. © 2014, Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data that cover different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images for understanding the actual status of various land cover classes. The prospects for the use of satellite images in land cover classification are extremely promising. The quality of satellite images available for land-use mapping is improving rapidly with the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by new satellite sensors such as MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions; the fused image therefore provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from any single sensor alone; a good example is the fusion of images acquired by different sensors with different spatial and spectral resolutions. Researchers have been applying fusion techniques for three decades and have proposed various useful methods. High-quality synthesis of spectral information is particularly important for land cover classification. More recently, multiresolution analysis based on the discrete wavelet transform has been used in image fusion.
Multisensor image fusion is a tradeoff between the spectral information from low-resolution multispectral images and the spatial information from high-resolution images; with wavelet-transform-based fusion methods, this tradeoff is easy to control. A newer transform, the curvelet transform, was introduced by Starck: a ridgelet transform is applied to square blocks of the detail frames of an undecimated wavelet decomposition, yielding the curvelet transform. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may be easily reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (spatial resolution 250 m, bandwidth 620-670 nm) and band 2 (spatial resolution 250 m, bandwidth 842-876 nm) are considered, as these bands have features well suited to identifying agriculture and other land covers. In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities.
PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. This makes PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The idea behind Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features to be seen. In this paper we explore fusion techniques for enhancing the land cover classification of low-resolution, and especially freely available, satellite data. For this purpose, we fuse the PALSAR principal component data with the MODIS principal component data. Initially, MODIS bands 1 and 2 are considered and their principal component is computed. Similarly, the PALSAR HH-, HV- and VV-polarized data are considered and their principal component is computed. Consequently, the PALSAR principal component image is fused with the MODIS principal component image. The aim of this paper is to analyze the effect of fusing PALSAR data with MODIS data on classification accuracy for major land cover types such as agriculture, water and urban areas. The curvelet transform has been applied for fusion of the two satellite images, and the minimum distance classification technique has been applied to the resultant fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced.
This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
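The PCA step described in this entry can be sketched as follows. The pixel-average `fuse_mean` is only a placeholder for the paper's curvelet-domain fusion, which requires a dedicated transform implementation; the function names are illustrative.

```python
import numpy as np

def first_principal_component(bands):
    """First principal-component image from a stack of co-registered bands (H, W, B)."""
    h, w, b = bands.shape
    X = bands.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                        # center each band
    cov = X.T @ X / (X.shape[0] - 1)           # (b, b) band covariance
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    pc1 = X @ vecs[:, -1]                      # project onto the dominant eigenvector
    return pc1.reshape(h, w)

def fuse_mean(a, b):
    """Placeholder fusion (pixel average) standing in for curvelet-domain fusion."""
    return 0.5 * (a + b)
```

In the paper's pipeline, `first_principal_component` would be applied once to the MODIS bands and once to the PALSAR polarizations, and the two PC images would then be fused in the transform domain before classification.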
Faust, James J; Christenson, Wayne; Doudrick, Kyle; Ros, Robert; Ugarova, Tatiana P
2017-06-01
Implantation of synthetic material, including vascular grafts, pacemakers, etc., results in the foreign body reaction and the formation of multinucleated giant cells (MGCs) at the exterior surface of the implant. Despite the long-standing premise that fusion of mononucleated macrophages results in the formation of MGCs, to date, no published study has shown fusion in the context of living specimens. This is due to the fact that optical-quality glass, which is required for the majority of live imaging techniques, does not promote macrophage fusion. Consequently, the morphological changes that macrophages undergo during fusion as well as the mechanisms that govern this process remain ill-defined. In this study, we serendipitously identified a highly fusogenic glass surface and discovered that the capacity to promote fusion was due to oleamide contamination. When adsorbed on glass, oleamide and other molecules that contain long-chain hydrocarbons promoted high levels of macrophage fusion. Adhesion, an essential step for macrophage fusion, was apparently mediated by the Mac-1 integrin (CD11b/CD18, αMβ2) as determined by single-cell force spectroscopy and adhesion assays. Micropatterned glass further increased fusion and enabled a remarkable degree of spatiotemporal control over MGC formation. Using these surfaces, we reveal the kinetics that govern MGC formation in vitro. We anticipate that the spatiotemporal control afforded by these surfaces will expedite studies designed to identify the mechanism(s) of macrophage fusion and MGC formation, with implications for the design of novel biomaterials. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dahl, Michael C; Ellingson, Arin M; Mehta, Hitesh P; Huelman, Justin H; Nuckley, David J
2013-02-01
Degenerative disc disease is commonly a multilevel pathology with varying deterioration severity. The use of fusion on multiple levels can significantly affect functionality and has been linked to persistent adjacent disc degeneration. A hybrid approach of fusion and nucleus replacement (NR) has been suggested as a solution for mildly degenerated yet painful levels adjacent to fusion. To compare the biomechanical metrics of different hybrid implant constructs, hypothesizing that an NR+fusion hybrid would be similar to a single-level fusion and perform more naturally compared with a two-level fusion. A cadaveric in vitro repeated-measures study was performed to evaluate a multilevel lumbar NR+fusion hybrid. Eight cadaveric spines (L3-S1) were tested in a Spine Kinetic Simulator (Instron, Norwood, MA, USA). Pure moments of 8 Nm were applied in flexion/extension, lateral bending, and axial rotation as well as compression loading. Specimens were tested intact; fused (using transforaminal lumbar interbody fusion instrumentation with posterior rods) at L5-S1; with a nuclectomy at L4-L5 including fusion at L5-S1; with NR at L4-L5 including fusion at L5-S1; and finally with a two-level fusion spanning L4-S1. Repeated-measures analysis of variance and corrected t tests were used to statistically compare outcomes. The NR+fusion hybrid and single-level fusion exhibited no statistical differences for range of motion (ROM), stiffness, neutral zone, and intradiscal pressure in all loading directions. Compared with two-level fusion, the hybrid affords the construct 41.9% more ROM on average. Two-level fusion stiffness was statistically higher than all other constructs and resulted in significantly lower ROM in flexion, extension, and lateral bending. The hybrid construct produced approximately half of the L3-L4 adjacent-level pressures as the two-level fusion case while generating similar pressures to the single-level fusion case. 
These data portend more natural functional outcomes and fewer adjacent disc complications for a multilevel NR+fusion hybrid compared with the classical two-level fusion. Copyright © 2013 Elsevier Inc. All rights reserved.
Evaluation of Night Vision Devices for Image Fusion Studies
2004-12-01
July 2004. http://www.sensorsmag.com/articles/0400/34/main.shtml Task, Harry L.; Hartman, Richard T.; Marasco, Peter L., Methods for Measuring... Press, Bellingham, Washington, 1998. Burt, Peter J. & Kolczynski, Raymond J., David Sarnoff Research Center, Enhanced Image Capture through Fusion
Lee, Minsu; Shin, Su-Jin; Oh, Young Taik; Jung, Dae Chul; Cho, Nam Hoon; Choi, Young Deuk; Park, Sung Yoon
2017-09-01
To investigate the utility of fused high b value diffusion-weighted imaging (DWI) and T2-weighted imaging (T2WI) for evaluating depth of invasion in bladder cancer. We included 62 patients with magnetic resonance imaging (MRI) and surgically confirmed urothelial carcinoma in the urinary bladder. An experienced genitourinary radiologist analysed the depth of invasion (T stage <2 or ≥2) using T2WI, DWI, T2WI plus DWI, and fused DWI and T2WI (fusion MRI). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were investigated. Area under the curve (AUC) was analysed to identify T stage ≥2. The rate of patients with surgically confirmed T stage ≥2 was 41.9% (26/62). Sensitivity, specificity, PPV, NPV and accuracy were 50.0%, 55.6%, 44.8%, 60.6% and 53.2%, respectively, with T2WI; 57.7%, 77.8%, 65.2%, 71.8% and 69.4%, respectively, with DWI; 65.4%, 80.6%, 70.8%, 76.3% and 74.2%, respectively, with T2WI plus DWI and 80.8%, 77.8%, 72.4%, 84.9% and 79.0%, respectively, with fusion MRI. AUC was 0.528 with T2WI, 0.677 with DWI, 0.730 with T2WI plus DWI and 0.793 with fusion MRI for T stage ≥2. Fused high b value DWI and T2WI may be a promising non-contrast MRI technique for assessing depth of invasion in bladder cancer. • Accuracy of fusion MRI was 79.0% for T stage ≥2 in bladder cancer. • AUC of fusion MRI was 0.793 for T stage ≥2 in bladder cancer. • Diagnostic performance of fusion MRI was comparable with T2WI plus DWI. • As a non-contrast MRI technique, fusion MRI is useful for bladder cancer.
Yusof, Mohd Imran; Nadarajan, Eswaran; Abdullah, Mohd Shafie
2014-06-15
Cross-sectional study on the measurement of relevant magnetic resonance imaging parameters in 100 patients presented for lumbar spine assessment. To determine anatomical position of lumbar plexus and major blood vessels in relation to vertebral body and anterior edge of psoas muscle at L3-L4 and L4-L5 and to define the safe working zone for transpsoas approach for lumbar fusion. Lateral transpsoas lumbar interbody fusion has been shown to be safe and provides alternative for lumbar fusion. However, proximity of neurovascular structures may not allow a safe passage for this procedure in the Asian population. Relevant parameters were measured from axial magnetic resonance images and analyzed, including the psoas muscle and vertebrae endplate diameters, lumbar plexus and psoas muscle distance, lumbar plexus and vertebra body distance, and vena cava to the anterior vertebrae body diameters. The mean anteroposterior diameters of the right and left psoas muscle ranged from 44.0 to 58.6 mm and 44.8 to 54.0 mm, respectively. The mean anteroposterior diameters of vertebra endplate of L3, L4, and L5 were 38.2 mm, 39.3 mm, and 41.4 mm, respectively. The mean distance of posterior border of vena cava from the vertebra body was 4.5 mm at L3-L4 and 14.1 mm at L4-L5. L3-L4 fusion is feasible at both sides in both sexes; however, at L4-L5 level, the procedure is feasible only on the left side. The safe working zone for transpsoas approach to lumbar spine is significantly narrower at L4-L5 in both sexes. Anterior edge of psoas muscle can be used as a reliable guide to locate lumbar plexus within psoas muscle. N/A.
Electron cyclotron emission imaging and applications in magnetic fusion energy
NASA Astrophysics Data System (ADS)
Tobias, Benjamin John
Energy production through the burning of fossil fuels is an unsustainable practice. Exponentially increasing energy consumption and dwindling natural resources ensure that coal and gas fueled power plants will someday be a thing of the past. However, even before fuel reserves are depleted, our planet may well succumb to disastrous side effects, namely the buildup of carbon emissions in the environment triggering worldwide climate change and the countless industrial spills of pollutants that continue to this day. Many alternatives are currently being developed, but none holds as much promise as fusion nuclear energy, the energy of the sun. The confinement of hot plasma at temperatures in excess of 100 million kelvin by a carefully arranged magnetic field for the realization of a self-sustaining fusion power plant requires new technologies and improved understanding of fundamental physical phenomena. Imaging of electron cyclotron radiation lends insight into the spatial and temporal behavior of electron temperature fluctuations and instabilities, providing a powerful diagnostic for investigations into basic plasma physics and nuclear fusion reactor operation. This dissertation presents the design and implementation of a new generation of Electron Cyclotron Emission Imaging (ECEI) diagnostics on toroidal magnetic fusion confinement devices, or tokamaks, around the world. The underlying physics of cyclotron radiation in fusion plasmas is reviewed, and a thorough discussion of millimeter wave imaging techniques and heterodyne radiometry in ECEI follows. The imaging of turbulence and fluid flows has evolved over half a millennium since Leonardo da Vinci's first sketches of cascading water, and applications for ECEI in fusion research are broad ranging.
Two areas of physical investigation are discussed in this dissertation: the identification of poloidal shearing in Alfven eigenmode structures predicted by hybrid gyrofluid-magnetohydrodynamic (gyrofluid-MHD) modeling, and magnetic field line displacement during precursor oscillations associated with the sawtooth crash, a disruptive instability observed both in tokamak plasmas with high core current and in the magnetized plasmas of solar flares and other interstellar plasmas. Understanding both of these phenomena is essential for the future of magnetic fusion energy, and important new observations described herein underscore the advantages of imaging techniques in experimental physics.
2014-10-01
The study investigates whether fusion PET/MRI imaging with 18F-choline PET/CT and diffusion-weighted MRI can be successfully applied to target prostate cancer using image-guided prostate... Completed task: the 18F-choline synthesis was implemented and optimized for routine radiotracer production. RDRC committee approval as part of the IRB
NASA Astrophysics Data System (ADS)
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurement of three-dimensional refractive index distributions of fusion-spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. For retrieving high-quality tomographic images, a numerical method is proposed to compensate for the unwanted movements of the object in the lateral, axial and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results distinctly exhibit the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization maintaining fiber, a bow-tie polarization maintaining fiber and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization maintaining fibers, the collapse of the air holes of photonic crystal fibers, etc.
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: Software based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, the visible image collected by a single sensor expresses the shape, color and texture details of a target well, but because of the haze its sharpness is low and parts of the target subject are lost. An infrared image collected by a single sensor, by contrast, captures thermal radiation and has strong penetration ability, so it expresses the target subject clearly but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of visible targets and highlight occluded infrared targets for target recognition.
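The Dark Channel Prior dehazing step named above has a well-known baseline form (per-pixel channel minimum followed by a local patch minimum, then a transmission estimate). A minimal numpy sketch of that baseline, not the paper's unspecified "improved" variant:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image (float array, HxWx3, values in [0, 1]):
    minimum over color channels, then minimum over a local square patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=15):
    # Haze transmission estimate: t = 1 - omega * dark_channel(I / A),
    # where A is the estimated atmospheric light per channel.
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

Recovering the scene radiance from `t` and `A` then follows the usual haze imaging model; the patch size and `omega` values here are conventional defaults, not taken from the paper.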
Target detection method by airborne and spaceborne images fusion based on past images
NASA Astrophysics Data System (ADS)
Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng
2017-11-01
Remote sensing target detection methods make little use of past remote sensing data of the target area and cannot recognize camouflaged targets accurately. To solve this problem, a target detection method that fuses airborne and spaceborne images based on past imagery is proposed in this paper. Past spaceborne remote sensing imagery of the target area is taken as the background. Based on real-time airborne optical remote sensing images, the airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change feature extraction, background noise suppression, and artificial target feature extraction. Finally, a support vector machine is used to detect and recognize targets from the fused feature data. The experimental results show that the proposed method combines the change features of airborne and spaceborne remote sensing images with the target detection algorithm, and achieves good detection and recognition performance on both camouflaged and non-camouflaged targets.
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, providing the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher than those of the three methods for the six pairs of source images.
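The selection rule described above (keep, per pixel, the source whose local contrast is larger) can be sketched with a plain 3×3 box filter standing in for the adaptive manifold filter; that low-pass substitution is an assumption for illustration, not the paper's filter:

```python
import numpy as np

def box_mean3(x):
    # 3x3 local mean with edge padding (stand-in low-frequency part)
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def local_contrast_fuse(a, b):
    # local contrast ~ absolute deviation from the local mean;
    # per pixel, keep the source value with the larger contrast
    ca = np.abs(a - box_mean3(a))
    cb = np.abs(b - box_mean3(b))
    return np.where(ca >= cb, a, b)
```

On a pair where one image carries a bright detail and the other is flat, the rule preserves the detail in the fused output.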
NASA Astrophysics Data System (ADS)
Pournamdari, M.; Hashim, M.
2014-02-01
Chromite ore deposits occur in ophiolite complexes, which form part of the oceanic crust, and thus provide a good opportunity for lithological mapping using remote sensing data. The main contribution of this paper is a novel approach to discriminating the different rock units associated with an ophiolite complex using the Feature Level Fusion technique on ASTER and Landsat TM satellite data at regional scale. In addition, this study applied spectral transform approaches, namely the Spectral Angle Mapper (SAM), to distinguish high-potential areas of chromite concentration and to determine the boundaries between different rock units. Results indicated that both approaches give superior outputs compared with other methods and can produce geological maps of ophiolite complex rock units in arid and semi-arid regions. The technique, combining feature level fusion and the Spectral Angle Mapper (SAM), discriminated ophiolitic rock units and produced detailed geological maps of the study area. As a case study, the Sikhoran ophiolite complex located in SE Iran was selected for the image processing techniques. In conclusion, a suitable approach for lithological mapping of ophiolite complexes is demonstrated; this technique contributes meaningfully to economic geology by identifying new prospects.
Solid radiographic fusion with a nonconstrained device 5 years after cervical arthroplasty.
Heary, Robert F; Goldstein, Ira M; Getto, Katarzyna M; Agarwal, Nitin
2014-12-01
Cervical disc arthroplasty (CDA) has been gaining popularity as a surgical alternative to anterior cervical discectomy and fusion. Spontaneous fusion following a CDA is uncommon. A few anecdotal reports of heterotopic ossification around the implant sites have been noted for the BRYAN, ProDisc-C, Mobi-C, PRESTIGE, and PCM devices. All CDA fusions reported to date have been in devices that are semiconstrained. The authors reported the case of a 56-year-old man who presented with left C-7 radiculopathy and neck pain for 10 weeks after an assault injury. There was evidence of disc herniation at the C6-7 level. He was otherwise healthy with functional scores on the visual analog scale (VAS, 4.2); neck disability index (NDI, 16); and the 36-item short form health survey (SF-36; physical component summary [PCS] score 43 and mental component summary [MCS] score 47). The patient underwent total disc replacement in which the DISCOVER Artificial Cervical Disc (DePuy Spine, Inc.) was used. The patient was seen at regular follow-up visits up to 60 months. At his 60-month follow-up visit, he had complete radiographic fusion at the C6-7 level with bridging trabecular bone and no motion at the index site on dynamic imaging. He was pain free, with a VAS score of 0, NDI score of 0, and SF-36 PCS and MCS scores of 61 and 55, respectively. This is the first case report that identifies the phenomenon of fusion around a nonconstrained cervical prosthesis. Despite this unwanted radiographic outcome, the patient's clinical outcome was excellent.
Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study
Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.
2016-01-01
This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state of the art approaches together with adaptations proposed in the current work. The options evaluated in the current work result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most of the approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and performance of obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938
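One common setup among the wavelet fusion strategies compared in work like this is to average the approximation band and keep the larger-magnitude detail coefficients. A self-contained sketch using a one-level Haar transform (the specific decompositions and rules the paper evaluates vary; this is one illustrative combination, not the paper's winner):

```python
import numpy as np

def haar2d(x):
    # one-level 2-D Haar decomposition (columns, then rows); x is even-sized
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0
    ll = (a[0::2] + a[1::2]) / 2.0; lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0; hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # exact inverse of haar2d
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = a + d, a - d
    return x

def wavelet_fuse(visible, infrared):
    # fuse rule: mean of approximations, max-magnitude detail coefficients
    cv, ci = haar2d(visible), haar2d(infrared)
    ll = (cv[0] + ci[0]) / 2.0
    details = [np.where(np.abs(p) >= np.abs(q), p, q)
               for p, q in zip(cv[1:], ci[1:])]
    return ihaar2d(ll, *details)
```

The max-magnitude detail rule tends to carry the sharper edges of either modality into the fused result, which is the usual motivation for wavelet-domain fusion.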
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient histogram are calculated for each object. The EMD statistical operator is used to measure the color distance and the edge line-feature distance between corresponding objects in different periods, and an adaptive weighting method combines the color feature distance and the edge line distance to construct the object heterogeneity. Finally, curvature histogram analysis of the image patches yields the change detection results. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of change detection.
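The EMD between two per-object histograms, used above as a feature distance, has a closed form in one dimension: the L1 distance between the cumulative histograms. A sketch assuming normalized, equal-mass histograms on shared bins; the `heterogeneity` weight `w` is a placeholder, since the paper's adaptive weighting rule is not spelled out in this abstract:

```python
def emd_1d(h1, h2):
    """1-D earth mover's distance between two normalized histograms on
    the same bins: sum over bins of |cumulative difference|."""
    c, total = 0.0, 0.0
    for a, b in zip(h1, h2):
        c += a - b          # running CDF difference
        total += abs(c)     # mass that must cross this bin boundary
    return total

def heterogeneity(color_dist, edge_dist, w=0.5):
    # weighted combination of color and edge-line distances
    # (w would be chosen adaptively in the paper's scheme)
    return w * color_dist + (1.0 - w) * edge_dist
```

For example, moving all mass by one bin costs exactly 1.0, while identical histograms cost 0.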
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
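The four steps above operate on corresponding pixels only, so they can be sketched directly in numpy. Taking the pixel-wise minimum as the "common component" is an assumption for illustration; the abstract does not fix the operator:

```python
import numpy as np

def false_color_fuse(a, b):
    """Fuse two gray-level images (float arrays in [0, 1]) into an RGB
    false-color rendering following the four steps described above."""
    common = np.minimum(a, b)          # step 1: common component (assumed min)
    ua, ub = a - common, b - common    # step 2: unique components
    r = np.clip(a - ub, 0.0, 1.0)      # step 3: A minus B's unique part
    g = np.clip(b - ua, 0.0, 1.0)      #         B minus A's unique part
    # step 4: show the results through the red and green display channels
    return np.dstack([r, g, np.zeros_like(a)])
```

Where the two modalities agree, red and green receive equal values (a yellowish gray); sensor-specific details push the hue toward red or green.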
A genetically encoded fluorescent tRNA is active in live-cell protein synthesis
Masuda, Isao; Igarashi, Takao; Sakaguchi, Reiko; Nitharwal, Ram G.; Takase, Ryuichi; Han, Kyu Young; Leslie, Benjamin J.; Liu, Cuiping; Gamper, Howard; Ha, Taekjip; Sanyal, Suparna
2017-01-01
Transfer RNAs (tRNAs) perform essential tasks for all living cells. They are major components of the ribosomal machinery for protein synthesis and they also serve in non-ribosomal pathways for regulation and signaling metabolism. We describe the development of a genetically encoded fluorescent tRNA fusion with the potential for imaging in live Escherichia coli cells. This tRNA fusion carries a Spinach aptamer that becomes fluorescent upon binding of a cell-permeable and non-toxic fluorophore. We show that, despite having a structural framework significantly larger than any natural tRNA species, this fusion is a viable probe for monitoring tRNA stability in a cellular quality control mechanism that degrades structurally damaged tRNA. Importantly, this fusion is active in E. coli live-cell protein synthesis allowing peptidyl transfer at a rate sufficient to support cell growth, indicating that it is accommodated by translating ribosomes. Imaging analysis shows that this fusion and ribosomes are both excluded from the nucleoid, indicating that the fusion and ribosomes are in the cytosol together possibly engaged in protein synthesis. This fusion methodology has the potential for developing new tools for live-cell imaging of tRNA with the unique advantage of both stoichiometric labeling and broader application to all cells amenable to genetic engineering. PMID:27956502
Fusion of imaging and nonimaging data for surveillance aircraft
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre
1997-06-01
This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).
Intelligent imaging systems for automotive applications
NASA Astrophysics Data System (ADS)
Thompson, Chris; Huang, Yingping; Fu, Shan
2004-03-01
In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper will describe work in this field performed in C2VIP over the last decade - starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. Careful arrangement of the sensor array. 2. Dynamic self-calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, both at the image level and the feature level, provides much more flexibility and reliability in complex situations. We will discuss how these problems can be addressed and what the outstanding issues are.
Simulated disparity and peripheral blur interact during binocular fusion.
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2014-07-17
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO.
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for facing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of the fusion of a low resolution multispectral image and a high resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state of the art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, attesting to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
Nazarena Pizzi, M; Aguadé Bruix, S; Cuéllar Calabria, H; Aliaga, V; Candell Riera, J
2010-01-01
A 77-year-old patient was admitted for acute coronary syndrome without ST elevation. His risk was stratified using myocardial perfusion gated SPECT, and mild inferior ischemia was observed. Thus, medical therapy was optimized and the patient was discharged. He continued to have exertional dyspnea, so coronary CT angiography was performed. It revealed severe lesions in the proximal RCA. SPECT-CT fusion images correlated the myocardial perfusion defect with a posterior descending artery arising from the RCA, in a co-dominant coronary area. Subsequently, cardiac catheterization was indicated for his treatment. The current use of image fusion studies is limited to patients in whom it is difficult to attribute a perfusion defect to a specific coronary artery. In our patient, the fusion images helped to distinguish between the RCA and the circumflex artery as the culprit artery of the ischemia. Copyright © 2010 Elsevier España, S.L. y SEMNIM. All rights reserved.
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become important for robust face recognition. The main challenges in hyperspectral face recognition, however, are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary patterns) and SWLD (simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. Hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
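The basic LBP code referenced above thresholds each pixel's eight neighbours against the centre value and packs the results into one byte. A minimal numpy sketch; the neighbour ordering and the paper's exact LBP variant are assumptions here:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D image:
    bit i is set when neighbour i is >= the centre pixel."""
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour view
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

On a perfectly flat region every neighbour ties with the centre, so all eight bits are set and every code is 255; texture shows up as variation in the codes, which are then histogrammed per region.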
Lucena-Santos, Paola; Trindade, Inês A; Oliveira, Margareth; Pinto-Gouveia, José
2017-05-19
Given the clinical usefulness of the CFQ-BI (Cognitive Fusion Questionnaire-Body Image; the only existing measure to assess the body-image-related cognitive fusion), the present study aimed to confirm its one-factor structure, to verify its measurement invariance between clinical and non-clinical samples, to analyze its internal consistency and sensitivity to detect differences between samples, as well as to explore the incremental and convergent validities of the CFQ-BI scores in Brazilian samples. This was a cross-sectional study, which was conducted in clinical (women with overweight or obesity in treatment for weight loss) and non-clinical samples (women from the general population). The one-factor structure was confirmed showing factorial measurement invariance across clinical and non-clinical samples. The CFQ-BI scores presented an excellent internal consistency, were able to discriminate clinical and non-clinical samples, and were positively associated with binge eating severity, general cognitive fusion, and psychological inflexibility. Furthermore, body-image-related cognitive fusion scores (CFQ-BI) presented incremental validity over a general measure of cognitive fusion in the prediction of binge eating symptoms. This study demonstrated that CFQ-BI is a short scale with reliable and robust scores in Brazilian samples, presenting incremental and convergent validities, measurement invariance, and sensitivity to detect differences between clinical and non-clinical groups of women, enabling comparative studies between them.
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image; the registration process is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the possible use of the proposed framework for updating and enhancing 3D urban maps.
Outcomes of Spinal Fusion for Cervical Kyphosis in Children with Neurofibromatosis.
Helenius, Ilkka J; Sponseller, Paul D; Mackenzie, William; Odent, Thierry; Dormans, John P; Asghar, Jahangir; Rathjen, Karl; Pahys, Joshua M; Miyanji, Firoz; Hedequist, Daniel; Phillips, Jonathan H
2016-11-02
Cervical kyphosis may occur with neurofibromatosis type I (NF1) and is often associated with vertebral dysplasia. Outcomes of cervical spinal fusion in patients with NF1 are not well described because of the rarity of the condition. We aimed to (1) characterize the clinical presentation of cervical kyphosis and (2) report the outcomes of posterior and anteroposterior cervical fusion for the condition in these children. The medical records and imaging studies of 22 children with NF1 who had undergone spinal fusion for cervical kyphosis (mean, 67°) at a mean age of 11 years and who had been followed for a minimum of 2 years were reviewed. Thirteen children presented with neck pain; 10, with head tilt; 9, with a previous cervical laminectomy or fusion; and 5, with a neurologic deficit. Two patients had spontaneous dislocation of the mid-cervical spine without a neurologic deficit. Eleven had scoliosis, with the major curve measuring a mean of 61°. Nine patients underwent posterior and 13 underwent anteroposterior surgery. Twenty-one received spinal instrumentation, and 1 was not treated with instrumentation. Preoperative halo traction was used for 9 patients, and it reduced the mean preoperative kyphosis by 34% (p = 0.0059). At the time of final follow-up, all spinal fusion sites had healed and the cervical kyphosis averaged 21° (mean correction, 69%; p < 0.001). The cervical kyphosis correction was significantly better after the anteroposterior procedures (83%) than after the posterior-only procedures (58%) (p = 0.031). Vertebral dysplasia and erosion continued in all 17 patients who had presented with dysplasia preoperatively. Thirteen patients had complications, including 5 new neurologic deficits and 8 cases of junctional kyphosis. Nine patients required revision surgery. Junctional kyphosis was more common in children in whom ≤5 levels had been fused (p = 0.054). 
Anteroposterior surgery provided better correction of cervical kyphosis than posterior spinal fusion in children with NF1. Erosion of vertebral bodies continued during the postoperative follow-up period in all patients who had presented with dysplastic changes preoperatively. The cervical spine should be screened in all children with NF1. Fusion should include at least 6 levels to prevent junctional kyphosis. Therapeutic Level III. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
[Magnetic resonance imaging in facial injuries and digital fusion CT/MRI].
Kozakiewicz, Marcin; Olszycki, Marek; Arkuszewski, Piotr; Stefańczyk, Ludomir
2006-01-01
Magnetic resonance images (MRI) and their digital fusion with computed tomography (CT) data in patients with facial injuries are presented in this study. MR imaging of 12 posttraumatic patients was performed in the same planes as their previous CT scans. Evaluation focused on the quality of facial soft-tissue depiction, which was unsatisfactory on CT. Using our own "Dental Studio" program, digital fusion of the two modalities was performed. Pathologic dislocations and injuries of the facial soft tissues are visualized better on MRI than on CT examination. In particular, MRI clearly reveals disturbances in intraorbital soft-tissue structures. MRI-based assessment is valuable in patients with facial soft-tissue injuries, especially in cases of orbital/sinus herniation. Fused CT/MRI scans allow simultaneous evaluation of bone structure and soft tissues of the same region.
Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection
Wei, Pan; Anderson, Derek T.
2018-01-01
A significant challenge in object detection is accurately identifying an object's position in image space; one algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameter settings can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online (versus training-only) image augmentation strategy. Experiments comparing the results with and without fusion are presented. We demonstrate that the combination of augmentation and fusion performs best, with respect to higher accuracy rates and reduced outlier influence. The approach is demonstrated in the context of cone, pedestrian, and box detection for Advanced Driver Assistance Systems (ADAS) applications. PMID:29562609
Multi-focus image fusion algorithm using NSCT and MPCNN
NASA Astrophysics Data System (ADS)
Liu, Kang; Wang, Lianli
2018-04-01
Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse coupled neural network (MPCNN), this paper proposes an effective image fusion method. First, the source images are decomposed into low-frequency and high-frequency components using the NSCT, and the low-frequency components are processed with regional statistical fusion rules. For the high-frequency components, the spatial frequency (SF) is calculated and input into the MPCNN model to obtain the relevant coefficients according to the fire-mapping image of the MPCNN. Finally, the fused image is reconstructed by inverse transformation of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the proposed method achieves an improvement in both human visual perception and objective evaluation, showing that the method is effective, practical, and performs well.
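The spatial frequency measure used to drive the MPCNN can be sketched as below. This follows the standard SF definition (root of mean squared row and column gradients); whether the paper uses this exact normalization is an assumption.

```python
import math

def spatial_frequency(block):
    """Spatial frequency (SF) of an image block: combines the squared
    horizontal (row-frequency) and vertical (column-frequency) intensity
    differences, a common activity measure for multi-focus fusion rules."""
    rows, cols = len(block), len(block[0])
    rf = sum((block[i][j] - block[i][j - 1]) ** 2
             for i in range(rows) for j in range(1, cols))
    cf = sum((block[i][j] - block[i - 1][j]) ** 2
             for i in range(1, rows) for j in range(cols))
    n = rows * cols
    return math.sqrt(rf / n + cf / n)
```

A uniform block yields SF = 0; blocks with sharp in-focus detail yield larger SF and therefore stronger MPCNN firing.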
A motorized ultrasound system for MRI-ultrasound fusion guided prostatectomy
NASA Astrophysics Data System (ADS)
Seifabadi, Reza; Xu, Sheng; Pinto, Peter; Wood, Bradford J.
2016-03-01
Purpose: This study presents MoTRUS, a motorized transrectal ultrasound system that enables remote navigation of a transrectal ultrasound (TRUS) probe during da Vinci-assisted prostatectomy. MoTRUS not only provides a stable platform for the ultrasound probe, but also allows the physician to navigate it remotely while seated at the da Vinci console. This study also presents a phantom feasibility study, with the goal of intraoperative MRI-US image fusion to bring preoperative MR images into the operating room for the best visualization of the gland, boundaries, nerves, etc. Method: A two degree-of-freedom probe holder was developed to insert and rotate a bi-plane transrectal ultrasound transducer. A custom joystick enables remote navigation of MoTRUS. Safety features were included to avoid inadvertent risks to the patient. Custom software was developed to fuse preoperative MR images with intraoperative ultrasound images acquired by MoTRUS. Results: Remote TRUS probe navigation was evaluated in a patient, after obtaining the required consent, during prostatectomy using MoTRUS. System setup in the OR took 10 min. MoTRUS provided imaging capability comparable to manual scanning, with the addition of remote navigation and stable imaging. No complications were observed. Image fusion was evaluated on a commercial prostate phantom, using electromagnetic tracking for the fusion. Conclusions: Motorized navigation of the TRUS probe during prostatectomy is safe and feasible. Remote navigation provides the physician with more precise and easier control of the ultrasound image while removing the burden of manual probe manipulation. Image fusion improved visualization of the prostate and its boundaries in a phantom study.
Villavicencio, Alan T; Burneikiene, Sigita
2016-10-01
Recombinant human bone morphogenetic protein-2 (rhBMP-2) remains the primary synthetic osteoinductive material used in spinal fusion surgery today. The early inflammatory reaction to rhBMP-2, manifesting with radicular symptoms, has been previously reported in patients undergoing transforaminal lumbar interbody fusion (TLIF). There is disagreement regarding the factors affecting its occurrence and whether such symptoms are dose dependent. The purpose of this analysis was to determine the incidence of rhBMP-2-induced radiculitis and its relationship to dose. A retrospective cohort analysis of prospectively collected data was performed. All consecutive patients (n=204) who underwent one- or two-level TLIF and instrumented posterolateral fusion with off-label rhBMP-2 use were included in this analysis. Patients who developed new radicular symptoms after initial postoperative improvement, and who had sterile fluid collections indicative of an inflammatory process or no structural abnormalities on imaging studies that would explain these symptoms, were deemed to have rhBMP-2-induced radiculitis. Magnetic resonance imaging (MRI) scans were obtained for all patients who developed postoperative radicular symptoms. Correlations between the total rhBMP-2 dose, dose per spinal level, and incidence of radiculitis were evaluated while controlling for age, sex, number of TLIF levels, and surgeon. The incidence of postoperative radiculitis was 11.3% (23 out of 204). The average total rhBMP-2 dose was 4.9 mg (range=2.1-12) and the average dose per spinal level was 3.8 mg (range=1.05-12). Logistic regression analysis did not identify any significant correlations between rhBMP-2 dose and the incidence of radiculitis (p=.6). The incidence of rhBMP-2-induced radiculitis in patients undergoing TLIF is quite high, but no dose-related correlations were found.
The study, however, cannot rule out the possibility that a larger variation in bone morphogenetic protein (BMP) doses could still be a factor in the development of rhBMP-2-associated radiculitis. Copyright © 2016 Elsevier Inc. All rights reserved.
Calhoun, Vince D; Sui, Jing
2016-01-01
It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565
Label fusion based brain MR image segmentation via a latent selective model
NASA Astrophysics Data System (ADS)
Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu
2018-04-01
Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we adopt the Kronecker delta function in the label prior, which is more suitable than other models, and design a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label with the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
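The core of any weighted label fusion scheme can be illustrated with a simple weighted-voting sketch. This is a deliberately reduced stand-in for the paper's Bayesian/EM model: the weights here are assumed to be given (e.g. from patch intensity similarity), and all names are illustrative.

```python
def weighted_label_fusion(patch_labels, weights, background=0):
    """Fuse candidate labels from a patch dictionary by weighted voting.
    `patch_labels[k]` is the label proposed by training patch k and
    `weights[k]` its similarity-based weight. The background is treated
    as an ordinary label, on equal footing with the regions of interest."""
    votes = {}
    for label, w in zip(patch_labels, weights):
        votes[label] = votes.get(label, 0.0) + w
    votes.setdefault(background, 0.0)  # background always competes
    return max(votes, key=votes.get)
```

Running this per voxel over the target scan produces a segmentation map; the paper's contribution is in how the weights and membership are inferred, not in the voting itself.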
Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification
Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.
2016-01-01
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661
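The decision-fusion idea in the abstract above, aggregating patch-level CNN outputs into one slide-level label, can be sketched as follows. Simple probability averaging stands in for the learned fusion model, and the discriminative-patch flags (produced by EM in the paper) are assumed to be given.

```python
def fuse_patch_predictions(patch_probs, discriminative):
    """Aggregate per-patch class probabilities into one slide-level
    prediction, averaging only over patches flagged as discriminative."""
    kept = [p for p, keep in zip(patch_probs, discriminative) if keep]
    if not kept:
        kept = patch_probs  # fall back to all patches
    n_classes = len(kept[0])
    avg = [sum(p[c] for p in kept) / len(kept) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)  # predicted class index
```

The point of modeling discriminativeness is visible even in this toy form: a non-discriminative patch with a confident but irrelevant prediction no longer sways the slide-level call.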
Radiological Determination of Postoperative Cervical Fusion: A Systematic Review.
Rhee, John M; Chapman, Jens R; Norvell, Daniel C; Smith, Justin; Sherry, Ned A; Riew, K Daniel
2015-07-01
Systematic review. To determine the best criteria for radiological determination of postoperative subaxial cervical fusion, to be applied in current clinical practice and in ongoing research assessing fusion, in order to standardize assessment and improve comparability. Despite the availability of multiple imaging modalities and criteria, there remains no method of determining cervical fusion with absolute certainty, nor a clear consensus on the specific criteria to be applied. A systematic search of MEDLINE and the Cochrane Collaboration Library (through March 2014) was conducted. Included studies assessed C2 to C7 via an anterior or posterior approach, at 12 weeks or more postoperatively, with any graft or implant. The overall body of evidence with respect to 6 posited key questions was determined using Grading of Recommendations Assessment, Development and Evaluation and Agency for Healthcare Research and Quality precepts. Of the plain radiographical modalities, there is moderate evidence that the interspinous process motion method (<1 mm) is more accurate than the Cobb angle method for assessing anterior cervical fusion. Of the advanced imaging modalities, there is moderate evidence that computed tomography (CT) is more accurate and reliable than magnetic resonance imaging in assessing anterior cervical fusion. There is insufficient evidence regarding the optimal modality and criteria for assessing posterior cervical fusions, and insufficient evidence to support a single time point after surgery as optimal for determining fusion, although some evidence suggests that the reliability of radiography and CT improves with increasing time postoperatively. We recommend using less than 1 mm of interspinous process motion as the initial criterion for determining anterior cervical arthrodesis for both clinical and research applications.
If further imaging is needed because of an indeterminate radiographical evaluation, we recommend CT, which has relatively high accuracy and reliability but, due to greater radiation exposure and cost, is not routinely suggested. We recommend that plain radiographs also be the initial method of determining posterior cervical fusion, but suggest a lower threshold for obtaining CT scans because dynamic radiographs may not be as useful if spinous processes have been removed by laminectomy. Level of Evidence: 1.
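The review's recommended interspinous-motion criterion is simple enough to state as code. The measurement values in the example are hypothetical; only the <1 mm threshold comes from the review.

```python
def interspinous_motion_fused(flexion_mm, extension_mm, threshold_mm=1.0):
    """Interspinous process motion method: a level is considered fused
    when the change in interspinous distance between flexion and
    extension radiographs is below the threshold (<1 mm per the review)."""
    return abs(flexion_mm - extension_mm) < threshold_mm
```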
Nonrigid 3D medical image registration and fusion based on deformable models.
Liu, Peng; Eberhardt, Benjamin; Wybranski, Christian; Ricke, Jens; Lüdemann, Lutz
2013-01-01
For coregistration of medical images, rigid methods often fail to provide enough freedom, while reliable elastic methods are clinically available for special applications only. The number of degrees of freedom of elastic models must be reduced for use in the clinical setting to achieve a reliable result. We propose a novel geometry-based method of nonrigid 3D medical image registration and fusion. The proposed method uses a 3D surface-based deformable model as guidance. In our twofold approach, the deformable mesh from one of the images is first fitted to the boundary of the object to be registered. Thereafter, the nonrigid volume deformation vector field needed for registration and fusion inside the region of interest (ROI) described by the active surface is inferred from the displacement of the surface mesh points. The method was validated using clinical images of a quasi-rigid organ (kidney) and of an elastic organ (liver). The reduction in the standard deviation of the image intensity difference between the reference image and the model was used as a measure of performance. Landmarks placed at vessel bifurcations in the liver served as a gold standard for evaluating registration of the elastic liver. Our registration method was compared with affine registration using mutual information applied to the quasi-rigid kidney. The new method achieved 15.11% better quality, with a high confidence level of 99%, for rigid registration. However, when applied to the elastic liver, the method had an average landmark dislocation of 4.32 mm. In contrast, affine registration of extracted livers yielded a significantly (P = 0.000001) smaller dislocation of 3.26 mm. In conclusion, our validation shows that the novel approach is applicable in cases where internal deformation is not crucial, but it has limitations where internal displacement must also be taken into account.
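The key step, inferring an interior deformation field from surface mesh displacements, can be illustrated with inverse-distance weighting. This is a generic stand-in for the paper's inference scheme, not its actual method; all names are illustrative.

```python
import math

def interpolate_deformation(point, surface_points, displacements, power=2):
    """Infer the deformation vector at an interior voxel from the known
    displacements of surrounding surface mesh points, using simple
    inverse-distance weighting."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for sp, disp in zip(surface_points, displacements):
        d = math.dist(point, sp)
        if d == 0:
            return list(disp)  # exactly on a mesh point
        w = 1.0 / d ** power
        den += w
        for k in range(3):
            num[k] += w * disp[k]
    return [v / den for v in num]
```

Evaluating this at every voxel inside the ROI yields a dense deformation vector field driven entirely by the fitted surface, which is exactly the dimensionality reduction the abstract argues for.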
Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.
Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung
2018-04-01
In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementary information of images of different modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but many deficiencies remain. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network and update its parameters. Images of two different modalities are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. We also trained the network using the EBPA and the GSA individually. The results reveal that the hybrid EBPGSA not only outperformed both the EBPA and the GSA, but also trained the neural network more accurately, as shown by the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
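The RBF core of such a network can be sketched as a plain Gaussian radial basis function forward pass. This shows only the generic RBFNN computation; the paper's fuzzy-partition, front-combination, and inference layers are omitted, and all names are assumptions.

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian RBF network: each hidden unit responds
    to the distance between the input vector and its center, and the
    output is a weighted sum of the hidden responses."""
    hidden = []
    for c, s in zip(centers, widths):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        hidden.append(math.exp(-d2 / (2 * s * s)))
    return bias + sum(w * h for w, h in zip(weights, hidden))
```

Training (whether by EBPA, GSA, or their hybrid) amounts to adjusting `centers`, `widths`, `weights`, and `bias` to minimize the fusion error.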
Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.
Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun
2017-11-01
Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq
2018-01-01
For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor, whereas FREAK is a dense descriptor. Moreover, SURF is a scale- and rotation-invariant descriptor that performs better in terms of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, and geometric and photometric deformations, and it also performs better at low illumination than the FREAK descriptor. In contrast, FREAK is a retina-inspired, fast descriptor that performs better for classification-based problems than the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the strengths of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed visual words fusion technique significantly improved CBIR performance compared with the feature fusion of the two descriptors and with state-of-the-art image retrieval techniques. PMID:29694429
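At signature level, visual words fusion reduces to combining two bag-of-visual-words histograms into one image representation. A minimal sketch, assuming simple concatenation with L1 normalization (the specific combination rule used in the article may differ):

```python
def fuse_visual_words(hist_surf, hist_freak):
    """Visual words fusion: concatenate the bag-of-visual-words
    histograms built from two descriptors (here named for SURF and
    FREAK) into a single image signature, then L1-normalize so images
    with different keypoint counts remain comparable."""
    fused = list(hist_surf) + list(hist_freak)
    total = sum(fused) or 1
    return [v / total for v in fused]
```

Retrieval then ranks repository images by distance between these fused signatures.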
Microwave and video sensor fusion for the shape extraction of 3D space objects
NASA Technical Reports Server (NTRS)
Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar
1987-01-01
A new system for the fusion of optical image data and polarized radar scattering cross-sections is presented. Considering the scattering data in conjunction with the image data reduces the problem of ambiguity. Only a small part of the surface needs to be reconstructed from the radar cross-sections; the remaining portion is constrained by the optical image.