Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo-Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for the different frequency bands of the spectrum diagrams; the SSIM index evaluates their high-frequency content to assign the fusion weights adaptively. After the new spectrum diagram is assembled according to the fusion rules, the final fused image is obtained by the inverse 2D-PWVD and inverse GIHS transforms. Experimental results show that the proposed method obtains high-quality fused images.
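A minimal sketch of the adaptive weighting idea, assuming a GIHS intensity equal to the band mean and substituting a Gaussian high/low frequency split for the 2D-PWVD spectrum (function names and the SSIM-to-weight mapping are illustrative assumptions, not the authors' exact scheme):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def gihs_intensity(ms):                  # ms: (H, W, B) multi-spectral cube
    return ms.mean(axis=2)               # generalized IHS intensity = band mean

def ssim_weighted_fuse(intensity, pan):
    # A Gaussian split stands in here for the 2D-PWVD frequency separation.
    low_i, low_p = gaussian_filter(intensity, 3), gaussian_filter(pan, 3)
    high_i, high_p = intensity - low_i, pan - low_p
    # SSIM between the two high-frequency layers drives the adaptive weight:
    # the more similar they are, the more evenly they are mixed.
    s = structural_similarity(high_i, high_p,
                              data_range=high_p.max() - high_p.min())
    w = 0.5 * (1.0 + s)                  # assumed mapping from SSIM to weight
    return low_i + w * high_p + (1.0 - w) * high_i
```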
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion to planetary images are rare, although image fusion is well known in Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performance was verified with images from the Lunar Reconnaissance Orbiter Camera (LROC). The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen, "Review article: multisensor image fusion in remote sensing: concepts, methods and applications," International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Y., "Understanding image fusion," Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, P., et al., "Enhancement of spatial resolution of the LROC Wide Angle Camera images," XXIII ISPRS Congress Archives (2016).
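One plausible form of such a hybrid, sketched under the assumptions that the MS image has already been resampled to the Pan grid and that the hybrid keeps the MS intensity's wavelet approximation while taking the detail subbands from the Pan (names are illustrative):

```python
import numpy as np
import pywt

def hybrid_ihs_wavelet(ms, pan, wavelet="db2"):
    # ms: (H, W, B) resampled to the pan grid; pan: (H, W)
    intensity = ms.mean(axis=2)                    # IHS-style intensity
    cA_i, _details_i = pywt.dwt2(intensity, wavelet)
    _cA_p, details_p = pywt.dwt2(pan, wavelet)
    # Keep the MS approximation, swap in the Pan detail subbands.
    fused_i = pywt.idwt2((cA_i, details_p), wavelet)
    # Inject the intensity change back into every band (additive IHS step).
    delta = fused_i[:ms.shape[0], :ms.shape[1]] - intensity
    return ms + delta[..., None]
```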
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
In traditional multi-resolution fusion of infrared and visible images, low-contrast targets may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. Interesting target regions are extracted from the source images using motion features obtained from the modified attention model, and grayscale fusion of the source images is performed in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the grayscale result with appropriate pseudo-colors. Experiments show that the algorithm highlights dim targets effectively and improves the SNR of the fused image.
Image fusion based on millimeter-wave for concealed weapon detection
NASA Astrophysics Data System (ADS)
Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui
2010-11-01
This paper describes a novel multi-sensor image fusion technology for concealed weapon detection (CWD). Because clothing is largely transparent at millimeter-wave frequencies, a millimeter-wave radiometer can image and distinguish contraband concealed beneath clothes, such as guns, knives, and detonators; we therefore adopt passive millimeter-wave (PMMW) imaging for airport security. However, owing to the long wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter-wave image has low spatial resolution, which cannot meet the needs of practical applications. A visible image (VI), which has much higher resolution, is therefore fused with the millimeter-wave image to enhance readability. Before fusion, a novel image pre-processing step specific to the combination of millimeter-wave and visible imagery is applied, and the fusion itself uses multi-resolution analysis (MRA) based on the wavelet transform (WT). Experimental results show that this method is advantageous for concealed weapon detection and of practical significance.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
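The integration can be sketched in a few lines, assuming the MS bands have been resampled to the Pan grid beforehand; rather than replacing the first principal component with the Pan image outright (classic PCA fusion), only the Pan's high-pass detail is injected into it (a sketch of the idea, not the paper's exact implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pca_hpf_pansharpen(ms, pan, box=5):
    # ms: (H, W, B) multi-spectral cube on the pan grid; pan: (H, W)
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(x - mean, rowvar=False))
    pcs = (x - mean) @ vecs                       # eigh sorts ascending,
    detail = pan - uniform_filter(pan.astype(float), box)  # so PC1 is last
    pcs[:, -1] += detail.ravel()                  # inject HPF detail into PC1
    return (pcs @ vecs.T + mean).reshape(h, w, b)
```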
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-05
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
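A toy illustration of the constraint, under strong simplifying assumptions that are ours rather than the authors': the IR image is taken to store brightness temperature in kelvin, and the VIS image is modulated multiplicatively block by block so each region tracks the radiated energy E = sigma * T^4 of the corresponding IR region.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_modulate(vis, ir_temp_k, block=16):
    # Rescale each VIS block so its mean tracks the regional radiated
    # energy of the corresponding IR block (toy discretization).
    energy = SIGMA * ir_temp_k.astype(float) ** 4
    out = vis.astype(float).copy()
    for i in range(0, vis.shape[0], block):
        for j in range(0, vis.shape[1], block):
            v = out[i:i+block, j:j+block]          # view into `out`
            e = energy[i:i+block, j:j+block].mean()
            v *= e / (v.mean() + 1e-12)            # match regional energy level
    return out
```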
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
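The trade-off can be written compactly; this sketch assumes the TIR band has been upsampled to the PAN grid and treats the scaling factor k as given (the paper derives an optimal value):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_tir_pan(tir_up, pan, k=0.4):
    detail = pan - gaussian_filter(pan, sigma=2)   # spatial detail of the Pan
    return tir_up + k * detail                     # k = 0: pure TIR; larger k:
                                                   # sharper, less thermal fidelity
```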
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the problem that existing fusion methods cannot self-adaptively adjust their fusion rules to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs an objective function as the weighted sum of evaluation indices and optimizes it with GSDA to obtain a higher-resolution RS image. The main points are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
• The text puts forward the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
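As a loose illustration of the optimization step only, the following toy genetic search tunes normalized fusion-rule weights against a user-supplied objective (the weighted sum of evaluation indices); the paper's GSDA additionally couples this with ISODATA-style clustering, which is omitted here:

```python
import numpy as np

def genetic_search(objective, dim=3, pop=20, gens=50, rng=None):
    rng = rng or np.random.default_rng(0)
    p = rng.random((pop, dim))                          # initial population
    for _ in range(gens):
        fit = np.array([objective(w / w.sum()) for w in p])
        elite = p[np.argsort(fit)[-pop // 2:]]          # keep the best half
        pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
        children = (elite[pairs[:, 0]] + elite[pairs[:, 1]]) / 2  # crossover
        children += rng.normal(0, 0.05, children.shape)           # mutation
        p = np.vstack([elite, np.clip(children, 1e-6, None)])
    best = p[np.argmax([objective(w / w.sum()) for w in p])]
    return best / best.sum()

# objective = weighted sum of evaluation indices of the fused image, e.g.
# lambda w: entropy(fuse(w)) + correlation(fuse(w), reference)  # hypothetical
```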
3D reconstruction from multi-view VHR-satellite images in MicMac
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur
2018-05-01
This work addresses the generation of high-quality digital surface models by fusing multiple depth maps calculated with a dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that enhances the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
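The two objective metrics used in the study are available directly in scikit-image; a minimal helper (the function name is ours) might look like:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_report(reference, enhanced):
    # Compare an enhanced frame against a reference frame.
    rng = reference.max() - reference.min()
    return {"PSNR": peak_signal_noise_ratio(reference, enhanced, data_range=rng),
            "SSIM": structural_similarity(reference, enhanced, data_range=rng)}
```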
A new method of Quickbird own image fusion
NASA Astrophysics Data System (ADS)
Han, Ying; Jiang, Hong; Zhang, Xiuying
2009-10-01
With the rapid development of remote sensing technology, the means of accessing remote sensing data have become increasingly abundant, so the same area can yield a large sequence of multi-temporal images of different resolutions. The main fusion methods at present are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, wavelet transforms, and so on. The IHS transform suffers serious spectral distortion; the Mallat algorithm omits the low-frequency information of the high-spatial-resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition handles different sizes, directions, details and edges very well, but different fusion rules and algorithms achieve different effects. This article takes fusion of Quickbird imagery with itself as an example, comparing fusion based on the wavelet transform with HVS against fusion based on the wavelet transform with IHS; the results show that the former performs better. The correlation coefficient, the relative average spectral error index and other usual indices are introduced to evaluate image quality.
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour, and it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques and provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
NASA Astrophysics Data System (ADS)
Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Wang, Zhanshan; Wei, Lai; Liu, Dongxiao; Cao, Leifeng; Gu, Yuqiu
2018-03-01
Multi-channel Kirkpatrick-Baez (KB) microscopes, which have better resolution and collection efficiency than pinhole cameras, have been widely used in laser inertial confinement fusion to diagnose time evolution of the target implosion. In this study, a tandem multi-channel KB microscope was developed to have sixteen imaging channels with the precise control of spatial resolution and image intervals. This precise control was created using a coarse assembly of mirror pairs with high-accuracy optical prisms, followed by precise adjustment in real-time x-ray imaging experiments. The multilayers coated on the KB mirrors were designed to have substantially the same reflectivity to obtain a uniform brightness of different images for laser-plasma temperature analysis. The study provides a practicable method to achieve the optimum performance of the microscope for future high-resolution applications in inertial confinement fusion experiments.
USDA-ARS?s Scientific Manuscript database
Atmosphere-Land Exchange Inverse model and associated disaggregation scheme (ALEXI/DisALEXI). Satellite-based ET retrievals from both the Moderate Resolution Imaging Spectroradiometer (MODIS; 1 km, daily) and Landsat (30 m, bi-weekly) are fused with The Spatial and Temporal Adaptive Reflective Fusion ...
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One case where image fusion is particularly useful is fusing imagery data from multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based and structural-similarity-based metrics, have been developed to make such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validation of these metrics for different types of applications. For this purpose, human-perception-based validation methods have been developed, particularly the use of receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
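A sketch of the validation step with hypothetical numbers: given per-image metric scores and binary human judgments, the AUC measures how well the no-reference metric reproduces the human ranking (an AUC near 1 means close agreement).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical placeholder data: one no-reference score per fused image,
# and a binary human judgment of whether the fusion is acceptable.
metric_scores = np.array([0.81, 0.42, 0.77, 0.55, 0.93, 0.38])
human_labels = np.array([1, 0, 1, 0, 1, 0])   # 1 = judged well-fused

print("AUC:", roc_auc_score(human_labels, metric_scores))
```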
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear-gradient histogram are computed for each object. The Earth Mover's Distance (EMD) between corresponding objects of different dates is used as the statistical operator for both the color distance and the edge-line feature distance, and an adaptive weighting combines the two distances into an object heterogeneity measure. Finally, curvature histogram analysis of the image patches yields the change detection result. Experimental results show that the method fully fuses color and edge-line features, thereby improving the accuracy of the change detection.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
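The second method can be sketched as a least-squares affine fit to user-selected control points followed by resampling; this is an assumed minimal workflow, not the EVS code itself:

```python
import numpy as np
from scipy.ndimage import affine_transform

def fit_affine(src_pts, dst_pts):
    # Least-squares fit of dst ~ A @ src + t from matched control points.
    src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:2].T, params[2]                 # A (2x2), t (2,)

def resample(image, A, t, out_shape):
    # affine_transform maps output coordinates to input coordinates,
    # so the fitted forward mapping must be inverted.
    A_inv = np.linalg.inv(A)
    return affine_transform(image, A_inv, offset=-A_inv @ t,
                            output_shape=out_shape)
```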
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and enhance image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, more perceptually useful to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
Han, Lei; Shi, Lu; Yang, Yiling; Song, Dalei
2014-06-10
Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common method of increasing resolution fuses the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance, and often fail to take into account the thermal physical properties of the IR images. As a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical properties-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using regular multi-resolution fusion approach, such as the multiwavelet analysis. This step significantly increases the visual details in the IR image, but fake thermal information may be included. Next, the Stefan-Boltzmann Law is applied to correct the distortion, to retain or recover the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluation demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties.
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists in the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pléiades, WorldView-2, IKONOS and GeoEye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
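One plausible reading of the half-gradient building block, sketched with scipy's grayscale morphology (the injection rule at the end is an illustrative assumption, not the paper's exact scheme):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def half_gradients(pan, size=(3, 3)):
    g_plus = grey_dilation(pan, size=size) - pan    # external half gradient
    g_minus = pan - grey_erosion(pan, size=size)    # internal half gradient
    return g_plus, g_minus

def inject(ms_band, pan, weight=0.5):
    # Assumed toy injection rule: add the morphological detail signal
    # derived from the two half gradients to the (resampled) MS band.
    g_plus, g_minus = half_gradients(pan)
    return ms_band + weight * (g_minus - g_plus)
```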
NASA Astrophysics Data System (ADS)
Yao, Wei; van Aardt, Jan; Messinger, David
2017-05-01
The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data to the benefit of ecosystem studies in particular. The onboard spectrometer will collect radiance spectra in the visible to short wave infrared (VSWIR) region (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30 m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets, to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growing season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to come from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm can then be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configuration; ii) the pixel size of each image is x; and iii) at least r² images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18 m), collected between 2013-2015 (summer and fall) over our study area (NEON's Pacific Southwest Domain; Fresno, CA) to generate higher spatial resolution imagery (GSD 9 m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1 m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to HyspIRI data.
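A toy version of the quoted reconstruction rule for r = 2, assuming registration has already aligned the four frames on the half-pixel grid:

```python
import numpy as np

def interleave_sr(frames, r=2):
    # frames: list of r*r arrays of shape (H, W), offset by x/r steps.
    h, w = frames[0].shape
    hi = np.zeros((h * r, w * r), dtype=float)
    for k, f in enumerate(frames):
        dy, dx = divmod(k, r)          # sub-pixel offset of this frame
        hi[dy::r, dx::r] = f           # place onto the fine x/r grid
    return hi
```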
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field image poses significant difficulties in exploiting its advantage. To mitigate the dependency of accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, which also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light field applications such as depth estimation.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
Three-dimensional super-resolved live cell imaging through polarized multi-angle TIRF.
Zheng, Cheng; Zhao, Guangyuan; Liu, Wenjie; Chen, Youhua; Zhang, Zhimin; Jin, Luhong; Xu, Yingke; Kuang, Cuifang; Liu, Xu
2018-04-01
Measuring three-dimensional nanoscale cellular structures is challenging, especially when the structure is dynamic. Owing to the informative total internal reflection fluorescence (TIRF) imaging under varied illumination angles, multi-angle (MA) TIRF has been shown to offer nanoscale axial and subsecond temporal resolution. However, conventional MA-TIRF still performs poorly in lateral resolution and fails to characterize the depth image in densely distributed regions. Here, we emphasize lateral super-resolution in MA-TIRF, demonstrated by simply introducing polarization modulation into the illumination procedure. Equipped with a sparsity prior and an accelerated proximal algorithm, we recover more precise 3D sample structure than previous methods, enabling live cell imaging with a temporal resolution of 2 s and recovering high-resolution mitochondria fission and fusion processes. We have also shared the recovery program, which is, to the best of our knowledge, the first open-source recovery code for MA-TIRF.
NASA Astrophysics Data System (ADS)
Awumah, A.; Mahanti, P.; Robinson, M. S.
2017-12-01
Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process: the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied; the percent of spatial detail from the Pan is determined by a parameter whose value may be varied from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red = 415 nm, green = 321/415 nm, blue = 321/360 nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (1 to 10, after which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees: crater interiors suffered the most color distortion, while ejecta experienced less. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact from color distortion than in the uncontrolled IHS fusion process. References: (1) Mahanti et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
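One common parameterization of the controlled IHS trade-off consistent with the description above (after Choi 2006, reference (2); the exact notation is assumed, and the MS image is assumed resampled to the Pan grid): t = 1 leaves the MS bands untouched, while t approaching infinity reproduces full IHS substitution.

```python
import numpy as np

def controlled_ihs(ms, pan, t=4.0):
    # ms: (H, W, B) on the pan grid; pan: (H, W); t in [1, inf).
    intensity = ms.mean(axis=2)
    delta = (pan - intensity) * (1.0 - 1.0 / t)   # scaled Pan detail
    return ms + delta[..., None]                  # same detail in every band
```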
Advanced Scintillator Detectors for Neutron Imaging in Inertial Confinement Fusion
NASA Astrophysics Data System (ADS)
Geppert-Kleinrath, Verena; Danly, Christopher; Merrill, Frank; Simpson, Raspberry; Volegov, Petr; Wilde, Carl
2016-10-01
The neutron imaging team at Los Alamos National Laboratory (LANL) has been providing two-dimensional neutron imaging of the inertial confinement fusion process at the National Ignition Facility (NIF) for over five years. Neutron imaging is a powerful tool in which position-sensitive detectors register neutrons emitted in the fusion reactions, producing a picture of the burning fuel. Recent images have revealed possible multi-dimensional asymmetries, calling for additional views to facilitate three-dimensional imaging. These will be along shorter lines of sight to stay within the existing facility at NIF. In order to field imaging capabilities equivalent to the existing system several technological challenges have to be met: high spatial resolution, high light output, and fast scintillator response to capture lower-energy neutrons, which have scattered from non-burning regions of fuel. Deuterated scintillators are a promising candidate to achieve the timing and resolution required; a systematic study of deuterated and non-deuterated polystyrene and liquid samples is currently ongoing. A test stand has been implemented to measure the response function, and preliminary data on resolution and light output have been obtained at the LANL Weapons Neutrons Research facility.
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems remain: 1) how to extract high-level features of aircraft; 2) locating objects within such large images is difficult and time consuming; 3) satellite images come at multiple resolutions. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed that combines a top-down visual mechanism (a deep CNN model) with a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to handle the multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.
[Research Progress of Multi-Model Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the integration of the advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Finally, we indicate present problems and future research directions for multi-model medical images.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data covering different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images for understanding the actual status of various land cover classes. The prospects for the use of satellite images in land cover classification are extremely promising. The quality of satellite images available for land-use mapping is improving rapidly with the development of advanced sensor technology; particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by newer satellite sensors such as MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed, and fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions, so the fused image provides a more complete view of the observed objects. A main aim of image fusion is to integrate different data in order to obtain more information than can be derived from each of the single-sensor data alone; a good example is the fusion of images acquired by sensors of different spatial and spectral resolutions. Researchers have applied fusion techniques for three decades and have proposed various useful methods. The importance of high-quality synthesis of spectral information is well established for land cover classification. More recently, multiresolution analysis employing the discrete wavelet transform has been used in image fusion. It was found that multisensor image fusion is a trade-off between the spectral information from a low-resolution multi-spectral image and the spatial information from a high-resolution image; with wavelet-transform-based fusion it is easy to control this trade-off. A newer transform, the curvelet transform, was introduced by Starck: a ridgelet transform is applied to square blocks of the detail frames of an undecimated wavelet decomposition, yielding the curvelet transform. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform can represent piecewise-linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may easily be reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (250 m spatial resolution, 620-670 nm) and band 2 (250 m spatial resolution, 842-876 nm) are considered, as these bands have special features for identifying agriculture and other land covers.
In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities. PALSAR can operate in several modes: fine-beam single (FBS) polarization (HH), fine-beam dual (FBD) polarization (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. This makes PALSAR imagery very attractive for spatially and temporally consistent monitoring systems. The essence of Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data, facilitating viewing of the explained variance or signal in the available imagery and allowing both gross and more subtle features to be seen. In this paper we explore fusion techniques for enhancing the land cover classification of low-resolution, especially freely available, satellite data. For this purpose, we fuse the PALSAR principal-component data with the MODIS principal-component data. First, MODIS bands 1 and 2 are considered and their principal components are computed; similarly, the principal components of the PALSAR HH, HV and VV polarized data are computed. The PALSAR principal-component image is then fused with the MODIS principal-component image. The aim of this paper is to analyze the effect on classification accuracy for major land cover types, such as agriculture, water and urban areas, of fusing PALSAR data with MODIS data. The curvelet transform is applied for fusion of the two satellite images, and minimum-distance classification is applied to the resulting fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes.
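The principal-component step applied to each sensor's band stack before fusion can be sketched with numpy alone (band arrays are assumed co-registered; the function name is ours):

```python
import numpy as np

def first_principal_component(bands):
    # bands: (H, W, B) co-registered band stack from one sensor.
    h, w, b = bands.shape
    x = bands.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    pc1 = x @ vecs[:, -1]               # project onto leading eigenvector
    return pc1.reshape(h, w)
```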
[Research progress of multi-model medical image fusion and recognition].
Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian
2013-10-01
Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is discussed, along with its advantages and key steps. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth-of-field extension in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion of multi-focus elemental images.
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while keeping the other good properties, one cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the case of Landsat ETM+ (with spatial resolution of 30 m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250 m to 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the observation that images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. In the scenario of two prior Landsat-MODIS image pairs, we build the correspondence between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., with only one Landsat-MODIS image pair available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather of the region where Shenzhen is located makes the capture cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection.
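The image degradation model mentioned for the one-pair scenario can be sketched as blur plus decimation; the resolution ratio and blur width below are illustrative assumptions, not the thesis's calibrated values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(landsat, ratio=8, sigma=3.0):
    # Simulate a MODIS-like image from a Landsat-like one:
    # low-pass blur for the coarser PSF, then point decimation.
    blurred = gaussian_filter(landsat.astype(float), sigma)
    return blurred[ratio // 2::ratio, ratio // 2::ratio]
```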
Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from hyperspectral image, which is characterized by low spatial resolution but high spectral resolution and abbreviated as LSHS, and the spatial information from multispectral image, which is featured by high spatial resolution but low spectral resolution and abbreviated as HSLS, this method aims to generate the fused data with both high spatial and high spectral resolutions. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories data then formulate a dictionary-pair due to their correspondence in representing each pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of LSHS data and the representation coefficients of HSLS data, we finally derive the fused data characterized by the spectral resolution of LSHS data and the spatial resolution of HSLS data.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared; unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
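The fully constrained mixing step can be approximated with a standard trick: augmenting the endmember matrix with a heavily weighted row of ones pushes a non-negative least-squares solution toward the sum-to-one constraint (a sketch of the technique, not the paper's solver):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel, delta=1e3):
    # endmembers: (bands, n_endmembers) spectra; pixel: (bands,) spectrum.
    # The delta-weighted row of ones softly enforces sum(fractions) == 1,
    # while nnls enforces fractions >= 0 exactly.
    A = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    b = np.append(pixel, delta)
    fractions, _ = nnls(A, b)
    return fractions            # >= 0, summing to approximately 1
```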
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD): its decomposition process uses an adding-window principle, which effectively resolves the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the Sum-Modified-Laplacian was used and a scheme based on visual feature contrast was adopted; for the residue coefficients, pixel values were selected based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
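The Sum-Modified-Laplacian used in the BIMF fusion rule is a standard focus measure; a minimal NumPy sketch, with the window size and differencing step as illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=5, step=1):
    """Sum-Modified-Laplacian focus measure (higher = sharper).

    The modified Laplacian takes absolute values of the second
    differences so that opposite-signed x and y terms cannot cancel.
    Borders wrap via np.roll, which is acceptable for a sketch.
    """
    img = img.astype(np.float64)
    ml = (np.abs(2 * img - np.roll(img, step, axis=1) - np.roll(img, -step, axis=1)) +
          np.abs(2 * img - np.roll(img, step, axis=0) - np.roll(img, -step, axis=0)))
    # Sum over a local window (uniform_filter is the windowed mean,
    # which is proportional to the windowed sum).
    return uniform_filter(ml, size=window)

def fuse_by_sml(a, b):
    """Toy fusion rule: keep the pixel from the source that is locally sharper."""
    mask = sum_modified_laplacian(a) >= sum_modified_laplacian(b)
    return np.where(mask, a, b)
```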
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
High resolution isotopic analysis of U-bearing particles via fusion of SIMS and EDS images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarolli, Jay G.; Naes, Benjamin E.; Garcia, Benjamin J.
Image fusion of secondary ion mass spectrometry (SIMS) images and X-ray elemental maps from energy-dispersive spectroscopy (EDS) was performed to facilitate the isolation and re-analysis of isotopically unique U-bearing particles where the highest precision SIMS measurements are required. Image registration, image fusion and particle micromanipulation were performed on a subset of SIMS images obtained from a large-area pre-screen of a particle distribution from a sample containing several certified reference materials (CRMs) U129A, U015, U150, U500 and U850, as well as a standard reference material (SRM) 8704 (Buffalo River Sediment), to simulate particles collected on swipes during routine inspections of declared uranium enrichment facilities by the International Atomic Energy Agency (IAEA). In total, fourteen particles, ranging in size from 5-15 µm, were isolated and re-analyzed by SIMS in multi-collector mode, identifying nine particles of CRM U129A, one of U150, one of U500 and three of U850. These identifications agreed to within a few percent with the National Institute of Standards and Technology (NIST) certified atom percent values of 234U, 235U and 238U for the corresponding CRMs. This work represents the first use of image fusion to enhance the accuracy and precision of isotope ratio measurements for isotopically unique U-bearing particles for nuclear safeguards applications. Implementation of image fusion is essential for the identification of particles of interest that fall below the spatial resolution of the SIMS images.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, multi-focal images within a stack are fused along three orthogonal directions, and the features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors (texture, shape, different instances within the same class, and different classes of objects), we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. Experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we use only the texture feature rather than the combination of texture and shape features used in the previous work. The proposed approach shows great potential for building an automated nematode taxonomy system for nematologists.
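The CCA step that combines features extracted along different fusion directions is available off the shelf in scikit-learn; a schematic sketch with random stand-in feature matrices (the real inputs would be features of the direction-wise fused images, and the sizes here are arbitrary):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Stand-in feature matrices: 100 stacks, 64-D features from two of the
# fusion directions (the paper fuses along three directions).
X = rng.normal(size=(100, 64))
Y = rng.normal(size=(100, 64))

cca = CCA(n_components=16)
cca.fit(X, Y)
Xc, Yc = cca.transform(X, Y)          # maximally correlated projections
combined = np.hstack([Xc, Yc])        # fused feature vector per stack
print(combined.shape)                 # (100, 32)
```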
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, including line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method; (2) association relationships between regions and lines are built based on RLPAF, multi-scale RLPAF features are extracted, and SBVs are selected; (3) several spatial rules are designed to extract RCAs within sea waters after land-water separation. Experiments show that the proposed method can successfully extract RCAs of different shapes from HSR images with good performance.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect; however, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image block residual technique and consistency verification are used to detect the focused areas, producing a decision map that guides how the final fused image is formed. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
Registration and Fusion of Multiple Source Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline
2004-01-01
Earth and space science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating, or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g., in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurement are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets acquired at EOS Core Sites, including instruments such as IKONOS, Landsat-7/ETM+, EO-1/ALI and Hyperion, MODIS, and SeaWiFS. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS and OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
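Mutual information, one of the similarity measures under comparison, can be estimated from the joint histogram of two coregistered images; a minimal sketch (the bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration search would maximize this measure over candidate
# shifts/rotations; here we simply score an image against itself.
img = np.random.rand(128, 128)
print(mutual_information(img, img))       # maximal for identical images
```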
Advanced X-ray Imaging Crystal Spectrometer for Magnetic Fusion Tokamak Devices
NASA Astrophysics Data System (ADS)
Lee, S. G.; Bak, J. G.; Bog, M. G.; Nam, U. W.; Moon, M. K.; Cheon, J. K.
2008-03-01
An advanced X-ray imaging crystal spectrometer (XICS) is currently under development for burning plasma diagnostics, using a segmented position-sensitive detector and time-to-digital converter (TDC) based delay-line readout electronics. The proposed advanced XICS utilizes an eight-segment position-sensitive multi-wire proportional counter and supporting electronics to improve spectrometer performance, including the photon count-rate capability and spatial resolution.
Multispectral image fusion based on fractal features
NASA Astrophysics Data System (ADS)
Tian, Jie; Chen, Jie; Zhang, Chunhua
2004-01-01
Imaging sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance, etc. However, different imaging sensors rely on diverse imaging mechanisms, operate in diverse ranges of the spectrum, perform diverse functions, and have diverse environmental requirements. It is therefore impractical to accomplish detection or recognition with a single imaging sensor under the conditions of different circumstances, different backgrounds and different targets. Fortunately, multi-sensor image fusion has emerged as an important route to solving this problem, and image fusion has become one of the main technical approaches for detecting and recognizing objects in images. Loss of information is unavoidable during the fusion process, so how to preserve the useful information to the utmost is always a very important concern of image fusion; that is, before designing fusion schemes one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems essentially amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. In this algorithm, the source images are first orthogonally decomposed according to wavelet transform theory, and a fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models, which imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and the features of targets are emphasized. The final fused image is reconstructed from the composition of the source pyramid images, so this fusion scheme is a multi-resolution analysis; the wavelet decomposition of an image can be regarded as a special pyramid decomposition. According to wavelet decomposition theory, the approximation of an image f(x, y) at resolution 2^(j+1) equals its orthogonal projection onto the corresponding approximation space, and it decomposes as A_(j+1)f = A_j f + D_j^1 f + D_j^2 f + D_j^3 f, where A_j f is the low-frequency approximation of f(x, y) at resolution 2^j and D_j^1 f, D_j^2 f and D_j^3 f represent the vertical, horizontal and diagonal wavelet coefficients at resolution 2^j. These coefficients describe the high-frequency information of the image in the vertical, horizontal and diagonal directions respectively; A_j f, D_j^1 f, D_j^2 f and D_j^3 f are independent and can themselves be treated as images. In this paper J is set to 1, so the source image is decomposed into the sub-images Af, D1f, D2f and D3f. To address the problem of detecting man-made objects, the concepts of vertical fractal dimension FD1, horizontal fractal dimension FD2 and diagonal fractal dimension FD3 are proposed: FD1 corresponds to the vertical wavelet-coefficient image obtained from the wavelet decomposition of the source image, FD2 to the horizontal coefficients, and FD3 to the diagonal ones. These definitions enrich the description of the source images and are therefore helpful for classifying targets. The detection of man-made objects in the decomposed images then becomes a pattern recognition problem in 4-D space.
The combination of FD0 (the fractal dimension of the approximation image Af), FD1, FD2 and FD3 makes a vector (FD0, FD1, FD2, FD3), which can be considered a united feature vector of the studied image. All parts of the images are classified in the 4-D pattern space created by this vector, so that areas containing man-made objects can be detected. This detection can be considered a coarse recognition; the significant areas in each sub-image are then flagged so that they can be treated with special rules. Various fusion rules have been developed, each aimed at a specific problem. These rules perform differently, so it is very important to select an appropriate rule when designing an image fusion system. Recent research indicates that the rule should be adjustable, so that it remains suitable for emphasizing target features and preserving the pixels carrying useful information. In this paper, since fractal dimension is one of the main features distinguishing man-made targets from natural objects, the fusion rule is defined as follows: if the studied region of the image contains a man-made target, the pixels of the source image with minimal fractal dimension are kept as the pixels of the fused image; otherwise, a weighted-average operator is adopted to avoid loss of information. The main idea of this rule is to retain the pixels with low fractal dimension, so it can be named the Minimal Fractal Dimension (MFD) fusion rule. This fractal-based algorithm is compared with a common weighted-average fusion algorithm, and an objective assessment is applied to the two fusion results. The criteria of entropy, cross-entropy, peak signal-to-noise ratio (PSNR) and standard gray-scale difference are defined in this paper. Instead of constructing an ideal image as the assessment reference, the source images are chosen as the reference; the assessment thus measures how much the image quality is enhanced and the quantity of information increased when the fused image is compared with the source images. The experimental results imply that the fractal-based multi-spectral fusion algorithm can effectively preserve the information of man-made objects with high contrast. It is shown that this algorithm preserves the features of military targets well, because battlefield targets are mostly man-made objects whose images generally differ obviously from fractal models. Furthermore, the fractal features are not sensitive to imaging conditions or target movement, so this fractal-based algorithm may be very practical.
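The directional fractal dimensions FD0-FD3 can be estimated for each sub-image by standard box counting; a minimal sketch operating on a binarized (e.g. thresholded) sub-image, with the dyadic scale range as an illustrative assumption:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal (box-counting) dimension of a binary mask.

    Counts occupied boxes at dyadic scales and fits the slope of
    log(count) versus log(1/size); the mask is cropped to a power-of-two
    square for simple reshaping.
    """
    n = 1 << int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]
    sizes, counts = [], []
    size = n // 2
    while size >= 1:
        # Partition into size x size boxes and count the non-empty ones.
        boxes = mask.reshape(n // size, size, n // size, size)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
        sizes.append(size)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
print(box_counting_dimension(rng.random((256, 256)) > 0.5))  # ~2 for dense noise
```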
Multi-energy x-ray imaging and sensing for diagnostic and control of the burning plasma.
Stutman, D; Tritz, K; Finkenthal, M
2012-10-01
New diagnostic and sensor designs are needed for future burning plasma (BP) fusion experiments, with good spatial and temporal resolution and capable of prolonged operation in the harsh BP environment. We evaluate the potential of multi-energy x-ray imaging with filtered detector arrays for BP diagnostics and control. Experimental studies show that this simple and robust technique enables measuring, with good accuracy, speed, and spatial resolution, the Te profile, impurity content, and MHD activity in a tokamak. Applied to the BP, this diagnostic could also serve for non-magnetic sensing of the plasma position, centroid, ELM, and RWM instability. BP-compatible x-ray sensors are proposed using "optical array" or "bi-cell" detectors.
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-in-focus image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, together with an efficient salient feature extraction method; feature extraction is the main objective of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is determined from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
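The guided filter at the core of the method fits a local linear model q = a*I + b in each window (He et al.'s formulation); a compact NumPy sketch, with the radius and regularization eps as illustrative parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-4):
    """Edge-preserving smoothing of p guided by I (both float in [0, 1]).

    Within each window, p is approximated by a linear function of the
    guide I; averaging the per-window coefficients yields the output.
    """
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)        # eps controls edge preservation
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)

# Typical use in this setting: smooth a source image, or refine a raw
# fusion map, while keeping the strong edges of the guide image.
```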
Accuracy comparison in mapping water bodies using Landsat images and Google Earth Images
NASA Astrophysics Data System (ADS)
Zhou, Z.; Zhou, X.
2016-12-01
Much research has been done on the extraction of water bodies from satellite images, and water indexes computed from multi-spectral images are the most widely used methods for water body extraction. When extracting the area of water bodies from satellite images, accuracy may depend on the spatial resolution of the images and the relative size of the water bodies. To quantify the impact of spatial resolution and of the size (major and minor lengths) of water bodies on the accuracy of water area extraction, we use Georgetown Lake, Montana, and coalbed methane (CBM) water retention ponds in the Montana Powder River Basin as test sites. Data sources include Landsat images and Google Earth images covering both the large water body and the small ponds. First, we used water indices to extract water coverage from Landsat images for both the large lake and the small ponds. Second, we used a newly developed visible-index method to extract water coverage from Google Earth images covering both. Third, we used an image fusion method in which the Google Earth images are fused with multi-spectral Landsat images to obtain multi-spectral images at the same high spatial resolution as the Google Earth images. The actual areas of the lake and ponds were measured with GPS surveys. Results will be compared and the optimal method selected for water body extraction.
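The water indices referred to are simple band ratios; for instance, McFeeters' NDWI with a zero threshold is a common starting point (in practice the threshold would be tuned per scene):

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """NDWI = (green - NIR) / (green + NIR); water pixels have NDWI > threshold.

    Water reflects in the green band and absorbs strongly in the NIR,
    so positive NDWI values indicate open water.
    """
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    ndwi = (green - nir) / (green + nir + 1e-12)  # guard against division by zero
    return ndwi > threshold

# Water area = water pixel count * pixel area, e.g. 30 m x 30 m for Landsat.
```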
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We propose to improve land cover classification accuracy by integrating multi-source RS features through data fusion, and we further investigate the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data, integrating temporal, spectral, angular, and topographic features, achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially the temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. These methods and findings could be useful for land cover classification at both regional and global scales.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are then described, and both the advantages and limitations of these applications are discussed. Recommendations are offered, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Considering that most multi-focus image fusion algorithms are not designed for large numbers of source images, and that the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images (color distortion and ringing artifacts), this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
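The shift-invariant SWT decomposition and a typical fusion rule can be sketched with PyWavelets (version 1.1 or later for the trim_approx/norm options); the averaging and max-magnitude rules below are common choices, not necessarily the paper's exact rules:

```python
import numpy as np
import pywt

def swt_fuse(a, b, wavelet="db2", level=2):
    """Fuse two grayscale multi-focus images with the stationary wavelet
    transform: average the approximation, keep the larger-magnitude
    detail coefficient at each position and level.

    Image sides must be divisible by 2**level for the SWT.
    """
    ca = pywt.swt2(a.astype(np.float64), wavelet, level=level,
                   trim_approx=True, norm=True)
    cb = pywt.swt2(b.astype(np.float64), wavelet, level=level,
                   trim_approx=True, norm=True)
    fused = [(ca[0] + cb[0]) / 2.0]                 # approximation: average
    for da, db in zip(ca[1:], cb[1:]):              # (H, V, D) details per level
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.iswt2(fused, wavelet, norm=True)

a = np.random.rand(128, 128)
print(swt_fuse(a, a).shape)                         # (128, 128)
```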
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types, wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a previously reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensor's point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between the fused images and the full-resolution reference. Image examples with TM and SPOT 10-m panchromatic data illustrate the reduction in artifacts due to SIDWT-based fusion.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact on remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased afterwards, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image falls to one-eighth that of the corresponding multispectral image. Regardless of the fusion method used, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead; however, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions, in order to be able to perform onboard image fusion. Here, the main challenge of image registration was resolved by registering the input images using transformation matrices from previously acquired data. The composite image resulting from the fusion process matched the ground truth remarkably well, indicating the possibility of real-time onboard fusion processing.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of a lens's limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm, with LUE-SSIM as the objective function, is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. In addition, a multi-focus image fusion experiment was carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted edge details in the focused regions of the source images.
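LUE-SSIM builds on the standard SSIM index; as a reference point, the windowed SSIM map itself can be computed as follows (the uniform window and the standard constants C1 and C2 are assumptions of this sketch, not the paper's exact variant):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, data_range=255.0, win=7):
    """Local SSIM map between two images (the mean of the map is the global SSIM).

    SSIM = (2*mu_x*mu_y + C1)(2*sigma_xy + C2) /
           ((mu_x^2 + mu_y^2 + C1)(sigma_x^2 + sigma_y^2 + C2))
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    sxx = uniform_filter(x * x, win) - mu_x ** 2   # local variances
    syy = uniform_filter(y * y, win) - mu_y ** 2
    sxy = uniform_filter(x * y, win) - mu_x * mu_y # local covariance
    return ((2 * mu_x * mu_y + C1) * (2 * sxy + C2) /
            ((mu_x ** 2 + mu_y ** 2 + C1) * (sxx + syy + C2)))
```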
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
HIPS: A new hippocampus subfield segmentation method.
Romero, José E; Coupé, Pierrick; Manjón, José V
2017-12-01
The importance of the hippocampus in the study of several neurodegenerative diseases such as Alzheimer's disease makes it a structure of great interest in neuroimaging. However, few segmentation methods have been proposed to measure its subfields due to its complex structure and the lack of high resolution magnetic resonance (MR) data. In this work, we present a new pipeline for automatic hippocampus subfield segmentation using two available hippocampus subfield delineation protocols that can work with both high and standard resolution data. The proposed method is based on multi-atlas label fusion technology that benefits from a novel multi-contrast patch match search process (using high resolution T1-weighted and T2-weighted images). The proposed method also includes as post-processing a new neural network-based error correction step to minimize systematic segmentation errors. The method has been evaluated on both high and standard resolution images and compared to other state-of-the-art methods showing better results in terms of accuracy and execution time.
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry-Riddle Aeronautical University, working with the University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder, have built several versions of a low-cost, drop-in-place SDMSI to test alternatives for power-efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on-camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 W power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, the power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, and animal and marine vessel detection and tracking. The goal is to select the most power-efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The selected prototype will be field tested in Alaska in the summer of 2016.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks
NASA Astrophysics Data System (ADS)
Audebert, Nicolas; Le Saux, Bertrand; Lefèvre, Sébastien
2018-06-01
In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: (a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, (b) we investigate early and late fusion of Lidar and multispectral data, and (c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint feature learning, but at the cost of higher sensitivity to missing data.
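The early/late distinction can be made concrete with a toy fully convolutional sketch (PyTorch; the tiny networks, channel counts and the averaging merge are illustrative stand-ins, not the paper's architecture):

```python
import torch
import torch.nn as nn

def tiny_encoder(in_ch, n_classes):
    """Stand-in for a fully convolutional segmentation branch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, n_classes, 1))

n_classes = 6
rgb = torch.randn(1, 3, 64, 64)     # multispectral input
lidar = torch.randn(1, 1, 64, 64)   # Lidar-derived raster (e.g. nDSM)

# Early fusion: stack modalities as channels and learn joint features.
early_net = tiny_encoder(4, n_classes)
early_logits = early_net(torch.cat([rgb, lidar], dim=1))

# Late fusion: one branch per modality, then merge the per-class scores.
rgb_net = tiny_encoder(3, n_classes)
lidar_net = tiny_encoder(1, n_classes)
late_logits = (rgb_net(rgb) + lidar_net(lidar)) / 2

print(early_logits.shape, late_logits.shape)  # both (1, 6, 64, 64)
```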
Airborne net-centric multi-INT sensor control, display, fusion, and exploitation systems
NASA Astrophysics Data System (ADS)
Linne von Berg, Dale C.; Lee, John N.; Kruer, Melvin R.; Duncan, Michael D.; Olchowski, Fred M.; Allman, Eric; Howard, Grant
2004-08-01
The NRL Optical Sciences Division has initiated a multi-year effort to develop and demonstrate an airborne net-centric suite of multi-intelligence (multi-INT) sensors and exploitation systems for real-time target detection and targeting product dissemination. The goal of this Net-centric Multi-Intelligence Fusion Targeting Initiative (NCMIFTI) is to develop an airborne real-time intelligence gathering and targeting system that can be used to detect concealed, camouflaged, and mobile targets. The multi-INT sensor suite will include high-resolution visible/infrared (EO/IR) dual-band cameras, hyperspectral imaging (HSI) sensors in the visible-to-near infrared, short-wave and long-wave infrared (VNIR/SWIR/LWIR) bands, Synthetic Aperture Radar (SAR), electronics intelligence sensors (ELINT), and off-board networked sensors. Other sensors are also being considered for inclusion in the suite to address unique target detection needs. Integrating a suite of multi-INT sensors on a single platform should optimize real-time fusion of the on-board sensor streams, thereby improving the detection probability and reducing the false alarms that occur in reconnaissance systems that use single-sensor types on separate platforms, or that use independent target detection algorithms on multiple sensors. In addition to the integration and fusion of the multi-INT sensors, the effort is establishing an open-systems net-centric architecture that will provide a modular "plug and play" capability for additional sensors and system components and provide distributed connectivity to multiple sites for remote system control and exploitation.
Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area
NASA Astrophysics Data System (ADS)
Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.
2018-04-01
Due to improvements in satellite radiometric resolution, the color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, completing the mosaic and color-balancing process for satellite images has always been an important problem in image processing. First, using the bundle uniform color method and the least-squares mosaic method of GXL together with the dodging function, a uniform transition of color and brightness can be achieved across large-area, multi-temporal satellite images. Second, Color Mapping software is used to convert 16-bit mosaic images to 8-bit mosaic images, based on a uniform color method with low-resolution reference images. Finally, qualitative and quantitative methods are used to analyse and evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation between mosaic images before and after coloring is higher than 95%, the image information entropy increases, and texture features are enhanced, as demonstrated by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaic and color processing over a large area has thus been successfully implemented.
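The two quantitative indexes used in this evaluation are straightforward to compute; a minimal sketch with synthetic stand-in images:

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

def correlation(a, b):
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Evaluation as described: compare the mosaic before and after coloring.
before = np.random.randint(0, 256, (512, 512))
after = np.clip(before + np.random.randint(-5, 6, before.shape), 0, 255)
print(correlation(before, after), information_entropy(after))
```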
3D second harmonic generation imaging tomography by multi-view excitation
Campbell, Kirby R.; Wen, Bruce; Shelton, Emily M.; Swader, Robert; Cox, Benjamin L.; Eliceiri, Kevin; Campagnola, Paul J.
2018-01-01
Biological tissues have complex 3D collagen fiber architecture that cannot be fully visualized by conventional second harmonic generation (SHG) microscopy due to electric dipole considerations. We have developed a multi-view SHG imaging platform that successfully visualizes all orientations of collagen fibers. This is achieved by rotating tissues relative to the excitation laser plane of incidence, where the complete fibrillar structure is then visualized following registration and reconstruction. We evaluated high frequency and Gaussian weighted fusion reconstruction algorithms, and found the former approach performs better in terms of the resulting resolution. The new approach is a first step toward SHG tomography. PMID:29541654
Multi-focus image fusion with the all convolutional neural network
NASA Astrophysics Data System (ADS)
Du, Chao-ben; Gao, She-sheng
2018-01-01
A decision map contains complete and clear information about the image to be fused, and it is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining a good decision map is necessary for a satisfactory fusion result and is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. On this basis, we propose a new all-convolutional-network (ACNN) based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, improving dim target detection performance by fusing information from different spectral images has become a trend. In this paper, a dim target detection method based on salient graph fusion is proposed. In this method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from each digital image. A maximum-salience fusion strategy is then designed to fuse the salient graphs from the different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on images with cluttered backgrounds.
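A simplified version of the salience construction and fusion can be sketched with OpenCV; the Gabor parameters below are illustrative, and the multi-scale contrast filter is omitted for brevity:

```python
import cv2
import numpy as np

def gabor_salience(img, ksize=15, sigma=3.0, lambd=8.0):
    """Maximum response magnitude over several Gabor orientations."""
    img = img.astype(np.float32)
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5)
        responses.append(np.abs(cv2.filter2D(img, cv2.CV_32F, kern)))
    return np.max(responses, axis=0)

def detect_dim_targets(band1, band2, se_size=9):
    # Maximum-salience fusion of the per-band salient graphs.
    fused = np.maximum(gabor_salience(band1), gabor_salience(band2))
    # Top-hat filtering suppresses the slowly varying clutter background,
    # leaving small bright structures such as dim targets.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    return cv2.morphologyEx(fused, cv2.MORPH_TOPHAT, se)
```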
Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images.
Kwan, Chiman; Zhu, Xiaolin; Gao, Feng; Chou, Bryan; Perez, Daniel; Li, Jiang; Shen, Yuzhong; Koperski, Krzysztof; Marchisio, Giovanni
2018-03-31
Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the re-visit times for the same areas may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse these two satellites images to generate high spatial resolution (2 m) and high temporal resolution (1 or 2 days) images for applications such as damage assessment, border monitoring, etc. that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high quality prediction images.
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years; however, little work has been done on the security of multi-focus image transmission. This paper proposes a scheme that can reduce the data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition generates complete scene images and optimizes perception by the human eye. The fused images are sparsely represented with the DCT and sampled with a structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. The obtained measurements are then further encrypted to resist noise and cropping attacks by combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
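A minimal sketch of the sensing step described above, assuming the fused image is sparsified with a DCT and measured with a structurally random matrix (random permutation, fast orthonormal transform, random subsampling); the sampling rate and the use of a second DCT as the mixing transform are illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def srm_measure(fused_image, rate=0.25, seed=0):
    """Compress a fused image: DCT sparsification + SRM-style sampling."""
    rng = np.random.default_rng(seed)
    x = dct(fused_image.ravel().astype(float), norm='ortho')  # sparse domain
    x = x[rng.permutation(x.size)]         # random pre-permutation (SRM step 1)
    x = dct(x, norm='ortho')               # fast orthonormal mixing (SRM step 2)
    keep = rng.choice(x.size, int(rate * x.size), replace=False)
    return x[keep], keep                   # measurements + sampling pattern
```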
Daily monitoring of 30 m crop condition over complex agricultural landscapes
NASA Astrophysics Data System (ADS)
Sun, L.; Gao, F.; Xie, D.; Anderson, M. C.; Yang, Y.
2017-12-01
Crop progress provides information necessary for efficient irrigation and for scheduling fertilization and harvesting operations at the optimal times for achieving higher yields. In the United States, crop progress reports are released online weekly by the US Department of Agriculture (USDA) National Agricultural Statistics Service (NASS). However, the ground data collection is time-consuming and subjective, and these reports are provided only at the district (multiple counties) or state level. Remote sensing technologies have been widely used to map crop conditions, extract crop phenology, and predict crop yield. However, current satellite-based sensors make it difficult to acquire both high spatial resolution and frequent coverage. For example, Landsat satellites capture 30 m resolution images, but their long revisit cycles and cloud contamination limit their use in detecting rapid surface changes. MODIS, on the other hand, provides daily observations, but with coarse spatial resolutions ranging from 250 to 1000 m. In recent years, multi-satellite data fusion technology such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) has been used to combine the spatial resolution of Landsat with the temporal frequency of MODIS, and this synthetic dataset has been found to provide more valuable information than images acquired from a single sensor. However, the accuracy of STARFM depends on the heterogeneity of the landscape and on the availability of clear image pairs of MODIS and Landsat. In this study, a new fusion method was developed that uses crop vegetation index (VI) time series extracted from "pure" MODIS pixels together with Landsat overpass images to generate daily 30 m VI for crops. The fusion accuracy was validated by comparison with the original Landsat images. Results show that the relative error is around 3-5% in the non-rapid growing period and around 6-8% in the rapid growing period. This accuracy is much better than that of STARFM, which is 4-9% in the non-rapid growing period and 10-16% in the rapid growing period based on 13 image pairs. The predicted VI from this approach is consistent and smooth across the SLC-off gap stripes of the Landsat 7 ETM+ image. The new fusion results will be used to map crop phenology and predict crop yield at field scale in complex agricultural landscapes.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques for precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterized by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of a high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery acquired with satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were simulated from the RGB data based on a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analyzed the spatial and spectral accuracies of the processed images.
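The pan-simulation step lends itself to a one-line sketch: a weighted linear combination of the RGB channels stands in for a broadband panchromatic image before Gram-Schmidt sharpening. The weights below are standard luminance coefficients used purely for illustration, not the authors' values.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.299, 0.587, 0.114)):
    """rgb: (H, W, 3) UAV orthoimage; returns one simulated pan band."""
    w = np.asarray(weights, dtype=float)
    return rgb.astype(float) @ (w / w.sum())   # normalized linear combination
```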
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometrically regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be applied to image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed, which fuses the Bandelet coefficients of multi-source images to obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the new method performs better than the tested reference methods in both objective evaluation indexes and subjective visual quality.
Sjöberg, C; Ahnesjö, A
2013-06-01
Label fusion multi-atlas approaches to image segmentation can give better segmentation results than single-atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability of each atlas improving the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal to or better than both fusion with equal weights and the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more the more strongly the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure.
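A hedged sketch of fusion by weighted distance maps: each propagated atlas contour is turned into a signed distance map and averaged with similarity-derived weights. The softmax similarity-to-weight mapping below is an illustrative assumption, not the learned probabilistic weighting of the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Positive inside the structure, negative outside (mask: boolean array)."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def fuse_by_distance_maps(atlas_masks, similarities):
    """Weight each atlas's signed distance map and threshold the sum."""
    w = np.exp(np.asarray(similarities, dtype=float))
    w /= w.sum()                                   # softmax weights (assumption)
    fused = sum(wi * signed_distance(m) for wi, m in zip(w, atlas_masks))
    return fused > 0                               # consensus segmentation
```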
Data fusion of Landsat TM and IRS images in forest classification
Guangxing Wang; Markus Holopainen; Eero Lukkarinen
2000-01-01
Data fusion of Landsat TM images and the Indian Remote Sensing satellite panchromatic image (IRS-1C PAN) was studied and compared to the use of a TM or IRS image alone. The aim was to combine the high spatial resolution of IRS-1C PAN with the high spectral resolution of Landsat TM images using a data fusion algorithm. The ground truth of the study was based on a sample of 1,020...
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have placed high requirements on the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the different gains and errors of the satellites' sensors. After transformation from DN to radiance, the multi-spectral image's energy is used to simulate the panchromatic band. Linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test, and compare the algorithm, this paper uses ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing retention of the spectral information of the original multispectral images while maintaining abundant spatial information.
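A hedged reading of the synthetic-pan step: after converting DNs to radiance, the multispectral bands are regressed against the panchromatic band so that the fitted combination is highly correlated with the original pan image. The sketch below follows that reading; it is not the authors' exact formulation.

```python
import numpy as np

def synthesize_pan(ms_radiance, pan):
    """ms_radiance: (H, W, B) bands resampled to the pan grid; pan: (H, W).
    Returns the regression-based synthetic panchromatic image."""
    X = ms_radiance.reshape(-1, ms_radiance.shape[-1]).astype(float)
    X = np.hstack([X, np.ones((X.shape[0], 1))])      # intercept term
    coef, *_ = np.linalg.lstsq(X, pan.ravel().astype(float), rcond=None)
    return (X @ coef).reshape(pan.shape)
```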
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit (ITK)-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
Segment fusion of ToF-SIMS images.
Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A
2016-06-08
The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve the spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of silver nanoparticles into plant tissue, giving new understanding of the uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using the eCognition software, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution of the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
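On one common reading, Munechika-style pixel fusion is a ratio sharpening: each low-resolution chemical band is scaled by the ratio of the high-resolution image to its degraded counterpart, so that only spatial detail is injected. A minimal sketch under that assumption (not the authors' in-house code):

```python
import numpy as np

def munechika_fuse(band_lr_up, hires, hires_degraded, eps=1e-6):
    """band_lr_up: SIMS band upsampled to the high-resolution grid;
    hires: high-resolution reference image;
    hires_degraded: hires blurred/resampled to SIMS resolution, then
    upsampled back, so the ratio carries only the missing spatial detail."""
    return band_lr_up * hires / (hires_degraded + eps)
```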
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images; after registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results; in particular, a coarse-to-fine iterative label fusion procedure gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods that aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying a single appropriate segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the technique for order preference by similarity to ideal solution (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
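The final decision step, TOPSIS, ranks candidate solutions by closeness to an ideal point in criterion space. A compact, generic sketch (criterion directions are assumptions to be set per application):

```python
import numpy as np

def topsis(scores, benefit):
    """scores: (n_candidates, n_criteria) matrix; benefit: boolean per
    criterion (True if larger is better). Returns closeness; higher is better."""
    z = scores / np.linalg.norm(scores, axis=0)          # vector normalization
    ideal = np.where(benefit, z.max(axis=0), z.min(axis=0))
    anti = np.where(benefit, z.min(axis=0), z.max(axis=0))
    d_pos = np.linalg.norm(z - ideal, axis=1)            # distance to ideal
    d_neg = np.linalg.norm(z - anti, axis=1)             # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

For the paper's two criteria, the global consistency error would plausibly be treated as a cost criterion and the F-measure as a benefit criterion, e.g. `best = np.argmax(topsis(scores, np.array([False, True])))`.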
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, indicating that the lesion is more likely to be malignant than a common solid lung nodule. Automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Existing GGO recognition methods employ traditional low-level features, and their performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs based on the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed over multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy makes it possible to obtain the optimal fine-tuned model, and the multi-CNN-model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and an F1 score of 0.83. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
Progressive multi-atlas label fusion by dictionary evolution.
Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang
2017-02-01
Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the (patch) representation coefficients estimated in the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidances that steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset compared to counterpart methods using only a single-layer static dictionary.
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion systems require image registration before fusion because they use two separate cameras, and the performance of registration technology still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light entering through a single lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Nagayama, T; Mancini, R C; Mayes, D; Tommasini, R; Florido, R
2015-11-01
Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data is generated by a vision camera and a laser scanner. The problem of different resolutions for these sensory data was solved by reduced image resolution, fusion of different data, and use of a fuzzy image segmentation technique.
Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor
2012-01-01
Electro-optic (EO) image sensors exhibit high resolution and low noise levels in the daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps to distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images and demonstrate that blended IR images have better quality than the superimposed ones. Additionally, based on the same steps, the simulation shows a blended IR image of better quality when only the original IR image is available. PMID:23112602
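Steps (2) and (4) of the framework can be sketched compactly: detect edges in the EO frame and blend them into the registered, transformed IR frame. The Canny thresholds and the blending weight below are illustrative assumptions.

```python
import cv2
import numpy as np

def blend_eo_edges(ir, eo, alpha=0.35):
    """ir, eo: co-registered 8-bit grayscale frames of equal size."""
    edges = cv2.Canny(eo, 60, 120).astype(np.float32) / 255.0  # EO edge map
    ir_f = ir.astype(np.float32) / 255.0
    fused = (1.0 - alpha * edges) * ir_f + alpha * edges       # blend edges in
    return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)
```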
Vollnhals, Florian; Audinot, Jean-Nicolas; Wirtz, Tom; Mercier-Bonin, Muriel; Fourquaux, Isabelle; Schroeppel, Birgit; Kraushaar, Udo; Lev-Ram, Varda; Ellisman, Mark H; Eswara, Santhana
2017-10-17
Correlative microscopy combining various imaging modalities offers powerful insights into obtaining a comprehensive understanding of physical, chemical, and biological phenomena. In this article, we investigate two approaches for image fusion in the context of combining the inherently lower-resolution chemical images obtained using secondary ion mass spectrometry (SIMS) with the high-resolution ultrastructural images obtained using electron microscopy (EM). We evaluate the image fusion methods with three different case studies selected to broadly represent the typical samples in life science research: (i) histology (unlabeled tissue), (ii) nanotoxicology, and (iii) metabolism (isotopically labeled tissue). We show that the intensity-hue-saturation fusion method often applied for EM-sharpening can result in serious image artifacts, especially in cases where different contrast mechanisms interplay. Here, we introduce and demonstrate Laplacian pyramid fusion as a powerful and more robust alternative method for image fusion. Both physical and technical aspects of correlative image overlay and image fusion specific to SIMS-based correlative microscopy are discussed in detail alongside the advantages, limitations, and the potential artifacts. Quantitative metrics to evaluate the results of image fusion are also discussed.
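For readers unfamiliar with the alternative the authors advocate, a generic Laplacian-pyramid fusion looks roughly like the sketch below: band-pass levels are merged with a max-absolute rule and the low-pass residual is averaged. The level count and fusion rules are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid of a single-channel float32 image."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=cur.shape[1::-1])  # back to cur's size
        pyr.append(cur - up)                            # band-pass level
        cur = down
    pyr.append(cur)                                     # low-pass residual
    return pyr

def fuse_laplacian(a, b, levels=4):
    """Fuse two co-registered images via their Laplacian pyramids."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y)     # max-abs detail rule
             for x, y in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))               # average the base
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return out
```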
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectral knowledge allows all available information in the data to be mined. These superior qualities allow wide applications of hyperspectral imaging, such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely exploiting the sampled spectrum of individual HSI data points may produce inaccurate results due to the mixed nature of raw HSI data, arising from mixed pixels, optical interference, etc. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g., Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than any single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
[An improved low spectral distortion PCA fusion method].
Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong
2013-10-01
Aiming at the spectral distortion produced in the PCA fusion process, this paper proposes an improved low-spectral-distortion PCA fusion method. The method uses the NCUT (normalized cut) image segmentation algorithm to divide a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and thereby weakening the spectral distortions of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory; these masks cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. All corresponding sub-region objects between the hyperspectral image and the high-resolution image are fused using the PCA method, and all sub-regional fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used, and the results show that the proposed method has the same ability to enhance spatial resolution and a greater ability to preserve spectral fidelity.
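Within each segmented region, the PCA fusion reduces to component substitution: the first principal component of the hyperspectral pixels is replaced by the matched high-resolution band. A hedged per-region sketch follows; mean/std histogram matching is a simplification of whatever matching the authors use.

```python
import numpy as np

def pca_fuse_region(hs_pixels, hr_pixels):
    """hs_pixels: (N, B) hyperspectral samples of one region, resampled to the
    high-resolution grid; hr_pixels: (N,) co-located high-resolution values."""
    mu = hs_pixels.mean(axis=0)
    X = hs_pixels - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt.T                                       # principal components
    pan = (hr_pixels - hr_pixels.mean()) / (hr_pixels.std() + 1e-9)
    pcs[:, 0] = pan * pcs[:, 0].std() + pcs[:, 0].mean() # substitute PC1
    return pcs @ Vt + mu                                 # back to spectral space
```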
Research on segmentation based on multi-atlas in brain MR image
NASA Astrophysics Data System (ADS)
Qian, Yuejing
2018-03-01
Accurate segmentation of specific tissues in brain MR images can be effectively achieved with multi-atlas-based segmentation methods, whose accuracy mainly depends on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas-based segmentation method for brain MR images. First, to improve registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion, we propose a new algorithm that detects abnormal sparse patches and simultaneously discards the corresponding abnormal sparse coefficients; the method is based on the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM), and the majority voting method (MV). Our experimental results show that the proposed method is effective for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, making it possible to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involves large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressive HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high-spectral- and high-spatial-resolution image is reconstructed using sparse optimization algorithms. Different spectral image fusion scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high-spectral- and high-spatial-resolution image can be achieved using as little as 50% of the datacube.
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle and proved its validity for other tomographic modalities, including SPECT, MRI, and others. Here we propose "omni-tomography", a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when the physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal, and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: software-based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT, among others. While there are intrinsic limitations to both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk
2010-01-01
We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and the coregistration of ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with ultrasound scattering, optical absorption, and fluorescence properties. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system with 532 nm excitation light from a Nd:YAG laser shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and a high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined from the coregistered high-resolution UBM and PAI images. The current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques. PMID:21894259
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To solve the problem that scene details and visual effects are difficult to optimize in high-dynamic-range image synthesis, we propose a multi-exposure image fusion algorithm that processes low-dynamic-range images in YCbCr space and applies weighted blending to the luminance and chrominance components separately. The experimental results show that the method can retain the color of the fused image while balancing the details of the bright and dark regions of the high-dynamic-range image.
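A hedged sketch of the scheme as described: convert each exposure to YCbCr (OpenCV's YCrCb channel ordering below), weight pixels by how well-exposed their luminance is, and blend luminance and chrominance with the normalized weights. The Gaussian well-exposedness weight is an illustrative assumption, not the paper's exact weighting.

```python
import cv2
import numpy as np

def fuse_exposures_ycbcr(images, sigma=0.2):
    """images: list of (H, W, 3) uint8 BGR frames of the same scene."""
    ycc = [cv2.cvtColor(im, cv2.COLOR_BGR2YCrCb).astype(np.float32) / 255.0
           for im in images]
    W = np.stack([np.exp(-((y[..., 0] - 0.5) ** 2) / (2 * sigma ** 2))
                  for y in ycc])                      # well-exposedness weights
    W /= W.sum(axis=0) + 1e-9                         # normalize across frames
    fused = sum(w[..., None] * y for w, y in zip(W, ycc))
    return cv2.cvtColor((fused * 255).astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```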
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework that utilizes area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. The low-pass bands are then fused by weighted averaging based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
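The two fusion rules can be demonstrated with a plain wavelet decomposition (PyWavelets) standing in for the dual-tree Contourlet transform, which has no widely used Python implementation; the window size for the area standard deviation is an illustrative assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_std(x, size=7):
    """Area-based standard deviation in a size x size window."""
    m = uniform_filter(x, size)
    return np.sqrt(np.maximum(uniform_filter(x * x, size) - m * m, 0.0))

def fuse_coefficients(a, b, wavelet='db2', level=3):
    ca = pywt.wavedec2(a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=level)
    sa, sb = local_std(ca[0]), local_std(cb[0])
    low = (sa * ca[0] + sb * cb[0]) / (sa + sb + 1e-9)  # area-std weighting
    fused = [low]
    for da, db in zip(ca[1:], cb[1:]):                  # detail sub-bands
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))    # max-absolute rule
    return pywt.waverec2(fused, wavelet)
```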
Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang
2015-08-01
Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image; the transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method that aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to that label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
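A minimal sketch of the hybrid rule using scikit-learn, with a Random Forest supplying the label prior, one Gaussian Mixture Model per label supplying the patch likelihood, and the two combined in the log domain under Bayes' rule. Feature extraction, model sizes, and the 0..L-1 label encoding are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

def train(features, patches, labels, n_labels):
    """features/patches: (N, d) per-voxel descriptors; labels: (N,) in 0..L-1,
    so that rf.predict_proba columns align with the GMM list (assumption)."""
    rf = RandomForestClassifier(n_estimators=100).fit(features, labels)
    gmms = [GaussianMixture(n_components=3).fit(patches[labels == l])
            for l in range(n_labels)]
    return rf, gmms

def fuse(rf, gmms, features, patches):
    prior = rf.predict_proba(features)                         # (N, L) prior
    loglik = np.stack([g.score_samples(patches) for g in gmms], axis=1)
    log_post = np.log(prior + 1e-12) + loglik                  # Bayes' rule
    return log_post.argmax(axis=1)                             # fused labels
```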
2D-3D registration using gradient-based MI for image guided surgery systems
NASA Astrophysics Data System (ADS)
Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James
2011-03-01
Registration of preoperative CT data to intra-operative video images is necessary not only to compare the outcome of the vocal fold after surgery with the preplanned shape but also to provide image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to the 2D endoscopic images by finding the corresponding viewpoint between the real camera for the endoscopic images and the virtual camera for the CT scans. Even though mutual information has been successfully used to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to varying light patterns and the shape of the vocal fold. The proposed method calculates mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. The method emphasizes the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which leads to a result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to more accurate registration than a single-resolution scheme.
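A hedged sketch of the similarity term: mutual information is computed on intensity and on gradient-magnitude images, the gradient term is weighted more heavily, and the pose is optimized with the downhill simplex (Nelder-Mead) method. The renderer `render_fn`, the weighting, and the bin count are hypothetical placeholders, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.optimize import minimize

def mutual_info(a, b, bins=32):
    """Histogram-based mutual information of two same-size images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

def grad_mag(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def cost(pose, endo, render_fn, w_grad=2.0):
    rendered = render_fn(pose)          # hypothetical CT renderer for a pose
    return -(mutual_info(endo, rendered)
             + w_grad * mutual_info(grad_mag(endo), grad_mag(rendered)))

# result = minimize(cost, pose0, args=(endo_img, render_fn), method='Nelder-Mead')
```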
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage due to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when there are large gray-value differences between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique that avoids these distortions by automatically adjusting the spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method suppresses the effects of pixels that cause spectral distortions by assigning them weaker weights, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm compared with several other IHS-like methods, such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
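A hedged sketch of the adaptive injection idea: the pan detail (pan minus intensity) is added to each band with a per-pixel weight that shrinks where the difference is large, suppressing the pixels most likely to cause spectral distortion. The exponential weight is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def sa_ihs_fuse(ms_up, pan, k=2.0):
    """ms_up: (H, W, B) multispectral image upsampled to the pan grid;
    pan: (H, W) panchromatic band; k controls how strongly large
    pan-intensity differences are down-weighted."""
    intensity = ms_up.mean(axis=-1)                 # generalized IHS intensity
    detail = pan - intensity
    w = np.exp(-k * np.abs(detail) / (np.abs(detail).max() + 1e-9))
    return np.clip(ms_up + (w * detail)[..., None], 0.0, None)
```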
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter-wave sensors with images rendered from an on-board terrain database to facilitate visually guided flight and ground operations in low-visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images differing in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna
2018-06-05
Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of the different sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest spatial resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution and ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired with the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.
Research on fusion algorithm of polarization image in tetrolet domain
NASA Astrophysics Data System (ADS)
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes, i.e., shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. For the low-frequency coefficients, the average fusion rule is used. According to the differences in edge distribution among the high-frequency sub-band images, the directional high-frequency coefficients are fused by selecting the better coefficients with a region spectral entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method detects image features more effectively and that the fused image has a better subjective visual effect.
NASA Astrophysics Data System (ADS)
Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix
2017-12-01
Estimating crop biophysical and biochemical parameters with high accuracy and at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the facts that (1) simultaneous collection of multi-sensor data using satellites is rare and (2) multi-sensor data collected during a single flight were not accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high spatial resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was generated, and a model to extract the vegetation fraction was developed. Then, spectral indices/features were combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) For biochemical variable estimation, multispectral and thermal data fusion provided the best estimates of nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), while fusion of RGB color information based indices and multispectral data exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was obtained by fusing information from all three sensors, with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion, while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimating the above-mentioned soybean plant traits from multi-sensor data fusion, ELR yields promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multi-sensor data within a machine learning framework can provide relatively accurate estimates of plant traits and valuable insight for high-spatial-precision agriculture and plant stress assessment.
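As an illustration of the trait-regression step, here is a minimal PLSR sketch with scikit-learn on synthetic stand-ins for the fused sensor features; the feature count, trait values and component number are placeholders, not the study's data:

```python
# Hedged sketch of trait prediction from fused features via PLSR.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))   # stand-in for fused features (e.g. VIs, CSM height)
y = X @ rng.normal(size=12) + rng.normal(scale=0.1, size=120)  # synthetic trait

pls = PLSRegression(n_components=4)
scores = cross_val_score(pls, X, y, cv=5,
                         scoring="neg_root_mean_squared_error")
print("cross-validated RMSE:", -scores.mean())
```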
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido
2018-03-01
High-resolution, remotely sensed images of the Earth surface have been proven to be of help in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have been recently proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open source toolbox, conceived to produce flood maps from remotely sensed and other ancillary information, through a data fusion approach. DAFNE is based on Bayesian Networks, and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.
Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver
2014-05-01
A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.
NASA Astrophysics Data System (ADS)
Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel
2008-03-01
Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high-resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
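A minimal sketch of content-based fusion with Gaussian filters in the spirit described: Gaussian-smoothed local variance serves as a cheap per-pixel information measure, standing in for the slower local-entropy weighting. This is an illustrative approximation, not the authors' exact implementation:

```python
# Content-based view fusion: weight each registered view by its local
# variance (a proxy for local sharpness/information), estimated with
# Gaussian filters.
import numpy as np
from scipy.ndimage import gaussian_filter

def content_weights(img, sigma=2.0):
    mean = gaussian_filter(img, sigma)
    return gaussian_filter((img - mean) ** 2, sigma)  # local variance map

def fuse_views(views, sigma=2.0, eps=1e-8):
    w = np.stack([content_weights(v, sigma) for v in views])
    w /= w.sum(axis=0) + eps                 # normalize weights per pixel
    return (w * np.stack(views)).sum(axis=0) # weighted average of views
```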
Introduction to clinical and laboratory (small-animal) image registration and fusion.
Zanzonico, Pat B; Nehmeh, Sadek A
2006-01-01
Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, J; Ruan, S; Li, H
Purpose: Multi-tracer PET imaging is getting more attention in radiotherapy by providing additional tumor volume information such as glucose metabolism and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment tumor sub-areas from images of the two tracers FDG and FMISO. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. As a serious correlation exists among multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for monomodal segmentations based on the individual FDG and FMISO PET images. In addition, high correlation coefficients (0.75 to 0.91) for the Gaussian copula across all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucose consumption are present at the same time. Introducing copulas to model the dependency between the two tracers makes it possible to take information from both tracers into account simultaneously and to deal with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and Archimedean copulas, and will aim to eliminate partial volume effects by considering dependency between neighboring voxels.
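A hedged sketch of the copula idea: a joint likelihood for a voxel's (FDG, FMISO) intensity pair built from Gamma marginals tied together by a Gaussian copula. The shape, scale and correlation values below are illustrative defaults, not fitted parameters:

```python
# Bivariate Gaussian copula with Gamma marginals; u, v must lie in (0, 1).
import numpy as np
from scipy.stats import norm, gamma, multivariate_normal

def gaussian_copula_density(u, v, rho):
    x, y = norm.ppf(u), norm.ppf(v)                    # map to Gaussian scores
    joint = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    return joint.pdf(np.column_stack([x, y])) / (norm.pdf(x) * norm.pdf(y))

def joint_pdf(i_fdg, i_fmiso, a1=3.0, s1=2.0, a2=2.5, s2=1.5, rho=0.8):
    u = gamma.cdf(i_fdg, a1, scale=s1)                 # marginal CDFs
    v = gamma.cdf(i_fmiso, a2, scale=s2)
    return (gaussian_copula_density(u, v, rho)         # dependency term
            * gamma.pdf(i_fdg, a1, scale=s1)           # marginal densities
            * gamma.pdf(i_fmiso, a2, scale=s2))
```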
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
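Under a simplified linear sensor model (an assumption made here for illustration: a bias-subtracted reading d_k ≈ t_k·q for exposure time t_k), the per-pixel weighted least squares estimate of the photoquantity q has a closed form:

```python
# Per-pixel WLS fusion of multi-exposure frames under an assumed linear model.
import numpy as np

def wls_fuse(frames, exposure_times, weights):
    """frames, weights: lists of 2-D arrays (bias already subtracted);
    exposure_times: list of scalars. Solves min_q sum_k w_k (d_k - t_k q)^2,
    whose closed form is q = sum(w t d) / sum(w t^2)."""
    num = sum(w * t * d for w, t, d in zip(weights, exposure_times, frames))
    den = sum(w * t * t for w, t in zip(weights, exposure_times))
    return num / np.maximum(den, 1e-12)
```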
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion against an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE.
Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary
2011-08-01
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for the decomposition and fusion of non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted so that it is highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information, and rich detail information, making them more suitable for human visual characteristics or machine perception.
Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach
NASA Astrophysics Data System (ADS)
Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai
2006-01-01
With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of radiation power limitations, there will always be some trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
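A minimal wavelet-integrated fusion sketch with PyWavelets, keeping the multispectral approximation band for spectral fidelity and the stronger detail coefficients for spatial sharpness. It assumes the MS band has already been upsampled to the PAN grid and omits the paper's false color mapping stage:

```python
# Wavelet-domain MS/PAN fusion sketch (generic rule, not the paper's exact one).
import numpy as np
import pywt

def wavelet_fuse(ms_band, pan, wavelet="db2", levels=3):
    cm = pywt.wavedec2(ms_band, wavelet, level=levels)
    cp = pywt.wavedec2(pan, wavelet, level=levels)
    fused = [cm[0]]  # keep MS approximation: spectral content
    for dm, dp in zip(cm[1:], cp[1:]):
        # per sub-band, keep the larger-magnitude detail coefficient
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dm, dp)))
    return pywt.waverec2(fused, wavelet)
```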
A review of potential image fusion methods for remote sensing-based irrigation management: Part II
USDA-ARS?s Scientific Manuscript database
Satellite-based sensors provide data at either greater spectral and coarser spatial resolutions, or lower spectral and finer spatial resolutions due to complementary spectral and spatial characteristics of optical sensor systems. In order to overcome this limitation, image fusion has been suggested ...
SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.
Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou
2015-11-01
In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (MS) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former preserves accurate spectral information of the MS image, while the latter keeps the sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity in the size of the output image at each iteration. We compare our method against six state-of-the-art image fusion methods on MS image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
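A toy single-band surrogate of such an objective (emphatically not the authors' algorithm or their dynamic gradient sparsity regularizer): a least-squares fit to the upsampled MS band plus a smoothed l1 coupling between the fused gradients and the panchromatic gradients, minimized by plain gradient descent:

```python
# Toy fusion objective: 0.5||x - ms_up||^2 + lam * ||grad(x) - grad(pan)||_1,
# with the l1 term smoothed (Charbonnier) so gradient descent applies.
import numpy as np

def fwd_grad(img):
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences,
    gy = np.diff(img, axis=0, append=img[-1:, :])   # Neumann boundary
    return gx, gy

def fuse_band(ms_up, pan, lam=0.2, step=0.1, iters=200, eps=1e-2):
    x = ms_up.astype(float).copy()
    px, py = fwd_grad(pan)
    for _ in range(iters):
        gx, gy = fwd_grad(x)
        rx = (gx - px) / np.sqrt((gx - px) ** 2 + eps)  # smoothed sign()
        ry = (gy - py) / np.sqrt((gy - py) ** 2 + eps)
        # adjoint of the forward-difference operator (negative divergence)
        reg = -np.diff(rx, axis=1, prepend=0.0) - np.diff(ry, axis=0, prepend=0.0)
        x -= step * ((x - ms_up) + lam * reg)
    return x
```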
Spatial Aspects of Multi-Sensor Data Fusion: Aerosol Optical Thickness
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory; Zubko, V.; Gopalan, A.
2007-01-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) investigated the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the multitude of NASA remote sensing data sets, and as part of a larger effort to integrate this capability in the GES-DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni). This initial study focused on merging daily mean Aerosol Optical Thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood technique to merge the pixel values where available. The algorithm was applied to two regional AOT subsets (with mostly regular and irregular gaps, respectively) and a set of AOT fields that differed only in the size and location of artificially created gaps. The Cumulative Semivariogram (CSV) was found to be sensitive to the spatial distribution of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
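Where both sensors report a value, maximum-likelihood merging with (assumed) independent Gaussian errors reduces to an inverse-variance weighted mean, falling back to the available sensor in gap pixels; a sketch:

```python
# ML merge of co-located Terra/Aqua AOT pixels; NaN marks missing retrievals.
import numpy as np

def ml_merge(terra, aqua, var_terra, var_aqua):
    w1, w2 = 1.0 / var_terra, 1.0 / var_aqua
    both = (w1 * terra + w2 * aqua) / (w1 + w2)   # inverse-variance weighting
    return np.where(np.isnan(terra), aqua,
                    np.where(np.isnan(aqua), terra, both))
```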
NASA Astrophysics Data System (ADS)
Gupta, S.; Paar, G.; Muller, J. P.; Tao, Y.; Tyler, L.; Traxler, C.; Hesina, G.; Huber, B.; Nauschnegg, B.
2014-12-01
The FP7-SPACE project PRoViDE has assembled a major portion of the imaging data gathered so far from rover vehicles, landers and probes on extra-terrestrial planetary surfaces into a unique database, bringing them into a common planetary geospatial context and providing access to a complete set of 3D vision products. One major aim of PRoViDE is the fusion between orbiter and rover image products. To close the gap between HiRISE imaging resolution (down to 25 cm for the OrthoRectified Image (ORI), down to 1 m for the DTM) and surface vision products, images from multiple HiRISE acquisitions are combined into a super-resolution data set (Tao & Muller, 2014), increasing the resolution of the ortho images to 5 cm. Furthermore, shape-from-shading is applied to one of the ORIs at its original resolution for refinement of the HiRISE DTM, leading to DTM ground resolutions of up to 25 cm. After texture-based co-registration with these refined orbiter 3D products, MER PanCam and NavCam 3D image products can be smoothly pasted into a multi-resolution 3D data representation. Typical results from the MER mission are presented by a dedicated real-time rendering tool which is fed by a hierarchical 3D data structure able to cope with all involved scales, from global planetary scale down to close-up reconstructions in the mm range. This allows us to explore and analyze the geological characteristics of rock outcrops, for example the detailed geometry and internal features of sedimentary rock layers, to aid paleoenvironmental interpretation. This integrated approach enables more efficient development of geological models of martian rock outcrops. The rendering tool also provides measurement tools to obtain geospatial data of surface points and distances between them. We report on novel scientific use cases and the added-value potential of the resultant high-quality data set and presentation means to support further geologic investigations. The research leading to these results has received funding from the EC's 7th Framework Programme (FP7/2007-2013) under grant agreement n° 312377.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide image is an efficient way to get an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm, which uses an HR color image as a guide image and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to get the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images, both numerically and visually.
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao
2015-12-01
Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary. The high-frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. Experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual effect and objective evaluation.
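A sketch of a sparse-representation rule for the low-frequency bands, using scikit-learn in place of the paper's dictionary-learning setup (patch extraction and image reassembly are omitted; the activity-level rule shown is one common choice, assumed here):

```python
# SR fusion of low-frequency patches with a jointly learned dictionary.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def sr_fuse_patches(patches_a, patches_b, n_atoms=64, n_nonzero=5):
    """patches_a/b: (n_patches, patch_dim) low-frequency patches."""
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=n_nonzero,
    ).fit(np.vstack([patches_a, patches_b]))      # learn a joint dictionary
    ca, cb = dico.transform(patches_a), dico.transform(patches_b)
    # activity-level rule: per patch, keep the sparse code with larger l1 energy
    pick = (np.abs(ca).sum(axis=1, keepdims=True)
            >= np.abs(cb).sum(axis=1, keepdims=True))
    fused_codes = np.where(pick, ca, cb)
    return fused_codes @ dico.components_         # reconstruct fused patches
```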
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum-modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from a singular value decomposition of local areas in each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
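A heavily simplified unit-linking PCNN sketch (not the adaptive dual-channel ADS-PCNN of the paper; the linking, threshold decay and constants below are illustrative): each neuron fires when its stimulus, boosted by fired neighbors, exceeds a decaying threshold, and the resulting first-firing-time map can serve as a fusion activity measure:

```python
# Simplified unit-linking PCNN producing a first-firing-time map.
import numpy as np
from scipy.ndimage import uniform_filter

def pcnn_fire_times(stimulus, iters=40, beta=0.5, alpha=0.2, V=20.0):
    S = (stimulus - stimulus.min()) / (np.ptp(stimulus) + 1e-12)
    Y = np.zeros_like(S)       # firing map
    theta = np.ones_like(S)    # dynamic threshold
    T = np.zeros_like(S)       # first-firing-time map
    for n in range(1, iters + 1):
        L = (uniform_filter(Y, size=3) > 0).astype(float)  # any neighbor fired
        U = S * (1.0 + beta * L)                           # internal activity
        Y = (U > theta).astype(float)
        T[(Y > 0) & (T == 0)] = n                          # record first firing
        theta = np.exp(-alpha) * theta + V * Y             # decay + refractory kick
    return T
```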
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
Comparison of LISS-IV and LISS-III+LISS-IV merged data for classification of crops
NASA Astrophysics Data System (ADS)
Hebbar, R.; Sesha Sai, M. V. R.
2014-11-01
The Resourcesat-1 satellite, with its unique capability of simultaneous acquisition of multispectral images at different spatial resolutions (AWiFS, LISS-III and LISS-IV MX / Mono), has immense potential for crop inventory. The present study was carried out to select a suitable LISS-IV MX band for data fusion and to evaluate it for delineating different crops in a multi-cropped area. Image fusion techniques, namely intensity-hue-saturation (IHS), principal component analysis (PCA), Brovey, high pass filter (HPF) and wavelet methods, were used for merging LISS-III and LISS-IV Mono data. The merged products were evaluated visually and through the universal image quality index, ERGAS and classification accuracy. The study revealed that the red band of LISS-IV MX data was the optimal band for merging with LISS-III data in terms of maintaining both spectral and spatial information, thus closely matching the multispectral LISS-IV MX data. Among the five data fusion techniques, the wavelet method was found to be superior in retaining image quality and yielding higher classification accuracy compared to the commonly used IHS, PCA and Brovey methods. The study indicated that LISS-IV data in mono mode, with its wider swath of 70 km, could be exploited in place of the 24 km swath LISS-IV MX data by selecting an appropriate fusion technique and acquiring the monochromatic data in the red band.
A novel design for scintillator-based neutron and gamma imaging in inertial confinement fusion
NASA Astrophysics Data System (ADS)
Geppert-Kleinrath, Verena; Cutler, Theresa; Danly, Chris; Madden, Amanda; Merrill, Frank; Tybo, Josh; Volegov, Petr; Wilde, Carl
2017-10-01
The LANL Advanced Imaging team has been providing reliable 2D neutron imaging of the burning fusion fuel at NIF for years, revealing possible multi-dimensional asymmetries in the fuel shape and therefore calling for additional views. Adding a passive imaging system using image plate techniques along a new polar line of sight has recently demonstrated the merit of 3D neutron image reconstruction. Now, the team is in the process of designing a new active neutron imaging system for an additional equatorial view. The design will include a gamma imaging system as well, to allow imaging of the carbon in the ablator of the NIF fuel capsules, constraining the burning fuel shape even further. The selection of ideal scintillator materials for a position-sensitive detector system is the key component of the new design. A comprehensive study of advanced scintillators has been carried out at the Los Alamos Neutron Science Center and the OMEGA Laser Facility in Rochester, NY. Neutron radiography using a fast-gated CCD camera system delivers measurements of resolution, light output and noise characteristics. The measured performance parameters inform the novel design, from which we conclude that monolithic scintillators are preferable to their pixelated counterparts.
Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian
2017-01-01
It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent for clinical routines. The main idea of our works is to extend the conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by the patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through the random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from the low-quality brain MR images. PMID:29062159
The API 120: A portable neutron generator for the associated particle technique
NASA Astrophysics Data System (ADS)
Chichester, D. L.; Lemchak, M.; Simpson, J. D.
2005-12-01
The API 120 is a lightweight, portable neutron generator for active neutron interrogation (ANI) field work exploiting the associated particle technique. It incorporates a small sealed-tube accelerator, an all digital control system with smart on-board diagnostics, a simple platform-independent control interface and a comprehensive safety interlock philosophy with provisions for wireless control. The generator operates in a continuous output mode using either the D-D or D-T fusion reactions. To register the helium ion associated with fusion, the system incorporates a high resolution fiber optic imaging plate that may be coated with one of several different phosphors. The ion beam on the target measures less than 2 mm in diameter, thus making the system suitable for multi-dimensional imaging. The system is rated at 1E7 n/s for over 1000 h although higher yields are possible. The overall weight is 12 kg; power consumption is less than 50 W.
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
Low frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit low frequency error analysis and calibration, which includes detection of optical axis angle variation of the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Third, we use relative calibration and information fusion among star sensors to realize datum unity and high-precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model in this paper describes the law of low frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.
A detail-preserved and luminance-consistent multi-exposure image fusion algorithm
NASA Astrophysics Data System (ADS)
Wang, Guanquan; Zhou, Yue
2018-04-01
When irradiance across a scene varies greatly, we can hardly get an image of the scene without over- or under-exposed areas, because of the constraints of cameras. Multi-exposure image fusion (MEF) is an effective method to deal with this problem by fusing multi-exposure images of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by contribution adjustment using the luminance information between blocks; a detail-preserving smoothing filter stitches blocks smoothly without losing details. Experimental results show that the proposed method performs well in preserving luminance consistency and details.
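For contrast, here is a minimal per-pixel weighting sketch in the Mertens exposure-fusion style (an assumed baseline, not this paper's block-based method); naive per-pixel weights like these are exactly what tends to break luminance consistency across regions, which is the problem the paper's contribution targets:

```python
# Baseline exposure fusion: weight each exposure by well-exposedness.
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favouring mid-range pixels; img assumed in [0, 1]
    return np.exp(-0.5 * ((img - 0.5) / sigma) ** 2)

def fuse_exposures(stack):
    w = np.stack([well_exposedness(im) for im in stack])
    w /= w.sum(axis=0) + 1e-12           # normalize weights per pixel
    return (w * np.stack(stack)).sum(axis=0)
```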
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing application. Generally used indices require a high spatial resolution multispectral (MS) image for reference, which is not always readily available. Meanwhile, the fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). As an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index at the highest resolution without a reference MS image using Gaussian Scale Space (GSS) technology that could simulate the HVS. The spatial details and spectral information of original and fused images are first separated in GSS, and the qualities are evaluated using the proposed spatial and spectral quality index respectively. The overall quality is determined without a reference MS image by a combination of the proposed two indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation compared with other widely used indices that may or may not require reference images.
Drug-related webpages classification based on multi-modal local decision fusion
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin
2018-03-01
In this paper, multi-modal local decision fusion is used for drug-related webpage classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-level feature of the BOW model. For each instance in a webpage, the seven SVMs give seven labels for its image, and another seven labels are given by searching for the names of drug-taking instruments and cannabis in its related text. Concatenating the seven image labels and the seven text labels, the representations of the instances in the webpages are generated. Last, Multi-Instance Learning is used to classify the drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
13. SUPPLEMENTARY NOTES 14. ABSTRACT This research developed a multiresolution image fusion scheme based on guided filtering . Guided filtering can...effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale...details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Image fusion has recently taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
NASA Astrophysics Data System (ADS)
Tang, Xiaojing
Fast and accurate monitoring of tropical forest disturbance is essential for understanding current patterns of deforestation as well as helping eliminate illegal logging. This dissertation explores the use of data from different satellites for near real-time monitoring of forest disturbance in tropical forests, including: development of new monitoring methods; development of new assessment methods; and assessment of the performance and operational readiness of existing methods. Current methods for accuracy assessment of remote sensing products do not address the priority of near real-time monitoring of detecting disturbance events as early as possible. I introduce a new assessment framework for near real-time products that focuses on the timing and the minimum detectable size of disturbance events. The new framework reveals the relationship between change detection accuracy and the time needed to identify events. In regions that are frequently cloudy, near real-time monitoring using data from a single sensor is difficult. This study extends the work by Xin et al. (2013) and develops a new time series method (Fusion2) based on fusion of Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data. Results of three test sites in the Amazon Basin show that Fusion2 can detect 44.4% of the forest disturbance within 13 clear observations (82 days) after the initial disturbance. The smallest event detected by Fusion2 is 6.5 ha. Also, Fusion2 detects disturbance faster and has less commission error than more conventional methods. In a comparison of coarse resolution sensors, MODIS Terra and Aqua combined provides faster and more accurate detection of disturbance events than VIIRS (Visible Infrared Imaging Radiometer Suite) and MODIS single sensor data. The performance of near real-time monitoring using VIIRS is slightly worse than MODIS Terra but significantly better than MODIS Aqua. New monitoring methods developed in this dissertation provide forest protection organizations the capacity to monitor illegal logging events promptly. In the future, combining two Landsat and two Sentinel-2 satellites will provide global coverage at 30 m resolution every 4 days, and routine monitoring may be possible at high resolution. The methods and assessment framework developed in this dissertation are adaptable to newly available datasets.
Land use/cover classification in the Brazilian Amazon using satellite images.
Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira
2012-09-01
Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
Operational data fusion framework for building frequent Landsat-like imagery in a cloudy region
USDA-ARS?s Scientific Manuscript database
An operational data fusion framework is built to generate dense time-series Landsat-like images for a cloudy region by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is integrated in ...
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the development of a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation.
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has become a routine examination approach for skin lesion diagnosis, and accurate segmentation is the first step in automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in the viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network based on a very deep dense deconvolution network for dermoscopic images. Specifically, deep dense layers and the generic multi-path Deep RefineNet are combined to improve segmentation performance. The deep representations of all available layers are aggregated to form global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via contextual information. Finally, we apply the dense deconvolution layer to smooth the segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the dense deconvolution network, the average time for processing one test image with our proposed framework was 0.253 s.
Fusion of infrared polarization and intensity images based on improved toggle operator
NASA Astrophysics Data System (ADS)
Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua
2018-01-01
The integration of infrared polarization and intensity images has become a new topic in infrared image understanding and interpretation. The abundant infrared details and targets from the infrared image and the salient edge and shape information from the polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and greatly reduce redundancy among the different scales. First, the multi-scale image features of the infrared polarization and intensity images are extracted at different scale levels by the improved multi-scale toggle operator. Second, the redundancy of the features among different scales is reduced by using the spatial scale. Third, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying mean-value weighting to the smoothed source images. Finally, the fused image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective inspection of the experimental results indicate that the proposed method performs better in preserving detail and edge information as well as improving image contrast.
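A generic sketch of toggle-based feature extraction (the classical toggle-contrast operator, not the paper's improved multi-scale variant): each pixel is mapped to the nearer of its local dilation and erosion, and the residue between the image and its toggle at increasing scales serves as a multi-scale feature:

```python
# Classical toggle-contrast operator and a simple multi-scale feature stack.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def toggle(img, size):
    d = grey_dilation(img, size=(size, size))
    e = grey_erosion(img, size=(size, size))
    # move each pixel to the closer morphological envelope
    return np.where(d - img < img - e, d, e)

def multiscale_features(img, scales=(3, 5, 7)):
    feats = []
    prev = img.astype(float)
    for s in scales:
        t = toggle(prev, s)
        feats.append(prev - t)   # residue: feature content at this scale
        prev = t
    return feats
```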
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm is proposed for medical image fusion in this paper, which combines a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weighting factor. For the fusion of detail images, NSDFB is applied to decompose each detail image into multiple directional sub-images, which are fused by pulse coupled neural networks (PCNN), respectively. The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework to make the weighting coefficients eventually to be optimal for the label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encode the transition from the patch representation coefficients in image domain to the optimal weights for label fusion. Our proposed framework is general to augment the label fusion performance of the current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with single-layer dictionary. PMID:26942233
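For context, a minimal sketch of the single-layer baseline this framework improves on: similarity weights computed between image patches are applied directly as voting weights in the label domain (the Gaussian kernel and bandwidth below are assumptions for illustration):

```python
# Baseline patch-based label fusion by similarity-weighted voting.
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.1):
    """target_patch: (d,) vector; atlas_patches: (n, d); atlas_labels: (n,)
    binary labels. Returns a soft label in [0, 1]; threshold at 0.5."""
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / (h ** 2 * target_patch.size))  # image-domain weights
    return float(w @ atlas_labels) / (w.sum() + 1e-12)  # applied in label domain
```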
NASA Astrophysics Data System (ADS)
Vercauteren, Tom; Doussoux, François; Cazaux, Matthieu; Schmid, Guillaume; Linard, Nicolas; Durin, Marie-Amélie; Gharbi, Hédi; Lacombe, François
2013-03-01
Since its inception in the field of in vivo imaging, endomicroscopy through optical fiber bundles, or probe-based Confocal Laser Endomicroscopy (pCLE), has extensively proven the benefit of in situ and real-time examination of living tissues at the microscopic scale. By continuously increasing image quality, reducing invasiveness and improving system ergonomics, Mauna Kea Technologies has turned pCLE not only into an irreplaceable research instrument for small animal imaging, but also into an accurate clinical decision making tool with applications as diverse as gastrointestinal endoscopy, pulmonology and urology. The current implementation of pCLE relies on a single fluorescence spectral band making different sources of in vivo information challenging to distinguish. Extending the pCLE approach to multi-color endomicroscopy therefore appears as a natural plan. Coupling simultaneous multi-laser excitation with minimally invasive, microscopic resolution, thin and flexible optics, allows the fusion of complementary and valuable biological information, thus paving the way to a combination of morphological and functional imaging. This paper will detail the architecture of a new system, Cellvizio Dual Band, capable of video rate in vivo and in situ multi-spectral fluorescence imaging with a microscopic resolution. In its standard configuration, the system simultaneously operates at 488 and 660 nm, where it automatically performs the necessary spectral, photometric and geometric calibrations to provide unambiguously co-registered images in real-time. The main hardware and software features, including calibration procedures and sub-micron registration algorithms, will be presented as well as a panorama of its current applications, illustrated with recent results in the field of pre-clinical imaging.
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high resolution visible/near-infrared low-light sensor and a moderate resolution uncooled thermal sensor provides an efficient way for multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.), and it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm3 cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also seen constant progress in terms of readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications. Readout noise and dark current are the two major obstacles. The low dynamic range of silicon sensors in high sensitivity mode is also an important limiting factor, which leads to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution remains attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of the image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWAP intensified wide dynamic range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell mode logarithmic photodiode pixel design that covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new generation logarithmic sensor can be used directly, without any image processing, and provides instant light accommodation. The complete module is only slightly bigger than a simple ANVIS-format I2 tube, with <500 mW power consumption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stutman, D.; Tritz, K.; Finkenthal, M.
New diagnostic and sensor designs are needed for future burning plasma (BP) fusion experiments, having good space and time resolution and capable of prolonged operation in the harsh BP environment. We evaluate the potential of multi-energy x-ray imaging with filtered detector arrays for BP diagnostic and control. Experimental studies show that this simple and robust technique enables measuring with good accuracy, speed, and spatial resolution the T_e profile, impurity content, and MHD activity in a tokamak. Applied to the BP, this diagnostic could also serve for non-magnetic sensing of the plasma position, centroid, ELM, and RWM instability. BP-compatible x-ray sensors are proposed using 'optical array' or 'bi-cell' detectors.
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps of the impacted area in the case of a nuclear material release. The proposed methodology involves integration of results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over the Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high temporal resolution MODIS data using a time-series approach. In the case of a nuclear accident, high spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes (forest, urban, water and vegetation) using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies of the fusion-based approach were in the neighborhood of 80% when compared with GIS data sets from New York State. The classifications were augmented by this fused approach, with a few supplementary advantages such as correction for cloud cover and independence from time of year. We concluded that this method can generate highly accurate land cover maps using coarse spatial resolution time series satellite imagery and a single-date, high spatial resolution, multi-spectral image.
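One simple way to realize such a probability-based integration of two classifiers is a per-pixel product of class posteriors; the sketch below is our illustration under that assumption, with hypothetical input arrays, not the study's exact rule:

```python
import numpy as np

def fuse_classifications(prob_modis, prob_rapideye):
    """Fuse two per-pixel class-probability stacks of shape (n_classes, H, W):
    multiply the posteriors, renormalize, and take the most probable class."""
    joint = prob_modis * prob_rapideye
    joint /= joint.sum(axis=0, keepdims=True) + 1e-12   # renormalize per pixel
    return joint.argmax(axis=0)                         # fused class index map
```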
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
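The key property of a label fusion rule that "eliminates overlap and gap voxels" is that every voxel receives exactly one label. A minimal sketch of that idea (an argmax over per-organ vote maps, not necessarily the authors' algorithm) is:

```python
import numpy as np

def fuse_labels(prob_maps):
    """prob_maps: array (n_organs, X, Y, Z) of per-organ vote/probability maps
    from registered atlases. Assigning each voxel its single highest-scoring
    organ yields mutually exclusive labels, so no overlap or gap voxels remain."""
    background = 1.0 - prob_maps.sum(axis=0).clip(0, 1)        # implicit background class
    stacked = np.concatenate([background[None], prob_maps], axis=0)
    return np.argmax(stacked, axis=0)                          # 0 = background, 1..n = organs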
An automated procedure for detection of IDP's dwellings using VHR satellite imagery
NASA Astrophysics Data System (ADS)
Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre
2011-11-01
This paper presents results for the estimation of dwelling structures in the Al Salam IDP camp, Southern Darfur, based on very high resolution multispectral satellite images and mathematical morphology analysis. A series of image processing procedures, feature extraction methods and textural analyses have been applied in order to provide reliable information about dwelling structures. One issue in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in IDP camps, which makes the exploitation of multispectral information crucial. This study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. The additional fusion of the high-resolution panchromatic band with the lower resolution multispectral bands of the WorldView-2 satellite has a positive influence on the results and can thereby be useful for humanitarian aid agencies, supporting decisions and population estimates, especially in situations when frequent revisits by space imaging systems are the only possibility for continued monitoring.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
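What the pixel averaging circuit computes can be expressed in a few lines of software; the following is our illustrative emulation (names and window convention are ours), not the patented hardware design:

```python
import numpy as np

def window_average(frame, window, factor):
    """Average `factor` x `factor` pixel blocks inside a tracking window,
    emulating a reduced-resolution readout of that region of the pixel array."""
    r0, c0, r1, c1 = window                       # (top, left, bottom, right)
    roi = frame[r0:r1, c0:c1].astype(float)
    h = (roi.shape[0] // factor) * factor         # trim to a whole number of blocks
    w = (roi.shape[1] // factor) * factor
    roi = roi[:h, :w]
    return roi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```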
NASA Astrophysics Data System (ADS)
Hu, Rongming; Wang, Shu; Guo, Jiao; Guo, Liankun
2018-04-01
Impervious surface area and vegetation coverage are important biophysical indicators of urban surface features that can be derived from medium-resolution images. However, remote sensing data obtained by a single sensor are easily affected by many factors, such as weather conditions, and their spatial and temporal resolution cannot meet the needs of soil erosion estimation. Therefore, integrated multi-source remote sensing data are needed to carry out vegetation coverage estimation at high spatio-temporal resolution. Vegetation coverage and impervious surface data at two spatial and temporal scales were obtained from MODIS and Landsat 8 remote sensing images. Based on the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the vegetation coverage data of the two scales were fused to obtain fused vegetation coverage (ESTARFM FVC) and an impervious layer with high spatio-temporal resolution (30 m, 8 day). On this basis, the spatial variability of the impervious surface and vegetation cover landscape in the study area was measured by means of statistics and spatial autocorrelation analysis. The results showed that: 1) the ESTARFM FVC and impervious surface have high accuracy and can characterize the biophysical components covering the land surface; 2) the average impervious surface proportion and the spatial configuration of each area differ, affected by natural conditions and urbanization. In the urban area of Xi'an, which has typical characteristics of spontaneous urbanization, landscapes are fragmented and have less spatial dependence.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available, robust, and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
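To see why a pixel-level convex formulation is embarrassingly parallel, note that the simplest such objective has an independent closed-form solution at every pixel. The sketch below is our deliberately reduced illustration (a two-term quadratic on the luminance channel, assuming the color image has already been upsampled to the pan grid and converted to YCbCr); the paper's actual objective is more elaborate:

```python
import numpy as np

def ls_fuse(color_ycbcr, pan, alpha=0.8):
    """Per-pixel least squares: the fused luminance minimizes
    alpha*(y - pan)^2 + (1 - alpha)*(y - y0)^2, whose minimizer is the
    convex combination below; chrominance is carried over unchanged."""
    fused = color_ycbcr.astype(float).copy()
    fused[..., 0] = alpha * pan + (1 - alpha) * color_ycbcr[..., 0]
    return fused
```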
Information Fusion and Visualisation in Anti Asymmetric Warfare
2006-12-01
thermal detectors (e.g. bolometers). They used the MWIR and LWIR part of the IR spectrum. Quantum detectors realize an image rate of over 100 Hz while... panorama image by image fusion of several sensor components. EO designators are distinguished through their agility and increased resolution
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges
Lemeshewsky, George P.; Schowengerdt, Robert A.
2000-01-01
Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
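The pixel-based maximum-selection rule on detail coefficients is a classic construction and is easy to sketch with PyWavelets; the version below is our illustration (keeping the NIR approximation band for spectral fidelity), not the authors' exact implementation:

```python
import numpy as np
import pywt

def dwt_sharpen(nir, pan, wavelet="db4", levels=3):
    """Fuse by keeping the NIR approximation (spectral content) and, at each
    level and orientation, the detail coefficient of larger magnitude."""
    c_nir = pywt.wavedec2(nir, wavelet, level=levels)
    c_pan = pywt.wavedec2(pan, wavelet, level=levels)
    fused = [c_nir[0]]                                # approximation band from NIR
    for dn, dp in zip(c_nir[1:], c_pan[1:]):          # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dn, dp)))
    return pywt.waverec2(fused, wavelet)
```

The shift-invariant variant that the abstract reports as an improvement could be obtained by substituting the stationary transform (pywt.swt2 / pywt.iswt2) for wavedec2 / waverec2.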
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to achieve the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
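The final weighted-fusion step reduces to a per-pixel convex combination driven by the saliency maps. A minimal sketch (ours; the integrated saliency maps here would come from the JSR-based detector described above, but any saliency estimator could stand in):

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-6):
    """Per-pixel weighted fusion driven by precomputed saliency maps:
    pixels where the IR image is more salient lean toward the IR value."""
    w = sal_ir / (sal_ir + sal_vis + eps)   # normalized IR weight in [0, 1]
    return w * ir.astype(float) + (1.0 - w) * vis.astype(float)
```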
Fusion of multi-tracer PET images for dose painting.
Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle
2014-10-01
PET imaging with the FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed. They provide complementary information for the definition of BTVs. Our aim is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to the noise and the partial volume effect leading, respectively, to the presence of uncertainty and imprecision in PET images, the segmentation and the fusion of PET images are difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTVs from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels for dealing with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are, respectively, reduced using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performance is evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods.
Calhoun, Vince D; Sui, Jing
2016-01-01
It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565
Integrated Multi-Aperture Sensor and Navigation Fusion
2010-02-01
Visio, Springer-Verlag Inc., New York, 2004. [3] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Third...formulate Kalman filter vision/inertial measurement observables for other images without the need to know (or measure) their feature ranges. As compared...Internal Data Fusion Multi-aperture/INS data fusion is formulated in the feature domain using the complementary Kalman filter methodology [3]. In this
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on a fusion algorithm combining the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature values of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and GLCM fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and that the classification accuracy of this method was significantly improved. PMID:28640181
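The GLCM side of this pipeline is standard and available off the shelf; a minimal sketch with scikit-image (our illustration; the paper's direction-measure weighting would then be applied on top of the per-angle features):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, distances=(1,),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Texture features from a gray-level co-occurrence matrix; `patch` must
    be uint8. Returns one value per (distance, angle) pair for each property."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop)
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```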
Remote Sensing Data Fusion to Detect Illicit Crops and Unauthorized Airstrips
NASA Astrophysics Data System (ADS)
Pena, J. A.; Yumin, T.; Liu, H.; Zhao, B.; Garcia, J. A.; Pinto, J.
2018-04-01
Remote sensing data fusion has been playing an increasingly important role in crop planting area monitoring, especially for crop area information acquisition. Multi-temporal data and multi-spectral time series are two major means of improving crop identification accuracy. Remote sensing fusion provides high quality multi-spectral and panchromatic images in terms of spectral and spatial information, respectively. In this paper, we take one step further and demonstrate the application of remote sensing data fusion to detecting illicit crops through LSMM, GOBIA, and MCE analysis of strategic information. This methodology emerges as a complementary and effective strategy to control and eradicate illicit crops.
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, a visible image collected by a single sensor can express the details of the shape, color and texture of a target very well, but because of the haze its sharpness is low and parts of the target subject are lost. An infrared image collected by a single sensor, thanks to thermal radiation and its strong penetration ability, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, the improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, the improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight occluded infrared targets for target recognition.
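For reference, the baseline (unimproved) dark channel prior step looks roughly like this; a sketch of He et al.'s classic formulation, assuming a float RGB image in [0, 1], and standing in for whatever improvement the paper makes:

```python
import numpy as np
from scipy.ndimage import grey_erosion

def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
    """Baseline dark-channel-prior dehazing: estimate atmospheric light A and
    transmission t, then recover scene radiance J = (I - A) / t + A."""
    dark = grey_erosion(img.min(axis=2), size=(patch, patch))    # dark channel
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    t = 1.0 - omega * grey_erosion((img / A).min(axis=2), size=(patch, patch))
    t = np.clip(t, t_min, 1.0)[..., None]                        # avoid division blow-up
    return (img - A) / t + A
```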
Robust multi-atlas label propagation by deep sparse representation
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong
2016-01-01
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, such an assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over other minority patterns. The violation of these basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods. PMID:27942077
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging can acquire multi-dimensional polarization information in addition to the traditional intensity information, thus improving the probability of target detection and recognition. In research on polarization imaging of targets in turbid media, image fusion is helpful for obtaining high quality images. Based on laser polarization imaging at visible wavelengths, the corresponding linearly polarized intensity images were obtained by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were acquired. Image fusion processing is then introduced: we mainly study the processing of the acquired polarization images with different polarization image fusion methods, discuss several fusion methods with superior performance for turbid media, and give the processing results with tables of analysis data. Pixel-level, feature-level and decision-level fusion algorithms were then applied to fuse the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization image becomes more and more blurred and its quality worsens, while the contrast of the fused image is obviously improved compared with a single image. Finally, the reasons for the increase in image contrast with fused polarized light are analyzed.
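The DOLP images fused here are derived from intensity measurements at several polarizer angles via the linear Stokes parameters; this computation is standard, and a minimal sketch (assuming the usual four angles 0°, 45°, 90°, 135°) is:

```python
import numpy as np

def dolp_image(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer-angle intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
    s1 = i0 - i90                          # horizontal vs. vertical component
    s2 = i45 - i135                        # +45 vs. -45 degree component
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
```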
Computation Methods for NASA Data-streams for Agricultural Efficiency Applications
NASA Astrophysics Data System (ADS)
Shrestha, B.; O'Hara, C. G.; Mali, P.
2007-12-01
Temporal Map Algebra (TMA) is a novel technique for analyzing time series of satellite imagery using simple algebraic operators. It treats a time series of images as a three-dimensional dataset, where two dimensions encode planimetric position on the earth's surface and the third dimension encodes time. Spatio-temporal analytical processing methods such as TMA, which use moderate spatial resolution satellite imagery of high temporal resolution to create multi-temporal composites, are data intensive as well as computationally intensive. TMA analysis for multi-temporal composites provides dramatically enhanced usefulness and will yield previously unavailable capabilities to user communities, if deployment is coupled with significant High Performance Computing (HPC) capabilities and interfaces are designed to deliver the full potential of these new technological developments. In this research, cross-platform data fusion and adaptive filtering using TMA were employed to create highly useful daily datasets and cloud-free, high temporal resolution vegetation index (VI) composites with enhanced information content for vegetation and bio-productivity monitoring, surveillance, and modeling. Fusion of Normalized Difference Vegetation Index (NDVI) data created from Aqua and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) surface-reflectance data (MOD09) enables the creation of daily composites of immense value to a broad spectrum of global and national applications; such products are highly desired by many natural resources agencies such as USDA/FAS/PECAD. Utilizing data streams collected by similar sensors on different platforms that transit the same areas at slightly different times of day offers the opportunity to develop fused data products with enhanced cloud-free and reduced-noise characteristics. Establishing a Fusion Quality Confidence Code (FQCC) provides a metadata product that quantifies the method of fusion for a given pixel and enables a relative quality and confidence factor to be established for each daily pixel value. When coupled with metadata that quantify the source sensor, day and time of acquisition, and the fusion method of each pixel used to create the daily product, a wealth of information is available to assist in deriving new data and information products. These newly developed abilities to create highly useful daily data sets imply that temporal composites for a geographic area of interest may be created for user-defined temporal intervals that emphasize a user-designated day of interest. At the GeoResources Institute, Mississippi State University, solutions have been developed to create custom composites and cross-platform satellite data fusion using TMA, which are useful for National Aeronautics and Space Administration (NASA) Rapid Prototyping Capability (RPC) and Integrated System Solutions (ISS) experiments for agricultural applications.
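Because TMA treats the stack as a 3-D array, a same-day Terra/Aqua fusion rule with an FQCC-style per-pixel code reduces to a few array operations. The sketch below is our simplified illustration (boolean QA masks, a three-value confidence code), not the project's actual operator set:

```python
import numpy as np

def fuse_daily_ndvi(terra_ndvi, aqua_ndvi, terra_qa, aqua_qa):
    """Per-pixel fusion of same-day Terra and Aqua NDVI: average where both
    observations are clear, otherwise take whichever is clear. QA arrays are
    booleans (True = cloud-free)."""
    both = terra_qa & aqua_qa
    fused = np.where(both, 0.5 * (terra_ndvi + aqua_ndvi),
                     np.where(terra_qa, terra_ndvi, aqua_ndvi))
    # Simple confidence code: 2 = both clear, 1 = Terra only, 0 = Aqua only / neither.
    fqcc = both.astype(np.uint8) * 2 + (terra_qa & ~aqua_qa)
    return fused, fqcc
```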
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the ground, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the original hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
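The on-board half of this scheme really is simple: one spatial averaging and one spectral averaging. A sketch under the assumptions that the cube dimensions divide exactly by the chosen factors (a simplification of whatever degradation operators the paper evaluates):

```python
import numpy as np

def compress_by_degradation(cube, spatial_factor=4, n_bands=8):
    """On-board side of fusion-based compression: produce (1) a spatially
    degraded hyperspectral cube and (2) a spectrally degraded multispectral
    image. cube has shape (H, W, B); assumes H, W divide by spatial_factor
    and B divides by n_bands."""
    h, w, b = cube.shape
    f = spatial_factor
    lr_hs = cube.reshape(h // f, f, w // f, f, b).mean(axis=(1, 3))  # spatial mean
    hr_ms = cube.reshape(h, w, n_bands, b // n_bands).mean(axis=3)   # band binning
    return lr_hs, hr_ms
```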
NASA Astrophysics Data System (ADS)
Sah, Shagan
An increasingly important application of remote sensing is to provide decision support during emergency response and disaster management efforts. Land cover maps constitute one such useful application product during disaster events; if generated rapidly after any disaster, such map products can contribute to the efficacy of the response effort. In light of recent nuclear incidents, e.g., after the earthquake/tsunami in Japan (2011), our research focuses on constructing rapid and accurate land cover maps of the impacted area in case of an accidental nuclear release. The methodology involves integration of results from two different approaches, namely coarse spatial resolution multi-temporal and fine spatial resolution imagery, to increase classification accuracy. Although advanced methods have been developed for classification using high spatial or temporal resolution imagery, only a limited amount of work has been done on fusion of these two remote sensing approaches. The presented methodology thus involves integration of classification results from two different remote sensing modalities in order to improve classification accuracy. The data used included RapidEye and MODIS scenes over the Nine Mile Point Nuclear Power Station in Oswego (New York, USA). The first step in the process was the construction of land cover maps from freely available, high temporal resolution, low spatial resolution MODIS imagery using a time-series approach. We used the variability in the temporal signatures among different land cover classes for classification. The time series-specific features were defined by various physical properties of a pixel, such as variation in vegetation cover and water content over time. The pixels were classified into four land cover classes - forest, urban, water, and vegetation - using Euclidean and Mahalanobis distance metrics. On the other hand, a high spatial resolution commercial satellite, such as RapidEye, can be tasked to capture images over the affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes, obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. This fusion thus contributed to classification accuracy refinement, with a few additional advantages, such as correction for cloud cover and providing for an approach that is robust against point-in-time seasonal anomalies, due to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile-device applications. In this paper, we select the ORB (Oriented FAST and Rotated BRIEF) detector to extract local image structures; the descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate in our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
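The ORB-plus-RANSAC registration stage is readily sketched with OpenCV; the following is our baseline version (plain RANSAC rather than the paper's improved variant, grayscale uint8 inputs assumed):

```python
import cv2
import numpy as np

def register_exposure(ref_gray, mov_gray, n_features=1000):
    """Align one LDR exposure to a reference: ORB keypoints, Hamming-distance
    brute-force matching, then a RANSAC-estimated homography to reject
    incorrect matches before warping."""
    orb = cv2.ORB_create(n_features)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(mov_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # inlier threshold 3 px
    return cv2.warpPerspective(mov_gray, H, ref_gray.shape[::-1])
```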
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-08-13
Here, quasi-optical imaging at sub-THz frequencies has had a major impact on fusion plasma diagnostics. Mm-wave imaging reflectometry utilizes microwaves to actively probe fusion plasmas, inferring the local properties of electron density fluctuations. Electron cyclotron emission imaging is a multichannel radiometer that passively measures the spontaneous emission of microwaves from the plasma to infer local properties of electron temperature fluctuations. These imaging diagnostics work together to diagnose the characteristics of turbulence. Important quantities such as amplitude and wavenumber of coherent fluctuations, correlation lengths and decorrelation times of turbulence, and poloidal flow velocity of the plasma are readily inferred.
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
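For orientation, the reconstruction-based family that this paper generalizes can be reduced to a few lines: weights derived from patch similarity, then a weighted vote over atlas labels. The sketch below uses a common non-local-means style exponential weighting (our choice of kernel, not this paper's matrix-completion formulation):

```python
import numpy as np

def reconstruction_label_fusion(target_patch, atlas_patches, atlas_labels, beta=0.05):
    """target_patch: (p,) flattened patch; atlas_patches: (n, p) candidate
    patches; atlas_labels: (n,) non-negative integer labels at the patch
    centers. Weights decay with squared patch distance, then vote."""
    d = ((atlas_patches - target_patch) ** 2).sum(axis=1)   # patch distances
    w = np.exp(-d / (beta * d.mean() + 1e-12))
    w /= w.sum()
    votes = np.bincount(atlas_labels, weights=w)
    return np.argmax(votes)                                 # fused label for this point
```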
Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment
NASA Astrophysics Data System (ADS)
David, S.; Visvikis, D.; Roux, C.; Hatt, M.
2011-09-01
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during treatment. Such measurements do not reflect overall variations in tumor volume and radiotracer uptake. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess variations in tumor metabolic volume and uptake. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the performance of the multi-observation fusion to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion; on clinical datasets, it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist in extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.
Wong, K K; Chondrogiannis, S; Bowles, H; Fuster, D; Sánchez, N; Rampin, L; Rubello, D
Nuclear medicine traditionally employs planar and single photon emission computed tomography (SPECT) imaging techniques to depict the biodistribution of radiotracers for the diagnostic investigation of a range of disorders of endocrine gland function. The usefulness of combining functional information with anatomy derived from computed tomography (CT), magnetic resonance imaging (MRI), and high resolution ultrasound (US) has long been appreciated, either using visual side-by-side correlation or software-based co-registration. The emergence of hybrid SPECT/CT camera technology now allows the simultaneous acquisition of combined multi-modality imaging, with seamless fusion of 3D volume datasets. Thus, it is not surprising that there is a growing literature describing the many advantages that contemporary SPECT/CT technology brings to the radionuclide investigation of endocrine disorders, showing potential advantages for the pre-operative localization of parathyroid adenomas using a minimally invasive surgical approach, especially in the presence of ectopic glands and in multiglandular disease. In conclusion, hybrid SPECT/CT imaging has become an essential tool to ensure the most accurate diagnosis in the management of patients with hyperparathyroidism.
Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning.
Dong, Pei; Guo, Yangrong; Gao, Yue; Liang, Peipeng; Shi, Yonghong; Wang, Qian; Shen, Dinggang; Wu, Guorong
2016-10-01
Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in various neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. Hence, the ambiguity of patch-wise similarity makes it difficult for the recently successful multi-atlas patch-based label fusion methods to perform as competitively as they do in segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Specifically, we achieve this goal in three folds. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches and the benefit of harnessing population priors from the multi-atlas based framework. Second, besides using low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationship. Since the context features are calculated on a tentatively estimated label probability map, we eventually turn our hyper-graph learning based label propagation into a deep and self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on other voxels (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to the nearby difficult-to-label voxels. Such a hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate our proposed label fusion method in segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over the state-of-the-art label fusion methods.
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits some peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified by satellite imagery.
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.
2015-12-01
A measurement system based on the GEM (Gas Electron Multiplier) detector is being developed for X-ray diagnostics of magnetic confinement fusion plasmas. The Triple Gas Electron Multiplier (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector. The paper focuses on the measurement task and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful for physicists; it thus covers the software part of the project, between the electronic hardware and the physics applications. The project is original and was developed by the authors. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition, determined by hardware and software processing, are introduced. Typical measurement issues are discussed with a view to enhancing data quality. The primary version, based on a 1-D GEM detector, was applied to the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures, initially for investigation purposes. Two detector structures, with single-pixel sensors and multi-pixel (directional) sensors, are considered for two-dimensional X-ray imaging. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference source and tokamak plasma are demonstrated.
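The histogramming itself is conceptually simple once events have been reconstructed; as a purely illustrative sketch (our hypothetical event format: one row per charge cluster with a position coordinate and a total charge proportional to photon energy):

```python
import numpy as np

def cluster_histograms(events, n_pos=256, n_energy=512):
    """Build position, energy, and joint position-energy histograms from an
    (n, 2) array of reconstructed clusters: column 0 = position, 1 = charge."""
    pos, charge = events[:, 0], events[:, 1]
    h_pos, _ = np.histogram(pos, bins=n_pos)
    h_energy, _ = np.histogram(charge, bins=n_energy)
    h2d, _, _ = np.histogram2d(pos, charge, bins=(n_pos, n_energy))
    return h_pos, h_energy, h2d
```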
NASA Astrophysics Data System (ADS)
Câmara, F.; Oliveira, J.; Hormigo, T.; Araújo, J.; Ribeiro, R.; Falcão, A.; Gomes, M.; Dubois-Matra, O.; Vijendran, S.
2015-06-01
This paper discusses the design and evaluation of data fusion strategies to perform tiered fusion of several heterogeneous sensors and a priori data. The aim is to increase robustness and performance of hazard detection and avoidance systems, while enabling safe planetary and small body landings anytime, anywhere. The focus is on Mars and asteroid landing mission scenarios and three distinct data fusion algorithms are introduced and compared. The first algorithm consists of a hybrid camera-LIDAR hazard detection and avoidance system, the H2DAS, in which data fusion is performed at both sensor-level data (reconstruction of the point cloud obtained with a scanning LIDAR using the navigation motion states and correcting the image for motion compensation using IMU data), feature-level data (concatenation of multiple digital elevation maps, obtained from consecutive LIDAR images, to achieve higher accuracy and resolution maps while enabling relative positioning) as well as decision-level data (fusing hazard maps from multiple sensors onto a single image space, with a single grid orientation and spacing). The second method presented is a hybrid reasoning fusion, the HRF, in which innovative algorithms replace the decision-level functions of the previous method, by combining three different reasoning engines—a fuzzy reasoning engine, a probabilistic reasoning engine and an evidential reasoning engine—to produce safety maps. Finally, the third method presented is called Intelligent Planetary Site Selection, the IPSIS, an innovative multi-criteria, dynamic decision-level data fusion algorithm that takes into account historical information for the selection of landing sites and a piloting function with a non-exhaustive landing site search capability, i.e., capable of finding local optima by searching a reduced set of global maps. All the discussed data fusion strategies and algorithms have been integrated, verified and validated in a closed-loop simulation environment. Monte Carlo simulation campaigns were performed for the algorithms performance assessment and benchmarking. The simulations results comprise the landing phases of Mars and Phobos landing mission scenarios.
Dynamic image fusion and general observer preference
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Doe, Joshua M.
2010-04-01
Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.
Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K
2017-05-01
In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.
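The final combination step described above (wavelet fusion of the upsampled orthogonal scans) can be sketched with PyWavelets, assuming a max-absolute rule for detail coefficients and averaging for the approximation band; these rule choices are illustrative, not necessarily the authors' exact scheme.

```python
import numpy as np
import pywt

def wavelet_fuse(slice_a, slice_b, wavelet="db4", level=2):
    """Fuse two co-registered upsampled slices: average the coarse
    approximation band, keep the larger-magnitude detail coefficients
    (max-abs rule) so each scan contributes its sharper in-plane detail."""
    ca = pywt.wavedec2(slice_a, wavelet, level=level)
    cb = pywt.wavedec2(slice_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    # waverec2 may pad by one pixel for odd sizes; crop to the input shape.
    out = pywt.waverec2(fused, wavelet)
    return out[:slice_a.shape[0], :slice_a.shape[1]]
```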
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been designed and applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result, visualizing fundus or oxygen saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first contribution is an automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features, identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second contribution is a heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progression, and pinpoint surgical targets. The new algorithm can be readily extended to 3D registration and fusion of human or animal eye, brain, or body images.
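The Mutual-Pixel-Count objective is not given a formula in the abstract; a plausible reading is the count of vessel pixels that coincide after warping one modality onto the other. A hedged numpy sketch, with the affine-matrix representation assumed:

```python
import numpy as np

def mutual_pixel_count(mask_ref, mask_mov, affine):
    """Mutual-Pixel-Count (as assumed here): number of vessel pixels of
    the moving mask that land on vessel pixels of the reference mask
    after warping with a 3x3 affine matrix in homogeneous coordinates."""
    ys, xs = np.nonzero(mask_mov)
    pts = affine @ np.vstack([xs, ys, np.ones_like(xs)])
    xw = np.rint(pts[0]).astype(int)
    yw = np.rint(pts[1]).astype(int)
    ok = ((xw >= 0) & (xw < mask_ref.shape[1]) &
          (yw >= 0) & (yw < mask_ref.shape[0]))
    return int(mask_ref[yw[ok], xw[ok]].sum())

# Refinement loop (sketch): perturb the control-point-derived affine in
# small sub-pixel steps and keep any perturbation that raises the count.
```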
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R
2017-11-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, the multi-frame superresolution processing of these sets at each depth layer reconstructs a higher resolution and quality lateral image. Layer-by-layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can jointly address the diffraction limit, lateral scan density, and background noise. In experiments, imaging a known resolution test target confirmed a ~3x improvement in lateral resolution, reaching 7.81 µm and 2.19 µm with sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues.
An integrated approach for updating cadastral maps in Pakistan using satellite remote sensing data
NASA Astrophysics Data System (ADS)
Ali, Zahir; Tuladhar, Arbind; Zevenbergen, Jaap
2012-08-01
Updating cadastral information is crucial for recording land ownership and property division changes in a timely manner. In most cases, the existing cadastral maps do not provide up-to-date information on land parcel boundaries. Such a situation demands that all the cadastral data and parcel boundary information in these maps be updated promptly. The existing techniques for acquiring cadastral information are discipline-oriented, rooted in geodesy, surveying, and photogrammetry, and they require substantial manpower, time, and cost when carried out separately. There is a need to integrate these techniques so that existing cadastral data can be updated and cadastral maps (re)produced efficiently. To reduce the time and cost of cadastral data acquisition, this study develops an integrated approach combining global positioning system (GPS) data, remote sensing (RS) imagery, and existing cadastral maps. For this purpose, the panchromatic image with 0.6 m spatial resolution and the corresponding multi-spectral image with 2.4 m spatial resolution and 3 spectral bands from the QuickBird satellite were used. A digital elevation model (DEM) was extracted from SPOT-5 stereopairs, and ground control points (GCPs) were used for ortho-rectifying the QuickBird images. After ortho-rectifying these images and registering the multi-spectral image to the panchromatic image, the two were fused to obtain good quality multi-spectral images of the two study areas with 0.6 m spatial resolution. Cadastral parcel boundaries were then identified on the QuickBird images via visual interpretation using a participatory-GIS (PGIS) technique. The study regions are urban and rural areas of the Peshawar and Swabi districts in the Khyber Pakhtunkhwa province of Pakistan. The result is updated cadastral maps rich in cadastral information, which can be used to update the existing cadastral data with less time and cost.
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or frequencies and produces a fused false-color image with higher information content than either of the originals, in which objects are easy to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generate the fused false-color image. We use a hybrid averaging and selection fusion method to obtain the fused gray-level image, which provides better detail than the two originals while reducing noise. However, the fused gray-level image cannot contain all the detail present in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a fused color image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels respectively, producing the final fused color image. This method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as that of the input images.
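The three steps map to code directly. A sketch under assumptions: the "hybrid averaging and selection" rule is approximated by averaging where the sources agree and selecting the higher-detail pixel elsewhere, and generalized high-boost filtering is taken as base plus amplified high-pass; the paper's exact rules and channel assignment may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_boost(base, detail_src, sigma=2.0, k=1.5):
    """Generalized high-boost: base image plus amplified high-pass detail."""
    return base + k * (detail_src - gaussian_filter(detail_src, sigma))

def false_color_fuse(img1, img2, sigma=2.0):
    """Three-step fusion: gray fusion, high-boost images, RGB mapping."""
    img1, img2 = img1.astype(float), img2.astype(float)
    # Step 1: hybrid averaging/selection -- average where the sources
    # agree, keep the pixel with stronger local detail where they differ.
    mean = (img1 + img2) / 2.0
    d1 = np.abs(img1 - gaussian_filter(img1, sigma))
    d2 = np.abs(img2 - gaussian_filter(img2, sigma))
    agree = np.abs(img1 - img2) < 0.1 * (img1.max() - img1.min() + 1e-9)
    fused = np.where(agree, mean, np.where(d1 >= d2, img1, img2))
    # Steps 2-3: high-boost images between the fused gray image and each
    # source drive the R and B channels; the fused gray drives G.
    rgb = np.stack([high_boost(fused, img1, sigma),
                    fused,
                    high_boost(fused, img2, sigma)], axis=-1)
    return (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-9)
```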
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumors using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation, and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentations are obtained than with conventional multi-channel segmentation algorithms.
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
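The per-level fusion rule is simple to sketch: represent each input by gradient maps at each pyramid level and keep, at every position, the gradient with the larger magnitude. The reconstruction back through the QMF-derived filters is omitted here; only the selection rule is shown, as an assumption-level sketch.

```python
import numpy as np

def fuse_gradient_maps(gx_list, gy_list):
    """Combine per-image gradient maps at one resolution level: at each
    pixel keep the (gx, gy) pair with the largest magnitude, preserving
    the strongest local feature among all inputs."""
    gx = np.stack(gx_list)               # (n_images, H, W)
    gy = np.stack(gy_list)
    mag = np.hypot(gx, gy)
    best = mag.argmax(axis=0)            # winning image index per pixel
    rows, cols = np.indices(best.shape)
    return gx[best, rows, cols], gy[best, rows, cols]
```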
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming when the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors are first derived from the high-frame-rate, high-resolution imagery and then used as the basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion: imagery is preprocessed to varying resolution scales, and new flow vector estimates are initialized using those from the previous coarser-resolution image. Overall performance of the processing chain is demonstrated using sample data involving complex object motion observed by multiple sensors mounted on the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
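The temporal-upsampling step can be sketched with OpenCV: dense Farneback flow (itself pyramid-based) computed on the fast, high-resolution guide sensor, then used to warp the slow sensor's neighbouring frames to an intermediate time. This assumes both sensors are already resampled to a common pixel grid and the guide frames are 8-bit grayscale; the blending is a linear approximation, not the report's exact method.

```python
import cv2
import numpy as np

def interpolate_frame(prev_lr, next_lr, guide_prev, guide_next, t=0.5):
    """Synthesize the slow sensor's frame at fractional time t by warping
    its two neighbouring frames along flow from the fast guide sensor."""
    flow = cv2.calcOpticalFlowFarneback(
        guide_prev, guide_next, None,
        pyr_scale=0.5, levels=4, winsize=21,   # multi-scale image pyramid
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)
    h, w = guide_prev.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Warp both neighbours toward time t, then blend linearly.
    mx_p = (xs + t * flow[..., 0]).astype(np.float32)
    my_p = (ys + t * flow[..., 1]).astype(np.float32)
    mx_n = (xs - (1 - t) * flow[..., 0]).astype(np.float32)
    my_n = (ys - (1 - t) * flow[..., 1]).astype(np.float32)
    f_prev = cv2.remap(prev_lr, mx_p, my_p, cv2.INTER_LINEAR)
    f_next = cv2.remap(next_lr, mx_n, my_n, cv2.INTER_LINEAR)
    return (1 - t) * f_prev + t * f_next
```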
Data processing and analysis for 2D imaging GEM detector system
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Kolasinski, P.; Linczuk, M.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.
2014-11-01
The Triple Gas Electron Multiplier (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector for high-resolution X-ray diagnostics of magnetic confinement fusion plasmas [1]. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition are introduced, depending on the division of processing between hardware and software components. Typical measurement issues are discussed with a view to enhancing data quality. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference X-ray source and tokamak plasma are demonstrated.
Guffei, Amanda; Sarkar, Rahul; Klewes, Ludger; Righolt, Christiaan; Knecht, Hans; Mai, Sabine
2010-12-01
Hodgkin's lymphoma is characterized by the presence of mono-nucleated Hodgkin cells and bi- to multi-nucleated Reed-Sternberg cells. We have recently shown telomere dysfunction and aberrant synchronous/asynchronous cell divisions during the transition of Hodgkin cells to Reed-Sternberg cells.1 To determine whether overall changes in nuclear architecture affect genomic instability during the transition of Hodgkin cells to Reed-Sternberg cells, we investigated the nuclear organization of chromosomes in these cells. Three-dimensional fluorescent in situ hybridization revealed irregular nuclear positioning of individual chromosomes in Hodgkin cells and, more so, in Reed-Sternberg cells. We characterized an increasingly unequal distribution of chromosomes as mono-nucleated cells became multi-nucleated cells, some of which also contained chromosome-poor 'ghost' cell nuclei. Measurements of nuclear chromosome positions suggested chromosome overlaps in both types of cells. Spectral karyotyping then revealed both aneuploidy and complex chromosomal rearrangements: multiple breakage-bridge-fusion cycles were at the origin of the multiple rearranged chromosomes. This conclusion was challenged by super resolution three-dimensional structured illumination imaging of Hodgkin and Reed-Sternberg nuclei. Three-dimensional super resolution microscopy data documented inter-nuclear DNA bridges in multi-nucleated cells but not in mono-nucleated cells. These bridges consisted of chromatids and chromosomes shared by two Reed-Sternberg nuclei. The complexity of chromosomal rearrangements increased as Hodgkin cells developed into multi-nucleated cells, thus indicating tumor progression and evolution in Hodgkin's lymphoma, with Reed-Sternberg cells representing the highest complexity in chromosomal rearrangements in this disease. This is the first study to demonstrate nuclear remodeling and associated genomic instability leading to the generation of Reed-Sternberg cells of Hodgkin's lymphoma. We defined nuclear remodeling as a key feature of Hodgkin's lymphoma, highlighting the relevance of nuclear architecture in cancer.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
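One stage of such a user-configurable sensor chain might look as follows; the Gaussian MTF, Poisson shot noise, additive read noise, and hard saturation are generic illustrative choices, not DIRSIG's or the LLL model's actual formulation.

```python
import numpy as np

def sensor_stage(signal, psf_sigma=1.5, gain=50.0, read_noise=5.0,
                 full_well=4095.0, seed=0):
    """One stage of a multi-stage sensor chain: a Gaussian MTF applied in
    the frequency domain (MTF(f) = exp(-2 pi^2 sigma^2 f^2) for a Gaussian
    PSF of width sigma pixels), Poisson photon noise, additive read
    noise, and hard clipping to emulate saturation."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(signal.shape[0])[:, None]
    fx = np.fft.fftfreq(signal.shape[1])[None, :]
    mtf = np.exp(-2.0 * (np.pi * psf_sigma) ** 2 * (fx ** 2 + fy ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(signal) * mtf))
    electrons = rng.poisson(np.clip(blurred * gain, 0, None)).astype(float)
    electrons += rng.normal(0.0, read_noise, electrons.shape)
    return np.clip(electrons, 0.0, full_well)  # saturation / clipping
```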
NASA Astrophysics Data System (ADS)
Vrieling, Anton; Skidmore, Andrew K.; Wang, Tiejun; Meroni, Michele; Ens, Bruno J.; Oosterbeek, Kees; O'Connor, Brian; Darvishzadeh, Roshanak; Heurich, Marco; Shepherd, Anita; Paganini, Marc
2017-07-01
Vegetation indices derived from satellite image time series have been extensively used to estimate the timing of phenological events like season onset. Medium spatial resolution (≥250 m) satellite sensors with daily revisit capability are typically employed for this purpose. In recent years, phenology has increasingly been retrieved at higher resolution (≤30 m) in response to the increasing availability of high-resolution satellite data. To overcome the reduced acquisition frequency of such data, previous attempts involved fusion between high- and medium-resolution data, or combinations of multi-year acquisitions in a single phenological reconstruction. The objectives of this study are to demonstrate that phenological parameters can now be retrieved from single-season high-resolution time series, and to compare these retrievals against those derived from multi-year high-resolution and single-season medium-resolution satellite data. The study focuses on the island of Schiermonnikoog, the Netherlands, which comprises a highly-dynamic saltmarsh, dune vegetation, and agricultural land. Combining NDVI series derived from atmospherically-corrected images from RapidEye (5 m-resolution) and the SPOT5 Take5 experiment (10 m-resolution) acquired between March and August 2015, phenological parameters were estimated using a function fitting approach. We then compared results with phenology retrieved from four years of 30 m Landsat 8 OLI data, and single-year 100 m Proba-V and 250 m MODIS temporal composites of the same period. Retrieved phenological parameters from combined RapidEye/SPOT5 displayed spatially consistent results and a large spatial variability, providing complementary information to existing vegetation community maps. Retrievals that combined four years of Landsat observations into a single synthetic year were affected by the inclusion of years with warmer spring temperatures, whereas adjustment of the average phenology to 2015 observations was only feasible for a few pixels due to cloud cover around phenological transition dates. The Proba-V and MODIS phenology retrievals scaled poorly relative to their high-resolution equivalents, indicating that medium-resolution phenology retrievals need to be interpreted with care, particularly in landscapes with fine-scale land cover variability.
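The "function fitting approach" is unspecified here; a double logistic curve is a common choice for NDVI phenology, and season onset can be read off its green-up parameter. A sketch with scipy, all parameter names illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, vmin, vamp, sos, k1, eos, k2):
    """Double-logistic NDVI model: green-up centred at sos and senescence
    at eos (day of year); k1 and k2 set the transition steepness."""
    return vmin + vamp * (1.0 / (1.0 + np.exp(-k1 * (t - sos)))
                          - 1.0 / (1.0 + np.exp(-k2 * (t - eos))))

def fit_phenology(doy, ndvi):
    """Fit the model to one pixel's NDVI series; returns named parameters
    (sos is a common 'season onset' readout)."""
    p0 = [float(ndvi.min()), float(ndvi.max() - ndvi.min()),
          120.0, 0.1, 270.0, 0.1]
    popt, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
    return dict(zip(["vmin", "vamp", "sos", "k1", "eos", "k2"], popt))
```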
A flexible spatiotemporal method for fusing satellite images with different resolutions
Xiaolin Zhu; Eileen H. Helmer; Feng Gao; Desheng Liu; Jin Chen; Michael A. Lefsky
2016-01-01
Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta Fusion (FSDAF) method, to generate synthesized frequent high spatial...
Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion
NASA Astrophysics Data System (ADS)
Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas
2014-12-01
The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution can be achieved. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
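A minimal sketch of the pan-sharpening idea applied here: inject the high-frequency content of the high-resolution SEM image into an upsampled SIMS channel. The HPF-style injection below is one generic variant, not necessarily the authors' exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pan_sharpen_sims(sims_lowres, sem_highres, sigma=2.0, gain=1.0):
    """High-pass-filter pan-sharpening: upsample the SIMS channel to the
    SEM grid, then inject the SEM's high-frequency detail modulated by
    the local SIMS/SEM brightness ratio to preserve chemical contrast."""
    sem = sem_highres.astype(float)
    scale = (sem.shape[0] / sims_lowres.shape[0],
             sem.shape[1] / sims_lowres.shape[1])
    sims_up = zoom(sims_lowres.astype(float), scale, order=3)
    sims_up = sims_up[:sem.shape[0], :sem.shape[1]]  # guard rounding
    sem_low = gaussian_filter(sem, sigma)
    detail = sem - sem_low
    ratio = sims_up / (sem_low + 1e-9)
    return np.clip(sims_up + gain * ratio * detail, 0.0, None)
```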
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
Trend analysis of the aerosol optical depth from fusion of MISR and MODIS retrievals over China
NASA Astrophysics Data System (ADS)
Guo, Jing; Gu, Xingfa; Yu, Tao; Cheng, Tianhai; Chen, Hao
2014-03-01
Atmospheric aerosol plays an important role in climate change through direct and indirect processes. In order to evaluate the effects of aerosols on climate, it is necessary to study their spatial and temporal distributions. Satellite aerosol remote sensing is a developing technology that can provide good temporal sampling and superior spatial coverage for studying aerosols. The Moderate Resolution Imaging Spectroradiometer (MODIS) and Multi-angle Imaging Spectroradiometer (MISR) have provided aerosol observations since 2000, with large coverage and high accuracy. However, due to the complex surface, cloud contamination, and the aerosol models used in the retrieval process, uncertainties still exist in current satellite aerosol products, and there are observed differences between the MISR and MODIS AOD data and the AERONET AOD. Combining multiple sensors can reduce uncertainties and improve observational accuracy. The validation results reveal a better agreement between the fused AOD and AERONET AOD, confirming that the fused AOD values are more accurate than those from a single sensor. We analyzed trends in aerosol properties over China based on nine years (2002-2010) of fused data. For trend analysis in the Jingjintang and Yangtze River Delta regions, the accuracy increased by 5% and 3%, respectively. An increasing AOD trend is evident in the Yangtze River Delta, where human activities may be the main source of the increase.
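The abstract does not spell out its fusion rule; inverse-variance weighting of the two retrievals on a common grid is one standard scheme and serves as a sketch (the variance fields and fallback behaviour are assumptions):

```python
import numpy as np

def fuse_aod(aod_misr, aod_modis, var_misr, var_modis):
    """Inverse-variance weighted fusion of two gridded AOD retrievals;
    where only one sensor has a valid (non-NaN) value, that value is used."""
    w1, w2 = 1.0 / var_misr, 1.0 / var_modis
    fused = (w1 * aod_misr + w2 * aod_modis) / (w1 + w2)
    fused = np.where(np.isnan(aod_misr), aod_modis, fused)
    fused = np.where(np.isnan(aod_modis),
                     np.where(np.isnan(aod_misr), np.nan, aod_misr),
                     fused)
    return fused
```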
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2017-03-01
Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.
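The core of the described fusion is replacing the luminance of the low-resolution color image with detail from the high-resolution grayscale reconstruction while keeping the calibrated chrominance. A sketch with PyWavelets and a YCrCb split; the specific wavelet, decomposition level, and fusion rule are assumptions, not the published DCFM recipe.

```python
import numpy as np
import pywt
import cv2

def digital_color_fusion(gray_highres, color_lowres):
    """Fuse a high-resolution grayscale image with a low-resolution
    color-calibrated image: wavelet-merge the luminance (details from the
    grayscale, approximation from the color image), keep the chrominance."""
    h, w = gray_highres.shape
    color_up = cv2.resize(color_lowres, (w, h), interpolation=cv2.INTER_CUBIC)
    ycrcb = cv2.cvtColor(color_up, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cy = pywt.wavedec2(ycrcb[..., 0], "db4", level=3)
    cg = pywt.wavedec2(gray_highres.astype(np.float32), "db4", level=3)
    fused = [cy[0]] + cg[1:]      # color approximation + grayscale details
    y_fused = pywt.waverec2(fused, "db4")[:h, :w]
    ycrcb[..., 0] = np.clip(y_fused, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```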
NASA Astrophysics Data System (ADS)
Armstrong, Roy A.; Singh, Hanumant
2006-09-01
Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the required information for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image based navigation, and multi-sensor fusion of acoustic and optical data.
Graphene metamaterial spatial light modulator for infrared single pixel imaging.
Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J
2017-10-16
High-resolution and hyperspectral imaging has long been a goal for multi-dimensional data-fusion sensing applications, of interest for autonomous vehicles and environmental monitoring. In the long-wave infrared regime this quest has been impeded by size, weight, power, and cost issues, especially as focal-plane array detector sizes increase. Here we propose and experimentally demonstrate a new approach based on a metamaterial graphene spatial light modulator (GSLM) for infrared single pixel imaging. A frequency-division multiplexing (FDM) imaging technique is designed and implemented, relying entirely on the electronic reconfigurability of the GSLM. We compare our approach to the more common raster-scan method and directly show that FDM image frame rates can be 64 times faster with no degradation of image quality. Our device and related imaging architecture are not restricted to the infrared regime and may be scaled to other bands of the electromagnetic spectrum. The study presented here opens a new approach to fast and efficient single pixel imaging utilizing graphene metamaterials with novel acquisition strategies.
Neutron imaging with bubble chambers for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Ghilea, Marian C.
One of the main methods to obtain energy from controlled thermonuclear fusion is inertial confinement fusion (ICF), a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet that contains deuterium and tritium, relying on the inertia of the fuel mass to provide confinement. In inertial confinement fusion experiments, it is important to distinguish failure mechanisms of the imploding capsule and unambiguously diagnose compression and hot spot formation in the fuel. Neutron imaging provides such a technique, and bubble chambers are capable of generating higher resolution images than other types of neutron detectors. This thesis explores the use of a liquid bubble chamber to record high yield 14.1 MeV neutrons resulting from deuterium-tritium fusion reactions in ICF experiments. A design tool to deconvolve and reconstruct penumbral and pinhole neutron images was created, using an original ray tracing concept to simulate the neutron images. The design tool proved that misalignment and aperture fabrication errors can significantly decrease the resolution of the reconstructed neutron image. A theoretical model was developed to describe the mechanism of bubble formation. A bubble chamber for neutron imaging, with Freon 115 as the active medium, was designed and implemented for the OMEGA laser system. High neutron yields resulting from deuterium-tritium capsule implosions were recorded. The bubble density was too low for neutron imaging on OMEGA but agreed with the model of bubble formation. The research presented here shows that bubble detectors are a promising technology for the higher neutron yields expected at the National Ignition Facility (NIF).
Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Xia-zhu; Xu, Ya-wei
2017-11-01
On the basis of dual-tree complex wavelet transform (DT-CWT) theory, an approach based on a multi-objective particle swarm optimization algorithm (MOPSO) is proposed to choose the fusion weights of the low frequency sub-bands objectively. High and low frequency sub-bands are produced by the DT-CWT. The absolute value of the coefficients is adopted as the fusion rule for the high frequency sub-bands. The fusion weights of the low frequency sub-bands serve as the particles in MOPSO, with Spatial Frequency and Average Gradient adopted as the two fitness functions. The experimental results show that the proposed approach performs better than average fusion and fusion methods based on local variance and local energy in brightness, clarity, and quantitative evaluation, which includes Entropy, Spatial Frequency, Average Gradient, and QAB/F.
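The two fitness functions are standard no-reference sharpness metrics and are easy to state exactly; a short numpy version (definitions as commonly published, assumed to match the paper's):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial Frequency: RMS of row-wise and column-wise first differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    """Average Gradient: mean magnitude of the local intensity gradient."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```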
Volume curtaining: a focus+context effect for multimodal volume visualization
NASA Astrophysics Data System (ADS)
Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross
2014-03-01
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu
2014-03-20
A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.
Assessment of spatiotemporal fusion algorithms for Planet and Worldview images
USDA-ARS's Scientific Manuscript database
Although Worldview (WV) images (non-pansharpened) have 2-meter resolution, the re-visit times for the same areas may be 7 days or more. In contrast, Planet images using small satellites can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It will be ideal to f...
Model-Based Building Detection from Low-Cost Optical Sensors Onboard Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Karantzalos, K.; Koutsourakis, P.; Kalisperakis, I.; Grammatikopoulos, L.
2015-08-01
The automated and cost-effective detection of buildings in ultra high spatial resolution imagery is of major importance for various engineering and smart city applications. To this end, in this paper, a model-based building detection technique has been developed that is able to extract and reconstruct buildings from UAV aerial imagery and low-cost imaging sensors. In particular, the developed approach computes, through advanced structure from motion, bundle adjustment and dense image matching, a DSM and a true orthomosaic from the numerous GoPro images, which are characterised by significant geometric distortions and fish-eye effects. An unsupervised multi-region graph-cut segmentation and a rule-based classification deliver the initial multi-class classification map. The DTM is then calculated based on inpainting and a mathematical morphology process. A data fusion process between the buildings detected from the DSM/DTM and the classification map feeds a grammar-based building reconstruction, and scene buildings are extracted and reconstructed. Preliminary experimental results appear quite promising, with the quantitative evaluation indicating object-level detection rates of 88% for correctness and above 75% for completeness.
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from the training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
Instrumentation for simultaneous kinetic imaging of multiple fluorophores in single living cells
NASA Astrophysics Data System (ADS)
Morris, Stephen J.; Beatty, Diane M.; Welling, Larry W.; Wiegmann, Thomas B.
1991-05-01
Low-light fluorescence video microscopy has established itself as an excellent method for investigating cell dynamics. There is growing interest in resolving multiple images of 'ratio' fluorophores like indo or BCECF, or the emission from multiple dyes placed in the same cell system. For rapid kinetic studies, the problems of photodynamic damage and photobleaching on one hand, and the need for good spatial and temporal resolution on the other, push the resolution limits of the instrumentation. Rapid resolution of multiple probes at multiple wavelengths presents a third set of problems: exciting the probes and appropriately imaging the emitted light. The authors have designed a new real-time low-light fluorescence video microscope for capturing intensified images of up to four dyes contained in the same cell system. These can be two dual-emission-wavelength 'ratio' dyes or multiple dyes. The optics allow simultaneous excitation of up to four fluorophores and the real-time (30 frames/second) capture of four separate fluorescence emission images. Each emission wavelength is imaged simultaneously by one of four cameras, then digitized and appropriately combined at standard video frame rates to be stored at high resolution on tape or video disk for further off-line correction and analysis. The design has no moving parts in its optical train, which overcomes a number of technical difficulties encountered in filter wheel or mechanical shutter designs for multiple imaging. The instrument can be assembled from off-the-shelf components. Coupled to compatible image processing software utilizing PC-AT computers, it can be realized at relatively low cost. Two examples of simultaneous multi-parameter imaging are presented. Synchronous observations of calcium and pH distribution in kidney epithelial cells, loaded with both indo-1 and SNARF-1, show that both are altered in response to ionomycin treatment; however, the kinetics of the two changes are quite different. Intracellular calcium increases rapidly when the bath Ca2+ is raised, while the pH remains stable for several seconds and then suddenly collapses. The second example concerns fusion of human red blood cells (RBC) to fibroblasts expressing influenza hemagglutinin. Movement of soluble and membrane-bound dyes follows different kinetics, depending upon the molecular weight of the soluble dye. Furthermore, the swelling of the RBC occurs after the onset of fusion and therefore cannot provide the driving force.
Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R
2016-01-01
The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
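The VSM-weighted base-layer rule replacing plain averaging can be sketched compactly: pixel saliency is the histogram-weighted sum of absolute intensity differences to all other pixels, and the normalized saliency maps set the fusion weight. This sketch assumes an 8-bit intensity range and omits the WLS detail fusion.

```python
import numpy as np

def visual_saliency_map(img):
    """Histogram-based VSM: the saliency of intensity level v is the sum
    of |v - u| weighted by how often u occurs; O(256^2) via the histogram."""
    img8 = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(img8.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    sal_of_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_of_level[img8]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def fuse_base_layers(base_ir, base_vis):
    """Saliency-driven weighted base-layer fusion instead of averaging."""
    s1, s2 = visual_saliency_map(base_ir), visual_saliency_map(base_vis)
    w = 0.5 + 0.5 * (s1 - s2)   # w in [0,1]: favors the more salient input
    return w * base_ir + (1.0 - w) * base_vis
```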
Identifying Corresponding Patches in SAR and Optical Images With a Pseudo-Siamese CNN
NASA Astrophysics Data System (ADS)
Hughes, Lloyd H.; Schmitt, Michael; Mou, Lichao; Wang, Yuanyuan; Zhu, Xiao Xiang
2018-05-01
In this letter, we propose a pseudo-siamese convolutional neural network (CNN) architecture for identifying corresponding patches in very-high-resolution (VHR) optical and synthetic aperture radar (SAR) remote sensing imagery. Using eight convolutional layers in each of two parallel network streams, a fully connected layer for the fusion of the features learned in each stream, and a loss function based on binary cross-entropy, we achieve a binary indication of whether two patches correspond. The network is trained and tested on an automatically generated dataset based on a deterministic alignment of SAR and optical imagery via previously reconstructed and subsequently co-registered 3D point clouds. The satellite images from which the patches comprising our dataset are extracted show a complex urban scene containing many elevated objects (i.e., buildings), thus providing one of the most difficult experimental environments. The achieved results show that the network is able to predict corresponding patches with high accuracy, indicating great potential for further development towards a generalized multi-sensor key-point matching procedure. Index Terms: synthetic aperture radar (SAR), optical imagery, data fusion, deep learning, convolutional neural networks (CNN), image matching, deep matching.
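The described architecture translates almost line-for-line into PyTorch; the sketch below fixes illustrative channel widths, pooling positions, and a 64-pixel patch size, none of which are specified in the abstract.

```python
import torch
import torch.nn as nn

def make_stream():
    """Eight 3x3 conv layers; spatial downsampling after every second one."""
    chs = [1, 32, 32, 64, 64, 128, 128, 256, 256]
    layers = []
    for i in range(8):
        layers += [nn.Conv2d(chs[i], chs[i + 1], 3, padding=1),
                   nn.ReLU(inplace=True)]
        if i % 2 == 1:
            layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class PseudoSiamese(nn.Module):
    """Unshared SAR and optical streams, feature concatenation, and a
    binary correspondence head (train with nn.BCEWithLogitsLoss)."""
    def __init__(self, patch=64):
        super().__init__()
        self.sar = make_stream()   # pseudo-siamese: weights NOT shared
        self.opt = make_stream()
        feat = 256 * (patch // 16) ** 2   # four 2x poolings
        self.head = nn.Sequential(
            nn.Linear(2 * feat, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 1))

    def forward(self, sar_patch, opt_patch):
        f = torch.cat([self.sar(sar_patch).flatten(1),
                       self.opt(opt_patch).flatten(1)], dim=1)
        return self.head(f)

# Example: logits = PseudoSiamese()(torch.rand(8, 1, 64, 64),
#                                   torch.rand(8, 1, 64, 64))
```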
SENTINEL-1 and SENTINEL-2 Data Fusion for Wetlands Mapping: Balikdami, Turkey
NASA Astrophysics Data System (ADS)
Kaplan, G.; Avdan, U.
2018-04-01
Wetlands provide a number of environmental and socio-economic benefits, such as their ability to store floodwaters and improve water quality, their provision of habitats for wildlife and support for biodiversity, and their aesthetic value. Remote sensing technology has proven to be a useful and frequently applied tool for monitoring and mapping wetlands. Combining optical and microwave satellite data can help with mapping and monitoring the biophysical characteristics of wetlands and wetland vegetation, and fusing radar and optical remote sensing data can increase wetland classification accuracy. In this paper, data from the fine spatial resolution optical satellite Sentinel-2 and the synthetic aperture radar satellite Sentinel-1 were fused for mapping wetlands. Both Sentinel-1 and Sentinel-2 images were pre-processed, after which vegetation indices were calculated from the Sentinel-2 bands and included in the fusion data set. For the classification of the fused data, three different classification approaches were used and compared. The results showed significant improvement in wetland classification using both multispectral and microwave data. The presence of the red edge bands and the vegetation indices in the data set notably improved the discrimination between wetlands and other vegetated areas. The statistical results of the fusion of the optical and radar data showed high wetland mapping accuracy, with an overall classification accuracy of approximately 90% in the object-based classification method. For future research, we recommend using multi-temporal imagery, collecting terrain data, and comparing the method with traditional image fusion techniques.
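A minimal sketch of the fusion data set assembly: co-registered Sentinel-2 reflectance bands plus derived indices stacked with Sentinel-1 backscatter into a per-pixel feature cube for the classifiers. The band choices and the NDWI variant are assumptions, not the paper's exact set.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-9)

def build_fusion_stack(s2_bands, s1_vv, s1_vh):
    """Stack co-registered Sentinel-2 reflectance bands, vegetation
    indices, and Sentinel-1 backscatter (dB) into one feature cube.
    s2_bands: dict of 2D arrays keyed by Sentinel-2 band name."""
    red, nir = s2_bands["B04"], s2_bands["B08"]
    green, red_edge = s2_bands["B03"], s2_bands["B05"]
    features = [red, green, nir, red_edge,
                ndvi(nir, red),
                (green - nir) / (green + nir + 1e-9),  # NDWI (McFeeters)
                s1_vv, s1_vh,
                s1_vv - s1_vh]   # simple polarization contrast
    return np.stack(features, axis=-1)
```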
NASA Astrophysics Data System (ADS)
Xu, Saiping; Zhao, Qianjun; Yin, Kai; Cui, Bei; Zhang, Xiupeng
2016-10-01
Hollow villages are a special phenomenon in the process of urbanization in China, causing a waste of land resources; it is therefore urgent to carry out hollow village recognition and renovation. However, there is little research on the remote sensing identification of hollow villages. In this context, in order to recognize abandoned homesteads by remote sensing, the following experiment was carried out. Firstly, the Gram-Schmidt transform method was utilized to fuse the multi-spectral and panchromatic images of WorldView-2. The fused images were then edge-enhanced by high-pass filtering, and multi-resolution segmentation and spectral difference segmentation were carried out to obtain image objects. Secondly, spectral characteristic parameters were calculated, such as the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI) and the normalized difference soil index (NDSI), and shape feature parameters were extracted, such as area, length/width ratio and rectangular fit. Thirdly, the SEaTH algorithm was used to determine the thresholds and optimize the feature space. Furthermore, the threshold classification method and the random forest classifier were combined, and an appropriate number of samples was selected to train the classifier in order to determine the important feature parameters and the best classifier parameters for the classification. Finally, the classification results were verified by computing the confusion matrix. The classification results were spatially continuous, and the salt-and-pepper effect typical of pixel-based classification was effectively avoided. In addition, the results showed that the extracted abandoned homesteads had complete shapes and could be distinguished from confusing classes such as homesteads in use and roads.
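The SEaTH (SEparability and THresholds) step deserves a small worked sketch: for one feature and two classes assumed Gaussian, it computes the Jeffries-Matusita separability and places the threshold at the intersection of the two fitted densities. The code below is a generic rendering of that published idea, not the authors' implementation.

```python
import numpy as np

def seath(f1, f2):
    """Jeffries-Matusita separability (0 = inseparable, 2 = fully separable)
    and the optimal threshold between two feature samples f1, f2, each
    modelled as a 1-D Gaussian."""
    m1, s1 = f1.mean(), f1.std() + 1e-12
    m2, s2 = f2.mean(), f2.std() + 1e-12
    # Bhattacharyya distance between two 1-D Gaussians
    b = 0.125 * (m1 - m2) ** 2 / ((s1**2 + s2**2) / 2) \
        + 0.5 * np.log((s1**2 + s2**2) / (2 * s1 * s2))
    jm = 2 * (1 - np.exp(-b))
    # threshold where the two Gaussian densities intersect: quadratic in x
    a = 1 / s2**2 - 1 / s1**2
    bq = 2 * (m1 / s1**2 - m2 / s2**2)
    c = m2**2 / s2**2 - m1**2 / s1**2 + 2 * np.log(s2 / s1)
    roots = np.roots([a, bq, c]) if abs(a) > 1e-12 else [-c / bq]
    thr = min(roots, key=lambda r: abs(r - (m1 + m2) / 2))  # root between the means
    return jm, float(np.real(thr))
```

Features with high JM values are kept for the threshold classifier; the rest can be handed to the random forest, mirroring the combined scheme the abstract describes.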
Fusion of PET and MRI for Hybrid Imaging
NASA Astrophysics Data System (ADS)
Cho, Zang-Hee; Son, Young-Don; Kim, Young-Bo; Yoo, Seung-Schik
Recently, the development of fused PET-MRI systems has been actively studied to meet the increasing demand for integrated molecular and anatomical imaging. MRI can provide detailed anatomical information on the brain, such as the locations of gray and white matter, blood vessels, and axonal tracts, with high resolution, while PET can measure molecular and genetic information, such as glucose metabolism, neurotransmitter-neuroreceptor binding and affinity, protein-protein interactions, and gene trafficking among biological tissues. State-of-the-art MRI systems, such as 7.0 T whole-body MRI, can now visualize super-fine structures, including neuronal bundles in the pons, fine blood vessels (such as the lenticulostriate arteries) without invasive contrast agents, in vivo hippocampal substructures, and the substantia nigra, with excellent image contrast. High-resolution PET, known as the High-Resolution Research Tomograph (HRRT), is a brain-dedicated system capable of imaging minute changes of chemicals, such as neurotransmitters and neuroreceptors, with high spatial resolution and sensitivity. The synergistic power of the two, i.e., the ultra-high-resolution anatomical information offered by a 7.0 T MRI system combined with the high-sensitivity molecular information offered by HRRT-PET, will significantly elevate the level of our current understanding of the human brain, one of the most delicate, complex, and mysterious biological organs. This chapter introduces MRI, PET, and the fused PET-MRI system, and discusses its algorithms in detail.
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in the automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation of the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion rely on local patch similarity, probabilistic statistical frameworks, or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum-likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearance and the segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance feature, based on atlas labelmaps, that is used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and of the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
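The final fusion step, once per-voxel confidences exist, is a weighted vote. Here is a minimal sketch of that step alone; the paper's supervised confidence estimation is out of scope, so the confidences are assumed given, and the function name is hypothetical.

```python
import numpy as np

def confidence_weighted_fusion(labelmaps, confidences, n_labels):
    """Fuse warped atlas labelmaps into a consensus segmentation, weighting
    each atlas vote by its (pre-estimated) voxel-wise confidence.
    labelmaps:   (n_atlases, n_voxels) integer label arrays
    confidences: (n_atlases, n_voxels) floats in [0, 1]"""
    n_atlases, n_vox = labelmaps.shape
    votes = np.zeros((n_labels, n_vox))
    for a in range(n_atlases):
        # accumulate this atlas's confidence onto the label it proposes per voxel
        np.add.at(votes, (labelmaps[a], np.arange(n_vox)), confidences[a])
    return votes.argmax(axis=0)  # consensus label per voxel
```

Majority voting is the special case where every confidence equals one, which makes clear what the learned confidences buy: atlases that register poorly in a region simply vote less there.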
Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.
Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung
2018-04-01
In clinical applications, single-modality images do not provide sufficient diagnostic information, so it is necessary to combine the advantages or complementarities of different image modalities. Recently, neural network techniques have been applied to medical image fusion by many researchers, but deficiencies remain. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which comprises five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, using subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. We also trained the network using the EBPA and GSA individually; analysis of the same evaluation indexes reveals that the hybrid EBP-GSA not only outperforms both the EBPA and the GSA, but also trains the neural network more accurately. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
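To make the network's role concrete, here is a stripped-down forward pass of an RBF-style fusion network: two co-registered pixel values pass through a Gaussian (fuzzy-membership) hidden layer, and a linear output layer emits the fused pixel. This collapses the paper's five layers into their essential computation; the parameters, which the paper would train with EBP/GSA, are assumed given.

```python
import numpy as np

def fuzzy_rbf_fuse(x1, x2, centers, sigmas, weights):
    """Minimal RBF fusion forward pass.
    x1, x2:  co-registered images of equal shape
    centers: (n_hidden, 2) Gaussian unit centres in input space
    sigmas:  (n_hidden,)  Gaussian widths
    weights: (n_hidden,)  linear output weights"""
    x = np.stack([x1.ravel(), x2.ravel()], axis=1)          # (n_pixels, 2)
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hidden = np.exp(-d2 / (2 * sigmas ** 2))                # fuzzy activations
    fused = hidden @ weights                                # linear output layer
    return fused.reshape(x1.shape)
```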
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
NASA Technical Reports Server (NTRS)
Chirayath, Ved
2018-01-01
We present preliminary results from NASA NeMO-Net, the first neural multi-modal observation and training network for global coral reef assessment. NeMO-Net is an open-source deep convolutional neural network (CNN) and interactive active-learning training software, currently in development, that will assess the present and past dynamics of coral reef ecosystems. NeMO-Net exploits active learning and data fusion of mm-scale remotely sensed 3D images of coral reefs captured using fluid lensing with the NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion. It combines these data with hyperspectral airborne remote sensing data from the ongoing NASA CORAL mission and lower-resolution satellite data to determine coral reef ecosystem makeup globally at unprecedented spatial and temporal scales. Aquatic ecosystems, particularly coral reefs, remain quantitatively misrepresented by low-resolution remote sensing as a result of refractive distortion from ocean waves, optical attenuation, and remoteness. Machine learning classification of coral reefs using FluidCam mm-scale 3D data shows that present satellite and airborne remote sensing techniques poorly characterize coral reef percent living cover, morphology type, and species breakdown at the mm, cm, and meter scales. Indeed, current global assessments of coral reef cover and morphology classification based on km-scale satellite data alone can suffer from segmentation errors greater than 40% and are capable of change detection only on yearly temporal scales and decameter spatial scales, significantly hindering our understanding of patterns and processes in marine biodiversity at a time when these ecosystems are experiencing unprecedented anthropogenic pressures, ocean acidification, and sea surface temperature rise. NeMO-Net leverages our augmented machine learning algorithm, which demonstrates that data fusion of regional FluidCam (mm, cm-scale) airborne remote sensing with global low-resolution (m, km-scale) airborne and spaceborne imagery reduces classification errors by up to 80% over regional scales. Such technologies can substantially enhance our ability to assess coral reef ecosystem dynamics.
Implementing and validating of pan-sharpening algorithms in open-source software
NASA Astrophysics Data System (ADS)
Pesántez-Cobos, Paúl; Cánovas-García, Fulgencio; Alonso-Sarría, Francisco
2017-10-01
Several approaches have been used in remote sensing to integrate images with different spectral and spatial resolutions in order to obtain fused, enhanced images. The objective of this research is three-fold: to implement three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt) in R; to apply these techniques to merge multispectral and panchromatic bands from five images with different spatial resolutions; and to evaluate the results using the universal image quality index (Q index) and the ERGAS index. As regards the qualitative analysis, Landsat-7 and Landsat-8 show greater colour distortion with all three pansharpening methods, although the results for the other images were better. The Q index revealed that HPF fusion performs better for the QuickBird, IKONOS and Landsat-7 images, followed by GS fusion, whereas for the Landsat-8 and Natmur-08 images the results were more even. Regarding the ERGAS spatial index, the PCA algorithm performed better for the QuickBird, IKONOS, Landsat-7 and Natmur-08 images, followed closely by the GS algorithm; only for the Landsat-8 image did the GS fusion give the best result. In the evaluation of the spectral components, HPF results tended to be better and PCA results worse; the opposite was the case for the spatial components. Better quantitative results are obtained for the Landsat-7 and Landsat-8 images with the three fusion methods than for the QuickBird, IKONOS and Natmur-08 images. This contrasts with the qualitative evaluation, reflecting the importance of separating the two evaluation approaches (qualitative and quantitative): significant disagreement may arise when different methodologies are used to assess the quality of an image fusion. Moreover, it is not possible to designate a given algorithm as the best a priori, not only because of the different characteristics of the sensors, but also because of the differing atmospheric conditions and peculiarities of the study areas, among other reasons.
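Of the three techniques, the High Pass Filter method is the simplest to sketch: inject the panchromatic band's high-frequency residual into each upsampled multispectral band. The snippet below is written in Python rather than the authors' R, for consistency with the other examples in this document, and the gain value is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def hpf_pansharpen(ms, pan, ratio, gain=0.5):
    """High Pass Filter fusion sketch.
    ms:    (bands, h, w) multispectral image at low resolution
    pan:   (h*ratio, w*ratio) panchromatic band
    ratio: integer resolution ratio (dimensions assumed divisible)"""
    high_pass = pan - gaussian_filter(pan, sigma=ratio)   # spatial detail of pan
    fused = np.empty((ms.shape[0],) + pan.shape)
    for b in range(ms.shape[0]):
        up = zoom(ms[b], ratio, order=3)                  # bicubic upsampling
        fused[b] = up + gain * high_pass                  # detail injection
    return fused
```

Because the same detail layer is added to every band, HPF tends to preserve spectra well (consistent with the Q-index results above) at the cost of some spatial sharpness relative to component-substitution methods.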
Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion
Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas
2014-01-01
The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, the low signal in each pixel limits lateral resolution due to counting-statistics considerations. Although numerical methods can help mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity, and hence spatial resolution, of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated with synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order-of-magnitude increase in spatial resolution can be achieved. A cross-correlation metric is utilized to evaluate the reliability of the procedure. PMID:24912432
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP), intended to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, and the dictionary is constructed from these training samples, with the remaining terms clustered through iterated processing at each step to obtain the complete dictionary. Second, a self-adaptive weighting rule based on regional energy is used to select the fused feature coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
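A hedged sketch of this patch-based sparse fusion idea, using scikit-learn's dictionary learning and OMP coding: a simple max-L1 activity rule stands in for the paper's regional-energy weighting, and the subsampling, patch size and atom count are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_fusion(img1, img2, patch=8, n_atoms=128, seed=0):
    """Learn one dictionary from random patches of both inputs, sparse-code
    all patches with OMP, keep the more active code per patch position,
    then reconstruct with overlapping-patch averaging."""
    p1 = extract_patches_2d(img1, (patch, patch))
    p2 = extract_patches_2d(img2, (patch, patch))
    X1, X2 = p1.reshape(len(p1), -1), p2.reshape(len(p2), -1)
    train = np.vstack([X1, X2])[::20]                      # subsample for speed
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, transform_algorithm='omp',
        transform_n_nonzero_coefs=5, random_state=seed).fit(train)
    c1, c2 = dico.transform(X1), dico.transform(X2)
    pick = np.abs(c1).sum(1) >= np.abs(c2).sum(1)          # max-L1 activity rule
    fused_codes = np.where(pick[:, None], c1, c2)
    patches = (fused_codes @ dico.components_).reshape(p1.shape)
    return reconstruct_from_patches_2d(patches, img1.shape)
```

The final averaging over overlapping patches performed by reconstruct_from_patches_2d corresponds to the "rearranged and averaged" step in the abstract.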
NASA Astrophysics Data System (ADS)
Suiter, Ashley Elizabeth
Multi-spectral imagery provides a robust and low-cost dataset for assessing wetland extent and quality over broad regions and is frequently used for wetland inventories. In forested wetlands, however, hydrology is obscured by the tree canopy, making it difficult to detect with multi-spectral imagery alone. Because of this, classification of forested wetlands often includes greater errors than that of other wetland types. Elevation and terrain derivatives have been shown to be useful for modelling wetland hydrology, but few studies have addressed the use of LiDAR intensity data for detecting hydrology in forested wetlands. Owing to the tendency of the LiDAR signal to be attenuated by water, this research proposed the fusion of LiDAR intensity data with LiDAR elevation, terrain data, and aerial imagery for the detection of forested wetland hydrology. We examined the utility of LiDAR intensity data and determined whether the fusion of LiDAR-derived data with multispectral imagery increased the accuracy of forested wetland classification compared with a classification performed with multi-spectral imagery alone. Four classifications were performed: Classification A -- All Imagery, Classification B -- All LiDAR, Classification C -- LiDAR without Intensity, and Classification D -- Fusion of All Data. These classifications were performed using random forest, and each resulted in a 3-foot resolution thematic raster of forested upland and forested wetland locations in Vermilion County, Illinois. The accuracies of these classifications were compared using the Kappa coefficient of agreement. Importance statistics produced within the random forest classifier were evaluated in order to understand the contribution of individual datasets. Classification D, which used the fusion of LiDAR and multi-spectral imagery as input variables, had moderate to strong agreement between reference data and classification results. Classification B, performed using all the LiDAR data and its derivatives (intensity, elevation, slope, aspect, curvatures, and Topographic Wetness Index), was the most accurate classification, with a Kappa of 78.04%, indicating moderate to strong agreement. However, Classification C, performed with the LiDAR derivatives but without intensity data, had less agreement than would be expected by chance, indicating that intensity data contributed significantly to the accuracy of Classification B.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion technology makes video streams obtained by different image sensors complement each other, so as to obtain video that is rich in information and suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog and low light, but their ability to capture image detail is poor and their imaging does not match human visual habits. Visible-light imaging provides detailed, high-resolution images suited to the visual system, but it is easily affected by the external environment. Fusing infrared and visible video involves algorithms of high complexity and computational cost, occupying considerable memory and demanding high clock rates; implementations in software (e.g., C++ or C) are common, but hardware-platform implementations are rare. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average fusion method is implemented on the hardware (FPGA) platform. The fused image effectively improves information acquisition, increasing the amount of information in the image.
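For reference, the gray-level weighted average rule named here is trivial to express in FPGA-friendly integer arithmetic. The Python snippet below mimics a fixed-point datapath (8-bit weights, multiply-accumulate, round, shift); the weight value is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def weighted_average_fuse_fixed_point(ir, vis, w_ir=0.4):
    """Gray-level weighted average fusion in Q8 fixed point, the way it
    would map onto FPGA multiply-accumulate logic.
    ir, vis: co-registered uint8 frames of equal shape."""
    w1 = int(round(w_ir * 256))                     # Q8 weight for the IR frame
    w2 = 256 - w1                                   # complementary weight
    acc = ir.astype(np.uint32) * w1 + vis.astype(np.uint32) * w2
    return ((acc + 128) >> 8).astype(np.uint8)      # round and rescale
```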
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of the image information. As spatial resolution improves, "salt and pepper" noise appears and classification results are affected when pixel-wise classification algorithms, which ignore the spatial relationships among pixels, are applied to high-resolution satellite images. For classifying multi-temporal high-resolution images under limited labelled samples, spectral drift and the "salt and pepper" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are each segmented into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from the superpixels are formed into vectors. Thirdly, a majority-voting manifold alignment method aimed at the high-resolution problem is proposed, mapping the vector data into an alignment space. Finally, all the data in the alignment space are classified using the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF-1 and GF-2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "salt and pepper" problem.
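The object-based front end of this pipeline (SLIC superpixels plus per-object feature vectors) can be sketched as follows. The manifold alignment step itself is omitted; a plain KNN classifier is shown as the final stage, matching the abstract. Assumes a recent scikit-image (channel_axis argument); all parameter values are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def superpixel_features(image, n_segments=800):
    """Cut a (H, W, bands) image into SLIC superpixels and form one
    mean-spectrum feature vector per superpixel."""
    seg = slic(image, n_segments=n_segments, compactness=10, channel_axis=-1)
    feats = np.array([image[seg == s].mean(axis=0) for s in np.unique(seg)])
    return seg, feats

# after mapping the per-date feature vectors into the common alignment
# space (alignment step omitted here), classify with KNN:
# knn = KNeighborsClassifier(n_neighbors=5).fit(aligned_train, y_train)
# labels = knn.predict(aligned_all)
```

Working at the superpixel level is precisely what suppresses the "salt and pepper" effect: the classifier labels coherent objects rather than isolated pixels.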
NASA Astrophysics Data System (ADS)
Zhu, L.; Radeloff, V.; Ives, A. R.; Barton, B.
2015-12-01
Deriving crop patterns with high accuracy is of great importance for characterizing landscape diversity, which affects the resilience of food webs in agricultural systems in the face of climatic and land cover changes. Landsat sensors were originally designed to monitor agricultural areas, and both their radiometric and spatial resolution are optimized for monitoring large agricultural fields. Unfortunately, few clear Landsat images per year are available, which has limited the use of Landsat for crop classification, and the situation is worse in cloudy areas of the Earth. Meanwhile, MODerate Resolution Imaging Spectroradiometer (MODIS) data have better temporal resolution but cannot capture the fine spatial heterogeneity of agricultural systems. Our question was to what extent fusing imagery from both sensors could improve crop classifications. We utilized the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to simulate Landsat-like images at MODIS temporal resolution. Using a Random Forests (RF) classifier, we tested whether and to what degree crop maps from 2000 to 2014 of the Arlington Agricultural Research Station (Wisconsin, USA) were improved by integrating the available clear Landsat images of each year with synthetic images. We predicted that the degree to which classification accuracy can be improved by incorporating synthetic imagery depends on the number and acquisition times of the clear Landsat images. Moreover, multi-season data are essential for mapping crop types by capturing their phenological dynamics, and STARFM-simulated images can be used to compensate for missing Landsat observations. Our study helps overcome the limits on the use of Landsat data for mapping crop patterns and can provide a benchmark of accuracy when choosing STARFM-simulated images for crop classification at broader scales.
Image Registration of High-Resolution Uav Data: the New Hypare Algorithm
NASA Astrophysics Data System (ADS)
Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.
2013-08-01
Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, they are, due to their agility, suitable for many applications. Hence the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential: it serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie-point generation and image registration. We demonstrate this approach by the registration of 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.
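HyPARE itself is proprietary, but the generic tie-point registration loop it builds on can be sketched with OpenCV: detect keypoints, match descriptors, estimate a RANSAC homography, and warp. Everything below (detector choice, match count, RANSAC threshold) is an assumption for illustration, not the HyPARE algorithm.

```python
import cv2
import numpy as np

def register(src_gray, dst_gray):
    """Feature-based image-to-image registration sketch."""
    orb = cv2.ORB_create(5000)                           # tie-point candidates
    k1, d1 = orb.detectAndCompute(src_gray, None)
    k2, d2 = orb.detectAndCompute(dst_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # reject outliers
    h, w = dst_gray.shape
    return cv2.warpPerspective(src_gray, H, (w, h))      # resample onto dst frame
```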
Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Seun; Lin, Guang; Sun, Xin
2013-01-01
Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation to create high-resolution images over large domains. Stochastic methods have been widely used to reconstruct images from two-point correlation functions; the main challenge is to increase the efficiency of the reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
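A minimal sketch of the underlying Yeong-Torquato-style annealing loop, with a very fast geometric cooling schedule: swap a solid and a void pixel, accept if the two-point-correlation mismatch drops or with Boltzmann probability otherwise. The adaptive and parallel refinements of the paper are not represented; S2 is computed along one axis only for brevity, and all schedule parameters are illustrative.

```python
import numpy as np

def s2(img, rmax=20):
    """Two-point correlation along the x axis (real codes average directions)."""
    return np.array([(img * np.roll(img, r, axis=1)).mean() for r in range(rmax)])

def anneal(target_s2, shape, phi, t0=1e-3, cooling=0.95, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    img = (rng.random(shape) < phi).astype(float)       # random start at volume fraction phi
    e, t = ((s2(img) - target_s2) ** 2).sum(), t0
    for _ in range(steps):
        ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
        i, j = ones[rng.integers(len(ones))], zeros[rng.integers(len(zeros))]
        img[tuple(i)], img[tuple(j)] = 0, 1             # trial swap preserves phi
        e_new = ((s2(img) - target_s2) ** 2).sum()
        if e_new < e or rng.random() < np.exp(-(e_new - e) / t):
            e = e_new                                   # accept
        else:
            img[tuple(i)], img[tuple(j)] = 1, 0         # revert
        t = max(t * cooling, 1e-12)                     # very fast cooling
    return img
```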
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about a tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of the auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
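The shape of this approach can be sketched very simply. Below, a percentile-anchored ramp stands in for the paper's histogram-based fuzzy model (the actual model, anchors, fusion operator and threshold are unknown here and assumed for illustration); the three membership volumes are averaged and thresholded into a candidate tumour mask.

```python
import numpy as np

def fuzzy_membership(vol, lo_pct=50, hi_pct=95):
    """Map voxel intensities to a [0, 1] tumour-likelihood ramp between two
    percentile anchors (hypothetical stand-in for the paper's fuzzy model)."""
    lo, hi = np.percentile(vol, [lo_pct, hi_pct])
    return np.clip((vol - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def fuse_gtv(adc, fa, rcbv, tau=0.5):
    """Fuse the three fuzzy feature spaces (simple averaging used as a
    stand-in fusion operator) and threshold into a binary mask."""
    mu = (fuzzy_membership(adc) + fuzzy_membership(fa)
          + fuzzy_membership(rcbv)) / 3.0
    return mu > tau
```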
A method based on IHS cylindrical transform model for quality assessment of image fusion
NASA Astrophysics Data System (ADS)
Zhu, Xiaokun; Jia, Yonghong
2005-10-01
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have become a research issue at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on the one hand, most indexes lack theoretic support for comparing different fusion methods; on the other hand, there is no uniform preference among most quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify spatial and spectral feature assessment. So in this paper, on the basis of an approximate general model of four traditional fusion methods (Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion), a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only evaluate spatial and spectral features on the basis of a uniform preference, but also compare fusion image sources with fused images and reveal differences among fusion methods. Compared with traditional assessment methods, the new method is more intuitive and more consistent with subjective evaluation.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential in many application fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance we designed two new activity measures for the fusion of the lowpass and highpass subbands. These measures build on the fact that the human visual system (HVS) perceives image quality mainly according to certain low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z; Gong, G
2014-06-01
Purpose: To design an external marking body (EMB) that is visible on computed tomography (CT), magnetic resonance (MR), positron emission tomography (PET) and single-photon emission computed tomography (SPECT) images, and to investigate the use of the EMB for multiple medical image registration and fusion in the clinic. Methods: We generated a solution containing paramagnetic metal ions and iodide ions (a CT/MR dual-visible solution) that could be viewed on CT and MR images; a multi-mode image visible solution (MIVS) was obtained by additionally mixing in radioactive nuclear material. The EMB was produced by filling a globular plastic theca (diameter: 3-6 mm), which seals in the solution, with MIVS. The EMBs were fixed on the patient surface and CT, MR, PET and SPECT scans were obtained. The feasibility of clinical application and the display and registration errors of the EMB among different image modalities were investigated. Results: The dual-visible solution was highly dense on CT images (HU > 700). A high signal was also found in all MR scanning (T1, T2, STIR and FLAIR) images, higher than that of subcutaneous fat. The EMB with radioactive nuclear material produced a radionuclide concentration area on PET and SPECT images, with a signal similar to or higher than tumor signals. The theca with MIVS was clearly visible on all the images without artifact, and its shape was round or oval with a sharp edge. The maximum diameter display error was 0.3 ± 0.2 mm on CT and MRI images, and 1.0 ± 0.3 mm on PET and SPECT images. In addition, the registration accuracy of the theca center among multi-mode images was less than 1 mm. Conclusion: The application of an EMB with MIVS improves the registration and fusion accuracy of multi-mode medical images. Furthermore, it has the potential to improve disease diagnosis and treatment outcomes.
Spatial resolution enhancement of satellite image data using fusion approach
NASA Astrophysics Data System (ADS)
Lestiana, H.; Sukristiyanti
2018-02-01
Object identification using remote sensing data is problematic when the spatial resolution does not accord with the object. The fusion approach is one method to solve this problem, improving object recognition and increasing object information by combining data from multiple sensors. Fused images can be used to estimate environmental components that need to be monitored from multiple views, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. Fusion methods have made visible objects in land areas easy to recognize, and the resulting variety of object information has increased the range of environmental components that can be estimated. The difficulty of recognizing invisible objects such as Submarine Groundwater Discharge (SGD), especially in tropical areas, might be reduced by fusion methods; the small variation such objects produce in sea surface temperature remains a challenge to be solved.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS sensors have been widely applied in the field of night vision as a new type of solid-state image sensor. But when the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly present both the highlight scene and the low-light region. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid is investigated. The overall gray value and contrast of a low-light image are very low. For the top pyramid layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge features of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results against four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies, using three objective evaluation parameters (information entropy, average gradient and standard deviation) for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while keeping high entropy. Given these verified properties, the optimized algorithm shows promise for further application. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
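A compact Python/OpenCV sketch of the two rules described above (regional energy for the detail levels, regional average gradient for the top level) follows. It assumes image dimensions divisible by 2^levels so the pyramid sizes line up; window sizes and level count are illustrative, and this is a reading of the abstract rather than the authors' Matlab code.

```python
import cv2
import numpy as np

def lap_pyr(img, levels):
    g = [img.astype(np.float32)]
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))
    return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
            for i in range(levels)] + [g[-1]]          # details..., top Gaussian level

def fuse_exposures(long_exp, short_exp, levels=4):
    a, b = lap_pyr(long_exp, levels), lap_pyr(short_exp, levels)
    fused = []
    for la, lb in zip(a[:-1], b[:-1]):                 # detail levels: regional energy
        ea, eb = cv2.blur(la * la, (5, 5)), cv2.blur(lb * lb, (5, 5))
        fused.append(np.where(ea >= eb, la, lb))
    ga = np.hypot(*np.gradient(a[-1]))                 # top level: regional avg gradient
    gb = np.hypot(*np.gradient(b[-1]))
    wa = cv2.blur(ga, (5, 5)) / (cv2.blur(ga + gb, (5, 5)) + 1e-12)
    fused.append(wa * a[-1] + (1 - wa) * b[-1])
    out = fused[-1]
    for lev in reversed(fused[:-1]):                   # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=lev.shape[1::-1]) + lev
    return np.clip(out, 0, 255).astype(np.uint8)
```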
Scaling dimensions in spectroscopy of soil and vegetation
NASA Astrophysics Data System (ADS)
Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.
2007-05-01
The paper revises and clarifies definitions of the term scale and of scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only the spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over soil/vegetation surfaces, and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial and spectral up-scaling and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled, and even validated, by these means. Radiative transfer over an experimental Norway spruce (Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to demonstrate the relevance of correctly scaling object optical properties up to image data at two different spatial resolutions. The interconnection of the successive modelling levels in vegetation is shown, and future developments in the measurement and simulation of leaf directional spectral properties are discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling; the importance of properly selecting or acquiring the spectral endmembers, using spectral libraries, field measurements, and pure pixels of the hyperspectral image, is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described, with a demonstrative example for moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil: corresponding spectral bands of the two sensors were fused via a pyramidal wavelet transform in Fourier space, and the new spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
Wu, Mingquan; Li, Hua; Huang, Wenjiang; Niu, Zheng; Wang, Changyao
2015-08-01
There is a shortage of daily high-spatial-resolution land surface temperature (LST) data for use in monitoring environmental processes at high spatial and temporal resolution. To address this shortage, this work used the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), and the Spatial and Temporal Data Fusion Approach (STDFA) to estimate LST at high spatial and temporal resolution by combining Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Moderate Resolution Imaging Spectroradiometer (MODIS) LST products. Actual ASTER LST products were used to evaluate the precision of the combined LST images using correlation analysis. The method was tested and validated in study areas located in Gansu Province, China. The results show that all the models can generate daily synthetic LST images with a high correlation coefficient (r) of 0.92 between the synthetic images and the actual ASTER LST observations. The ESTARFM had the best performance, followed by the STDFA and the STARFM. The models performed better in desert areas than in cropland, and the STDFA had better noise immunity than the other two models.
Multi-Focus Image Fusion Based on NSCT and NSST
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen
2015-12-01
In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low-frequency and high-frequency coefficients. Then, the average method is used to fuse the low-frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high-frequency coefficients of the NSST and NSCT are combined to measure salience; the NSCT high-frequency coefficients with larger salience are selected as the fused high-frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe and Qw) to evaluate the quality of the fused images. The experimental results demonstrate that the proposed method outperforms other methods and retains highly detailed edges and contours.
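The classic rules this method builds on (average the low-frequency band, pick the higher-salience high-frequency coefficients) can be shown with a generic wavelet stand-in. Note the sketch below uses a plain DWT via PyWavelets rather than the NSCT/NSST, so it is not shift-invariant; magnitude serves as the salience measure, and wavelet and level are arbitrary choices.

```python
import pywt
import numpy as np

def mst_fuse(img_a, img_b, wavelet='db2', level=3):
    """Generic multi-scale fusion: averaged approximation band,
    max-magnitude selection on the detail bands."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # low-frequency: average rule
    for da, db in zip(ca[1:], cb[1:]):              # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                           for ha, hb in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```

In a multi-focus setting the max-magnitude rule works because the in-focus source has the stronger local high-frequency response at each location.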
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz, depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms, demonstrating a significant runtime decrease and a performance increase compared to the next-best performing method.
NASA Astrophysics Data System (ADS)
Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James
2018-02-01
Purpose: To investigate the ability of the complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC and treated with anti-PD-1 checkpoint blockade. From the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features had an accuracy and area under the ROC curve (AUROC) of 87.5% and 0.82, respectively, on the validation dataset, compared with 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show a better ability to predict immunotherapy response than individual image features.
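The modelling pipeline described (feature selection followed by an SVM, validated by cross-validated AUROC) is straightforward to sketch with scikit-learn. The data below are dummy placeholders with the stated dimensions; the selection method, kernel and k are assumptions, as the paper's exact choices are not given here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((64, 1430))            # 64 patients x (195 + 1235) radiomic features
y = rng.integers(0, 2, 64)            # dummy binary immunotherapy response

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=100),   # keep discriminant features
                      SVC(kernel='rbf'))
auc = cross_val_score(model, X, y, cv=5, scoring='roc_auc').mean()
```

Keeping the selection step inside the pipeline matters: selecting features on the full dataset before cross-validation would leak information and inflate the AUROC.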
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms (Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion) with resolution-enhancing a series of single-date QuickBird-2 and WorldView-2 image scenes comprising penguin guano, seals, and vegetation. The fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation identified the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of both spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performances. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The aim of digital image fusion is to combine the important visual parts of various sources so as to improve the visual quality of the image; the fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and thereby obtain a fused image. Two main steps are involved: first, the DWT is applied to the registered source images; then, qualitative sub-bands are identified using the HVS weights. Qualitative sub-bands are thus selected from the different sources to form a high-quality HVS-based fused image, whose quality is evaluated with general fusion metrics. The results show the superiority of this approach among state-of-the-art multi-resolution transforms (MRTs) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
A low-level-light (LLL) image contains rich information on environment details but is easily affected by the weather; in smoke, rain, cloud or fog, much target information is lost. An infrared image, formed from the radiation produced by the object itself, can "actively" obtain target information in the scene. However, its contrast and resolution are poor, its ability to capture target details is very weak, and its imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can make up for the deficiencies of each sensor and exploit the advantages of each. We first present the hardware design of the fusion circuit. Then, by calculating recognition probabilities for a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, while the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.
Generating High-Temporal and Spatial Resolution TIR Image Data
NASA Astrophysics Data System (ADS)
Herrero-Huerta, M.; Lagüela, S.; Alfieri, S. M.; Menenti, M.
2017-09-01
Remote sensing imagery for monitoring global biophysical dynamics requires thermal infrared data at high temporal and spatial resolution, because of the rapid development of crops during the growing season and the fragmentation of most agricultural landscapes; however, no single sensor meets these combined requirements. Data fusion approaches offer an alternative, exploiting observations from multiple sensors to provide data sets with better properties. A novel spatio-temporal data fusion model based on constrained algorithms, denoted the multisensor multiresolution technique (MMT), was developed and applied to generate synthetic TIR image data at both high temporal and high spatial resolution. First, an adaptive radiance model based on spectral unmixing analysis of TIR radiance data at the top of atmosphere (TOA) is applied: daily 1-km MODIS data and 16-day Landsat TIRS data sampled at 30-m resolution are used to generate synthetic daily TOA radiance images at 30-m spatial resolution. The next step consists of unmixing the 30-m (now lower-resolution) images using information about their pixel land-cover composition from co-registered images at higher spatial resolution; in our case study, the synthesized TIR data were unmixed against the Sentinel-2 MSI at 10-m resolution. The constrained unmixing preserves all the available radiometric information of the 30-m images and involves optimizing the number of land-cover classes and the size of the moving window for spatial unmixing. Results are still being evaluated, with particular attention to the quality of the data streams required to apply our approach.
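The per-window unmixing step can be written as a small constrained least-squares problem: each coarse pixel's radiance is modelled as the fraction-weighted sum of unknown per-class radiances, solved with a non-negativity constraint. This is a generic sketch of that step, not the MMT code; window assembly and the optimization of class count and window size are omitted.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_window(coarse_rad, fractions):
    """Constrained spatial unmixing for one moving window.
    coarse_rad: (n_pixels,) TOA radiances of the coarse pixels in the window
    fractions:  (n_pixels, n_classes) class fractions of each coarse pixel,
                taken from the co-registered high-resolution classification
    Returns the estimated per-class radiances for this window."""
    class_rad, residual = nnls(fractions, coarse_rad)  # solve F @ R = L, R >= 0
    return class_rad  # assign to fine pixels via their class labels
```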
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also focuses its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system employs two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor, which points its zooming optics towards the moving object. The second camera then tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image-plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimate of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a face clip, for recognition purposes.
Multisource geological data mining and its utilization of uranium resources exploration
NASA Astrophysics Data System (ADS)
Zhang, Jie-lin
2009-10-01
Nuclear energy, as a clean energy source, plays an important role in economic development in China, and according to the national long-term development strategy, many more nuclear power plants will be built in the next few years, so uranium resources exploration faces a great challenge. Research and practice in mineral exploration demonstrate that utilizing modern Earth Observing System (EOS) technology and developing new multi-source geological data mining methods are effective approaches to uranium deposit prospecting. Based on data mining and knowledge discovery technology, this paper uses multi-source geological data to characterize the electromagnetic-spectral, geophysical and spatial information of uranium mineralization factors, and provides technical support for uranium prospecting integrated with field remote sensing geological survey. The multi-source geological data used in this paper include satellite hyperspectral imagery (Hyperion), high-spatial-resolution remote sensing data, uranium geological information, airborne radiometric data, and aeromagnetic and gravity data; related data mining methods have been developed, such as the fusion of optical data with Radarsat imagery and the integration of remote sensing with geophysical data. Based on these approaches, multi-geoscience information on uranium mineralization factors, including complex polystage rock masses, mineralization-controlling faults and hydrothermal alterations, has been identified; the metallogenic potential of uranium has been evaluated; and several prospective areas have been located.
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to multiple low-contrast cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and less robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
Quality evaluation of pansharpened hyperspectral images generated using multispectral images
NASA Astrophysics Data System (ADS)
Matsuoka, Masayuki; Yoshioka, Hiroki
2012-11-01
Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of detectors with higher spectral resolution. The spatial resolution of hyperspectral images, however, is generally much lower than that of multispectral images due to the lower energy of the incident radiation. Pansharpening is an image-fusion technique that generates higher-spatial-resolution multispectral images by combining lower-resolution multispectral images with higher-resolution panchromatic images. In this study, higher-resolution hyperspectral images were generated by pansharpening simulated lower-resolution hyperspectral data with higher-resolution multispectral data. The spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne AVIRIS hyperspectral data were used in this study and pansharpened using six methods. Quantitative evaluation of the pansharpened images was carried out using two frequently used indices, ERGAS and the Q index.
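Since ERGAS is the workhorse metric here, a worked definition helps: it aggregates the band-wise relative RMSE, scaled by the resolution ratio. A minimal implementation (function name and array layout are assumptions):

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS global spectral-distortion index; lower is better.
    fused, reference: (bands, h, w) image cubes
    ratio: ratio of high- to low-resolution pixel size (e.g. 1/4 when
           sharpening 4x)."""
    rmse = np.sqrt(((fused - reference) ** 2).mean(axis=(1, 2)))  # per-band RMSE
    mean_k = reference.mean(axis=(1, 2))                          # per-band means
    return 100.0 * ratio * np.sqrt(((rmse / mean_k) ** 2).mean())
```

Normalizing each band's RMSE by its mean is what lets ERGAS compare distortion across bands with very different radiometric levels, which matters for hyperspectral cubes with hundreds of bands.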
High-resolution Imaging of Deuterium-Tritium Capsule Implosions on the National Ignition Facility
NASA Astrophysics Data System (ADS)
Bachmann, Benjamin; Rygg, Ryan; Collins, Gilbert; Patel, Pravesh
2017-10-01
Highly resolved 3-D simulations of inertial confinement fusion (ICF) implosions predict a hot-spot plasma that exhibits complex micron-scale structure originating from a variety of 3-D perturbations. Experimental diagnosis of these conditions requires high-spatial-resolution imaging techniques. X-ray penumbral imaging can improve the spatial resolution over pinhole imaging while simultaneously increasing the detected photon yield at x-ray energies where the ablator opacity becomes negligible. Here we report on the first time-integrated x-ray penumbral imaging experiments on ICF capsule implosions at the National Ignition Facility, which achieved a spatial resolution as high as 4 micrometers. Hot-spot images from 6 to 30 keV of layered DT implosions will be presented from a variety of experimental ICF campaigns, revealing previously unseen detail. We will discuss how these and future results can improve our physical understanding of inertially confined fusion plasmas by enabling spatially resolved measurements of hot-spot properties, such as radiation energy, temperature or derived quantities. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection can quantify temporal effects on urban areas for urban evolution studies or damage assessment in disaster cases. In this context, change analysis may involve using the available satellite images at different resolutions for quick response. In this paper, to avoid the resampling outcomes and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, shape analysis is practical for detecting building changes in multi-scale imagery. The proposed methodology can therefore deal with different pixel sizes when identifying new and demolished buildings in urban areas using the geometric properties of the objects of interest. After rectifying the desired multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, centroid-coincident matching is conducted on the extracted building shapes, based on Euclidean distance measurements between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified from the obtained distances that exceed the RMS value (no match in the same location).
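The centroid-coincident matching step reduces to a mutual nearest-neighbour test under a distance threshold, which is compact enough to sketch. The function name and the mutual-match formulation are a reading of the abstract's "T0 to T1 and vice versa" description rather than the authors' exact rule.

```python
import numpy as np
from scipy.spatial.distance import cdist

def centroid_coincident_matching(c_t0, c_t1, rms):
    """Match building centroids between dates; leftovers are demolished
    (T0-only) or new (T1-only).
    c_t0: (n0, 2), c_t1: (n1, 2) centroid coordinates; rms: registration RMS."""
    d = cdist(c_t0, c_t1)
    nn01, nn10 = d.argmin(axis=1), d.argmin(axis=0)     # NN in each direction
    matched0 = np.array([i for i in range(len(c_t0))
                         if nn10[nn01[i]] == i and d[i, nn01[i]] <= rms],
                        dtype=int)                       # mutual NN within RMS
    demolished = np.setdiff1d(np.arange(len(c_t0)), matched0)
    new = np.setdiff1d(np.arange(len(c_t1)), nn01[matched0])
    return matched0, demolished, new
```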
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Chandrasekar, V.; Xie, P.
2015-12-01
A large number of precipitation products at multiple scales have been developed based upon satellite, radar, and/or rain gauge observations. However, producing optimal rainfall estimates for a given region is still challenging due to the different spatial and temporal sampling of different sensors. In this study, we develop a data fusion mechanism to improve regional quantitative precipitation estimation (QPE) by utilizing the satellite-based CMORPH product, ground radar measurements, and numerical model simulations. The CMORPH global precipitation product is essentially derived from passive microwave retrievals and infrared observations onboard satellites (Joyce et al. 2004). Its fine spatial-temporal resolution of 0.05° lat/lon and 30 min is appropriate for regional hydrologic and climate studies; however, it is inadequate for localized hydrometeorological applications such as urban flash flood forecasting. By fusing the regional CMORPH product with local precipitation sensors, the high-resolution QPE performance can be improved. The area of interest is the Dallas-Fort Worth (DFW) Metroplex, the largest land-locked metropolitan area in the U.S. In addition to an NWS dual-polarization S-band WSR-88DP radar (the KFWS radar), DFW hosts the high-resolution dual-polarization X-band radar network developed by the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA). This talk will present a general framework for precipitation data fusion based on satellite and ground observations. The detailed prototype architecture for using regional rainfall instruments to improve the regional CMORPH precipitation product via multi-scale fusion techniques will also be discussed. In particular, the temporal and spatial fusion algorithms developed for the DFW Metroplex, which utilize the CMORPH product, S-band WSR-88DP, and X-band CASA radar measurements, will be described. In order to investigate the uncertainties associated with each individual product and demonstrate the precipitation data fusion performance, both the individual and fused QPE products are evaluated using rainfall measurements from a disdrometer and gauge network.
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido
2016-04-01
Flooding is one of the most frequent and expansive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events and for the practical purpose of precisely assessing inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisition, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each with a particular signature in the presence of flooding, requires modelling the behavior of the different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, plays a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information, or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool based on Bayesian Networks (BNs) [3] and able to perform data fusion is presented. It supplies flood maps describing the dynamics of each analysed event by combining time series of images, acquired by different sensors, with ancillary information. Experiments have been performed by combining multi-temporal SAR intensity images, InSAR coherence, and optical data with geomorphic and other ground information. The tool has been tested on several flood events that occurred in the Basilicata region (Italy) in recent years, showing good capability to identify large areas affected by the flood phenomenon, partially overcoming the obstacle posed by scattering/coherence classes corresponding to different land cover types, which respond differently to the presence of water and to the inundation evolution. [1] A. Refice et al., IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 7, pp. 2711-2722, 2014. [2] L. Pulvirenti et al., IEEE Trans. Geosci. Rem. Sens., vol. PP, pp. 1-13, 2015. [3] A. D'Addabbo et al., "A Bayesian Network for Flood Detection combining SAR Imagery and Ancillary Data," IEEE Trans. Geosci. Rem. Sens., in press.
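Although the paper's Bayesian network structure is not reproduced here, the flavor of probabilistic evidence fusion can be sketched in odds form under a (strong) conditional-independence assumption; the function and its likelihood-ratio inputs are hypothetical.

```python
import numpy as np

def flood_posterior(likelihood_ratios, prior=0.1):
    """Naive-Bayes-style fusion sketch (not the paper's exact BN).

    likelihood_ratios : list of (H, W) arrays, each holding
        P(observation | flood) / P(observation | no flood) for one
        source (SAR intensity, InSAR coherence, optical, DEM features).
    prior : prior probability of flooding at a pixel.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds = odds * lr          # sources treated as conditionally independent
    return odds / (1.0 + odds)    # posterior probability of flooding
```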
Multi-exposure high dynamic range image synthesis with camera shake correction
NASA Astrophysics Data System (ADS)
Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in ghosting, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene; these assumptions limit their application. Widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without ghosting, we propose an efficient low dynamic range (LDR) image capturing approach together with a registration method based on ORB (Oriented FAST and Rotated BRIEF) features and histogram equalization, which eliminates the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghosting by registering and fusing four multi-exposure images.
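A rough OpenCV sketch of the ORB-plus-histogram-equalization registration idea follows; it assumes grayscale uint8 inputs and uses a homography for the alignment, which is a stand-in for whatever motion model the authors adopted.

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, mov_gray):
    """Align one LDR exposure to a reference frame via ORB features."""
    # Histogram equalization narrows the brightness gap between exposures
    # so ORB finds comparable keypoints in both images.
    ref_eq, mov_eq = cv2.equalizeHist(ref_gray), cv2.equalizeHist(mov_gray)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_eq, None)
    k2, d2 = orb.detectAndCompute(mov_eq, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outliers
    return cv2.warpPerspective(mov_gray, H, ref_gray.shape[::-1])
```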
NASA Astrophysics Data System (ADS)
Descloux, A.; Grußmayer, K. S.; Bostan, E.; Lukes, T.; Bouwens, A.; Sharipov, A.; Geissbuehler, S.; Mahul-Mellier, A.-L.; Lashuel, H. A.; Leutenegger, M.; Lasser, T.
2018-03-01
Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going 'beyond the diffraction barrier' comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose combining novel label-free white-light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of a single image at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us not only to image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. The 4D microscope platform unifies the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.
NASA Astrophysics Data System (ADS)
Jin, Y.; Lee, D.
2017-12-01
North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types are hillside farms, unstocked forest, natural forest, and plateau vegetation. Better classification of these types at high spatial resolution could provide essential information for decisions about forest management priorities and the restoration of deforested areas. For mapping heterogeneous vegetation covers, phenology-based indices help overcome the confusion of reflectance values that occurs when using single-season images. Coarse spatial resolution images can be acquired at a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic of the region of interest. Previous spatial-temporal fusion methods either captured only temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion method focused on heterogeneous landscapes is proposed to produce images at both fine spatial and fine temporal resolution. We classify pixels into three types according to the change between the base image and the target image. In the first type, only reflectance changes, caused by phenology; these pixels supply reflectance, shape, and texture information. In the second type, both reflectance and spectrum change in some bands because of phenology, as in rice paddies or farmland; these pixels supply only shape and texture information. In the third type, reflectance and spectrum change because of a land cover change; these pixels provide no usable information, since how the land cover changed in the target image cannot be known. A different prediction method is applied to each pixel type. Results show that both STARFM and FSDAF predict with low accuracy on second-type pixels and small patches. Classification based on the spatial-temporal image fusion method proposed in this study achieved an overall accuracy of 89.38%, with a corresponding kappa coefficient of 0.87.
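The three-way pixel typing can be caricatured with a spectral-angle heuristic; the thresholds and decision rules below are toy assumptions for illustration, not the study's actual criteria.

```python
import numpy as np

def pixel_change_type(base, target, mag_th=0.05, angle_th=10.0):
    """Toy classifier for the three pixel types described above.

    base, target : (bands,) reflectance vectors for the same pixel in
                   the base and target images.
    Returns 1 (reflectance-only change), 2 (spectrum changed in some
    bands), or 3 (spectrum changed overall, i.e. land-cover change).
    """
    cos = np.dot(base, target) / (
        np.linalg.norm(base) * np.linalg.norm(target) + 1e-12)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # spectral angle
    if angle < angle_th:
        return 1                                   # shape kept: phenology shift
    changed = np.abs(target - base) > mag_th
    return 2 if changed.sum() < base.size else 3   # partial vs. full change
```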
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient-minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose the source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined, with weights, into a detail-enhanced layer. Because a directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A fusion rule based on visual saliency maps is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift invariance, directional selectivity, and detail-enhancement properties, is efficient in preserving and enhancing the detail information of multimodality medical images.
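A hedged sketch of the decomposition step is given below, with a median filter standing in for the GMSF (both suppress fine texture while keeping strong edges); the scales and filter sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def mjdf_decompose(img, scales=((3, 1.0), (5, 2.0), (9, 4.0))):
    """Sketch of a multi-scale joint decomposition.

    At each scale: detail = current - GMSF(current),
                   edge   = GMSF(current) - GLF(current),
    and the Gaussian low-pass (GLF) output feeds the next scale, so
    img == final low-pass + sum of all edge and detail layers.
    """
    current = img.astype(float)
    layers = []
    for size, sigma in scales:
        edge_preserving = median_filter(current, size=size)  # stand-in for GMSF
        low_pass = gaussian_filter(current, sigma)           # GLF
        layers.append((edge_preserving - low_pass,           # edge layer
                       current - edge_preserving))           # detail layer
        current = low_pass
    return current, layers
```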
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength-scanning-based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral-shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning-based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information.
Fusion set selection with surrogate metric in multi-atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-02-01
Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
Spatial and temporal remote sensing data fusion for vegetation monitoring
USDA-ARS?s Scientific Manuscript database
The suite of available remote sensing instruments varies widely in terms of sensor characteristics, spatial resolution and acquisition frequency. For example, the Moderate-resolution Imaging Spectroradiometer (MODIS) provides daily global observations at 250m to 1km spatial resolution. While imagery...
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique's robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
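The density-based aggregation can be sketched with kernel density estimates; the conjunctive (multiplicative) combination below is an illustrative choice rather than necessarily the paper's aggregation rule, and all names are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fuse_detections(xy_per_sensor, bw, grid_x, grid_y):
    """Density-based fusion sketch for scattered 2-D flaw indications.

    xy_per_sensor : list of (N_i, 2) detection coordinates, one array per
                    NDT method, already registered to a common frame.
    bw            : KDE bandwidth factor, chosen wide enough to absorb
                    registration errors.
    Locations supported by several methods score highly; isolated
    single-sensor indications (likely false alarms) are suppressed.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    grid = np.vstack([gx.ravel(), gy.ravel()])
    score = np.ones(grid.shape[1])
    for xy in xy_per_sensor:
        score *= gaussian_kde(xy.T, bw_method=bw)(grid)  # conjunctive fusion
    return score.reshape(gx.shape)
```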
Live-cell imaging of conidial anastomosis tube fusion during colony initiation in Fusarium oxysporum
Kurian, Smija M.; Di Pietro, Antonio
2018-01-01
Fusarium oxysporum exhibits conidial anastomosis tube (CAT) fusion during colony initiation to form networks of conidial germlings. Here we determined the optimal culture conditions for this fungus to undergo CAT fusion between microconidia in liquid medium. Extensive high resolution, confocal live-cell imaging was performed to characterise the different stages of CAT fusion, using genetically encoded fluorescent labelling and vital fluorescent organelle stains. CAT homing and fusion were found to be dependent on adhesion to the surface, in contrast to germ tube development which occurs in the absence of adhesion. Staining with fluorescently labelled concanavalin A indicated that the cell wall composition of CATs differs from that of microconidia and germ tubes. The movement of nuclei, mitochondria, vacuoles and lipid droplets through fused germlings was observed by live-cell imaging. PMID:29734342
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose the concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF, which has green spectral sensitivity, onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint, but at the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectral information (red and blue) at multiple viewpoints (sub-aperture images) but at low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at high spatial resolution as well as the directional information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.
Automatic Structural Parcellation of Mouse Brain MRI Using Multi-Atlas Label Fusion
Ma, Da; Cardoso, Manuel J.; Modat, Marc; Powell, Nick; Wells, Jack; Holmes, Holly; Wiseman, Frances; Tybulewicz, Victor; Fisher, Elizabeth; Lythgoe, Mark F.; Ourselin, Sébastien
2014-01-01
Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework. PMID:24475148
A multi-component evaporation model for beam melting processes
NASA Astrophysics Data System (ADS)
Klassen, Alexander; Forster, Vera E.; Körner, Carolin
2017-02-01
In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.
Data fusion and photometric restoration
NASA Astrophysics Data System (ADS)
Pirzkal, Norbert; Hook, Richard N.
2001-11-01
The current generation of 8-10 m optical ground-based telescopes have a symbiotic relationship with space telescopes. For direct imaging in the optical, the former can collect photons relatively cheaply, but the latter can still achieve, even in the era of adaptive optics, significantly higher spatial resolution, point-spread function stability, and astrometric fidelity over fields of a few arcminutes. The large archives of HST imaging already in place, when combined with the ease of access to ground-based data afforded by the virtual observatory currently under development, will make space-ground data fusion a powerful tool for the future. We describe a photometric image restoration method we have developed that allows the efficient and accurate use of high-resolution space imaging of crowded fields to extract high-quality photometry from very crowded ground-based images. We illustrate the method using HST and ESO VLT/FORS imaging of a globular cluster and demonstrate quantitatively the photometric measurement quality that can be achieved using the data fusion approach rather than data from a single telescope. This method can handle most of the common difficulties encountered when attempting this problem, such as determining the geometric mapping to the requisite precision, deriving the PSF, and estimating the background.
Multi-frequency SAR, SSM/I and AVHRR derived geophysical information of the marginal ice zone
NASA Technical Reports Server (NTRS)
Shuchman, R. A.; Onstott, R. G.; Wackerman, C. C.; Russel, C. A.; Sutherland, L. L.; Johannessen, O. M.; Johannessen, J. A.; Sandven, S.; Gloerson, P.
1991-01-01
A description is given of the fusion of synthetic aperture radar (SAR), special sensor microwave imager (SSM/I), and NOAA Advanced Very High Resolution Radiometer (AVHRR) data to study arctic processes. These data were collected during the SIZEX/CEAREX experiments in the Greenland Sea in March 1989. Detailed comparisons between the SAR, AVHRR, and SSM/I indicated: (1) the ice edge positions agreed to within 25 km; (2) the SSM/I and SAR total ice concentrations compared favorably, although the SSM/I significantly underpredicted the multiyear fraction; (3) combining high-resolution SAR with SSM/I can potentially map open water and new ice features in the marginal ice zone (MIZ) that cannot be mapped by the single sensors; and (4) the combination of all three sensors provides accurate ice information as well as sea surface temperature and wind speeds.
NASA Astrophysics Data System (ADS)
Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen
2014-02-01
High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
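The GIHS injection that the paper builds on is compact enough to state directly; a minimal sketch (the band layout is an assumption for the example) is:

```python
import numpy as np

def gihs_fuse(ms, pan):
    """Minimal generalized IHS (GIHS) pansharpening sketch.

    ms  : (bands, H, W) multispectral image, upsampled to the pan grid.
    pan : (H, W) panchromatic image.
    Every band receives the same injected detail pan - I; this uniform
    injection is exactly what changes saturation and thus causes the
    spectral distortion the paper sets out to reduce.
    """
    intensity = ms.mean(axis=0)          # I component of the GIHS
    return ms + (pan - intensity)[None]  # broadcast detail to all bands
```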
NASA Astrophysics Data System (ADS)
Aglitskiy, Yefim; Weaver, J. L.; Karasik, M.; Serlin, V.; Obenschain, S. P.; Ralchenko, Yu.
2014-10-01
The spectra of multi-charged ions of Hf, Ta, W, Pt, Au, and Bi have been studied at the Nike krypton-fluoride laser facility with the help of two kinds of X-ray spectrometers. The first is a survey instrument covering a spectral range from 0.5 to 19.5 angstroms, which allows simultaneous observation of both the M- and N-spectra of the above-mentioned elements with high spectral resolution. The second is an imaging spectrometer with interchangeable spherically bent quartz crystals, which adds higher efficiency, higher spectral resolution, and high spatial resolution to the qualities of the former. Multiple spectral lines with X-ray energies as high as 4 keV that belong to the isoelectronic sequences of Fe, Co, Ni, Cu, and Zn were identified with the help of the NOMAD package developed by Dr. Yu. Ralchenko and colleagues. In our continuing effort to support DOE-NNSA's inertial fusion program, this campaign covered a wide range of plasma conditions that result in the production of relatively energetic X-rays. Work supported by the US DOE/NNSA.
Multi-Wavelength Imaging of Solar Plasma - High-Beta Disruption Model of Solar Flares -
NASA Astrophysics Data System (ADS)
Shibasaki, Kiyoto
The solar atmosphere is filled with plasma and magnetic field. Activities in the atmosphere are due to plasma instabilities in the magnetic field. To understand the physical mechanisms of these activities and instabilities, it is necessary to know the physical conditions of the magnetized plasma, such as temperature, density, and magnetic field, together with their spatial structures and temporal development. Multi-wavelength imaging is essential for this purpose. Imaging observations of the Sun at microwave, X-ray, EUV, and optical wavelengths are routinely ongoing. Thanks to the free exchange of original data among the solar physics and related communities, we can easily combine images covering a wide range of the spectrum. Even so, we still do not understand the causes of activity in the solar atmosphere well. The current standard model of solar activity is based on magnetic reconnection: the release of stored magnetic energy by reconnection is taken to be the cause of solar activities such as flares. However, recent X-ray, EUV, and microwave observations with high spatial and temporal resolution show that dense plasma is involved in activities from the beginning. Based on these observations, I propose a high-beta model of solar activities, which is very similar to high-beta disruptions in magnetically confined fusion experiments.
Maximizing Science Return from Future Mars Missions with Onboard Image Analyses
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Bandari, E. B.; Roush, T. L.
2000-01-01
We have developed two new techniques to enhance science return and to decrease returned data volume for near-term Mars missions: 1) multi-spectral image compression and 2) autonomous identification and fusion of in-focus regions in an image series.
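The second technique can be illustrated with a standard sharpness-based focus stacking sketch; the Laplacian-energy focus measure used here is a common stand-in and is not claimed to be the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_focus_stack(images, win=9):
    """Fuse the in-focus regions of an image series (sketch).

    For each pixel, picks the frame with the largest local Laplacian
    energy (a standard sharpness proxy) inside a win x win window.
    """
    stack = np.stack([img.astype(float) for img in images])  # (N, H, W)
    sharp = np.stack([uniform_filter(laplace(f) ** 2, win) for f in stack])
    best = sharp.argmax(axis=0)                    # per-pixel sharpest frame
    return np.take_along_axis(stack, best[None], axis=0)[0]
```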
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies for sea target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality, and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture, and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background along the edge of the coastline. Based on this multi-Gaussian sea background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain an accurate sea-land segmentation. Subjective visual comparison of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability, and strong anti-disturbance ability.
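The 3D feature vector is straightforward to assemble; below is a hedged sketch in which local variance stands in for the unspecified "local texture" measure, and the window size is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel
from skimage.filters.rank import entropy
from skimage.morphology import disk

def coastline_features(gray_u8, win=9):
    """Per-pixel (local entropy, local texture, local gradient mean).

    gray_u8 : uint8 grayscale image. Returns an (H, W, 3) feature array
    to which a multi-Gaussian sea-background model can be fitted.
    """
    ent = entropy(gray_u8, disk(win // 2))               # local entropy
    img = gray_u8.astype(float)
    mean = uniform_filter(img, win)
    texture = uniform_filter(img ** 2, win) - mean ** 2  # local variance
    grad = np.hypot(sobel(img, 0), sobel(img, 1))
    grad_mean = uniform_filter(grad, win)                # local gradient mean
    return np.stack([ent, texture, grad_mean], axis=-1)
```

A multi-Gaussian background model could then be fitted to the sea-side feature vectors, e.g. with sklearn.mixture.GaussianMixture, and pixels scored by their likelihood under it.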
Kandukuri, Jayanth; Yu, Shuai; Cheng, Bingbing; Bandi, Venugopal; D’Souza, Francis; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong
2017-01-01
Simultaneous imaging of multiple targets (SIMT) in opaque biological tissues is an important goal for molecular imaging in the future. Multi-color fluorescence imaging in deep tissue is a promising technology to reach this goal. In this work, we developed a dual-modality imaging system by combining our recently developed ultrasound-switchable fluorescence (USF) imaging technology with conventional ultrasound (US) B-mode imaging. This dual-modality system can simultaneously image tissue acoustic structure information and multi-color fluorophores in centimeter-deep tissue with comparable spatial resolutions. To conduct USF imaging on the same plane (i.e., the x-z plane) as US imaging, we adopted two 90°-crossed ultrasound transducers with an overlapped focal region, while the US transducer (the third one) was positioned at the center of these two USF transducers. Thus, the axial resolution of USF is close to the lateral resolution, which allows point-by-point USF scanning on the same plane as the US imaging. Both multi-color USF and ultrasound imaging of a tissue phantom were demonstrated. PMID:28165390
Chen, Yuanbo; Li, Hulin; Wu, Dingtao; Bi, Keming; Liu, Chunxiao
2014-12-01
Construction of a three-dimensional (3D) model of the renal tumor facilitated surgical planning and imaging guidance for manual image fusion in laparoscopic partial nephrectomy (LPN) for intrarenal tumors. Fifteen patients with intrarenal tumors underwent LPN between January and December 2012. Computed tomography-based reconstruction of the 3D models of the renal tumors was performed using Mimics 12.1 software. Surgical planning was performed through morphometry and multi-angle visual views of the tumor model. Two-step manual image fusion superimposed the 3D model images onto the 2D laparoscopic images, and the image fusion was verified by intraoperative ultrasound. Imaging-guided laparoscopic hilar clamping and tumor excision were performed. Manual fusion time, patient demographics, surgical details, and postoperative treatment parameters were analyzed. The reconstructed 3D tumor models accurately represented the patients' physiological anatomical landmarks, and the surgical planning markers were marked successfully. Manual image fusion was flexible and feasible, with a fusion time of 6 min (5-7 min). All surgeries were completed laparoscopically. The median tumor excision time was 5.4 min (3.5-10 min), whereas the median warm ischemia time was 25.5 min (16-32 min). Twelve patients (80%) demonstrated renal cell carcinoma on final pathology, and all surgical margins were negative. No tumor recurrence was detected after a median follow-up of 1 year (3-15 months). Surgical planning and two-step manual image fusion based on a 3D model of the renal tumor facilitated imaging-guided tumor resection with negative margins in LPN for intrarenal tumors. The approach is promising and moves us one step closer to imaging-guided surgery.
Energy-resolved neutron imaging for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Moran, M. J.; Haan, S. W.; Hatchett, S. P.; Izumi, N.; Koch, J. A.; Lerche, R. A.; Phillips, T. W.
2003-03-01
The success of the National Ignition Facility program will depend on diagnostic measurements which study the performance of inertial confinement fusion (ICF) experiments. Neutron yield, fusion-burn time history, and images are examples of important diagnostics. Neutron and x-ray images will record the geometries of compressed targets during the fusion-burn process. Such images provide a critical test of the accuracy of numerical modeling of ICF experiments. They also can provide valuable information in cases where experiments produce unexpected results. Although x-ray and neutron images provide similar data, they do have significant differences. X-ray images represent the distribution of high-temperature regions where fusion occurs, while neutron images directly reveal the spatial distribution of fusion-neutron emission. X-ray imaging has the advantage of a relatively straightforward path to the imaging system design. Neutron imaging, by using energy-resolved detection, offers the intriguing advantage of being able to provide independent images of burning and nonburning regions of the nuclear fuel. The usefulness of energy-resolved neutron imaging depends on both the information content of the data and on the quality of the data that can be recorded. The information content will relate to the characteristic neutron spectra that are associated with emission from different regions of the source. Numerical modeling of ICF fusion burn will be required to interpret the corresponding energy-dependent images. The exercise will be useful only if the images can be recorded with sufficient definition to reveal the spatial and energy-dependent features of interest. Several options are being evaluated with respect to the feasibility of providing the desired simultaneous spatial and energy resolution.
Morphological filtering and multiresolution fusion for mammographic microcalcification detection
NASA Astrophysics Data System (ADS)
Chen, Lulin; Chen, Chang W.; Parker, Kevin J.
1997-04-01
Mammographic images are often of relatively low contrast and poor sharpness, with non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas considered to be non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through wavelet transforms to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region growing techniques are used at each resolution level, and the parent-child relations between resolution levels are adopted to make the final detection decision. FROC performance is computed from tests on the Nijmegen database.
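The prewhitening step corresponds to a gray-scale top-hat: subtracting a morphological opening removes every structure larger than the structuring element, leaving only small bright residues. A minimal sketch (the radius is illustrative):

```python
from skimage.morphology import white_tophat, disk

def prewhiten(mammogram, radius=7):
    """Gray-scale morphological prewhitening sketch.

    The white top-hat (image minus its opening) suppresses background
    and clutter larger than the disk, so small bright spots such as
    microcalcifications stand out; the radius should exceed the largest
    expected calcification size in pixels.
    """
    return white_tophat(mammogram, disk(radius))
```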
Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A
2011-03-29
During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI and classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. In total, 605 systematic biopsy cores were analyzed in 50 patients, of whom 20 patients had 56 positive cores. MRI identified 34 of the 56 positive cores. Overall sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on the correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast-enhanced MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity, consistent with results obtained using radical prostatectomy specimens.
Statistical label fusion with hierarchical performance models
Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.
2014-01-01
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach mitigates the potential problems of the curse of dimensionality and excessive computation time. Experiments evaluated on two standard data sets demonstrate the better classification performance offered by this framework.
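At its simplest, the decision fusion amounts to combining the two classifiers' class-probability outputs; the paper's exact rule is not spelled out here, so a weighted sum is used as a stand-in, and all names are hypothetical.

```python
import numpy as np

def decision_fuse(p_hsi, p_vis, w=0.5):
    """Toy decision-level fusion of the HSI-CNN and VIS-BSIF branches.

    p_hsi, p_vis : (n_samples, n_classes) class-probability matrices.
    w            : weight given to the spectral (HSI) branch.
    """
    return np.argmax(w * p_hsi + (1.0 - w) * p_vis, axis=1)
```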
Image fusion pitfalls for cranial radiosurgery.
Jonker, Benjamin P
2013-01-01
Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and the stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used to delineate the target and fiducials, there may be significant advantages to using additional image sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging such as catheter-based angiography, diffusion tensor imaging-based fiber tracking, and positron emission tomography, in order to define the target and surrounding critical structures more accurately. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans are not (although they resolve the target more poorly in most cases). Thus image fusion allows the definition of stereotactic space to proceed from the geometrically accurate CT images while at the same time using MRI to define the target. The use of image fusion carries a risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls.
Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.
Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto
2016-04-01
MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach: using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT and combination of the individual segmentations by label fusion. We have compared the Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape-Based Averaging (SBA), and Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap of 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR.
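Of the compared combiners, majority voting is the simplest and anchors the rest; a minimal sketch for integer label maps already registered to the target space is shown below (STAPLE, SBA, and SIMPLE replace this flat vote with performance- or shape-weighted variants).

```python
import numpy as np

def majority_vote(labelmaps):
    """Majority-voting label fusion sketch.

    labelmaps : (n_atlases, ...) integer array of candidate segmentations
                propagated to the target space.
    Returns the per-voxel most frequent label.
    """
    labelmaps = np.asarray(labelmaps)
    n_labels = int(labelmaps.max()) + 1
    votes = np.stack([(labelmaps == lab).sum(axis=0)
                      for lab in range(n_labels)])
    return votes.argmax(axis=0)
```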
Super-resolution mapping using multi-viewing CHRIS/PROBA data
NASA Astrophysics Data System (ADS)
Dwivedi, Manish; Kumar, Vinay
2016-04-01
High-spatial-resolution remote sensing (RS) data provide detailed information that supports high-definition visual analysis of earth surface features, as well as improved information extraction at fine scales. To improve the spatial resolution of coarser resolution RS data, the Super Resolution Reconstruction (SRR) technique, which operates on multi-angular image sequences, has become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining better land cover classification. Several SR approaches were chosen for this study: Projection onto Convex Sets (POCS), Robust SR, Iterative Back Projection (IBP), Non-Uniform Interpolation, and Structure-Adaptive Normalized Convolution (SANC). Subjective assessment through visual interpretation shows substantial improvement in land cover detail. Quantitative measures, including peak signal-to-noise ratio and structural similarity, are used to evaluate image quality. The SANC SR technique, using Vandewalle's algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then applied both to the SRR output and to data resampled to 6 m spatial resolution using bicubic interpolation. A comparative analysis between the classifications of the bicubic-interpolated and SR-derived CHRIS/PROBA images shows that the SR-derived classification improves overall accuracy by 10-12%. The results demonstrate that SR methods can improve the spatial detail of multi-angle images as well as the classification accuracy.
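Among the compared approaches, IBP admits a particularly compact sketch. The version below assumes an integer zoom factor and inputs that are already co-registered (real CHRIS/PROBA sequences need the registration step, e.g. Vandewalle's algorithm, first); the blur model and step size are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def ibp_superresolve(low_imgs, factor, n_iter=20, step=0.5):
    """Iterative Back Projection (IBP) sketch for registered inputs."""
    hr = zoom(np.mean(low_imgs, axis=0), factor, order=3)  # initial guess
    for _ in range(n_iter):
        for lr in low_imgs:
            # simulate the imaging model: blur, then decimate
            sim = zoom(gaussian_filter(hr, 1.0), 1.0 / factor, order=3)
            hr += step * zoom(lr - sim, factor, order=3)   # back-project residual
    return hr
```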
NASA Astrophysics Data System (ADS)
Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin
2017-04-01
An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
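The wavelet preprocessing that feeds the dictionary training is a single-level 2-D DWT; a minimal sketch with PyWavelets (the wavelet choice and image size are placeholders):

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)  # placeholder for one spectral band
low, (horizontal, vertical, diagonal) = pywt.dwt2(image, 'db4')
# 'low' is the low-frequency approximation subband; the horizontal,
# vertical and diagonal detail subbands supply the features used for
# training, rather than raw image patches.
```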
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of multi-source image resolution in the satellite visible-light, multispectral, and hyperspectral domains, high resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing imagery, ground target segmentation, feature extraction, and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical-object (vehicle) classification generation, nonparametric density estimation, mean-shift segmentation, multi-scale corner detection, and template-based local shape matching. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
Efficient Multi-Atlas Registration using an Intermediate Template Image
Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.
2017-01-01
Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3–4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects. PMID:28943702
Multi-focus image fusion algorithm using NSCT and MPCNN
NASA Astrophysics Data System (ADS)
Liu, Kang; Wang, Lianli
2018-04-01
Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse coupled neural network (MPCNN), this paper proposes an effective image fusion method. First, the source images are decomposed into low-frequency and high-frequency components using NSCT, and the low-frequency components are fused by regional statistical rules. For the high-frequency components, the spatial frequency (SF) is calculated and input into the MPCNN model, and the relevant coefficients are selected according to the fire-mapping image of the MPCNN. Finally, the fused image is reconstructed by inverse transformation of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the proposed method achieves an improvement in both human visual perception and objective evaluation, showing that it is effective and practical.
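The spatial frequency measure fed to the MPCNN can be sketched as follows; a minimal numpy implementation of the standard SF definition (RMS row and column gradient energy), assuming a 2D coefficient block as input:

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Standard spatial frequency: sqrt(RF^2 + CF^2), where RF and CF
    are the RMS row-wise and column-wise first differences."""
    block = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```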
Evaluation of MRI-US Fusion Technology in Sports-Related Musculoskeletal Injuries.
Wong-On, Manuel; Til-Pérez, Lluís; Balius, Ramón
2015-06-01
A combination of magnetic resonance imaging (MRI) with real-time high-resolution ultrasound (US), known as fusion imaging, may improve visualization of musculoskeletal (MSK) sports medicine injuries. The aim of this study was to evaluate the applicability of MRI-US fusion technology in MSK sports medicine. This study was conducted by the medical services of FC Barcelona. The participants included volunteers and referred athletes with symptomatic and asymptomatic MSK injuries. All cases underwent MRI, which was loaded into the US system for manual registration on the live US image, followed by a fusion imaging examination. After every test, an evaluation form was completed covering advantages, disadvantages, and anatomic fusion landmarks. From November 2014 to March 2015, we evaluated 20 subjects who underwent fusion imaging: 5 non-injured volunteers and 15 injured athletes (11 symptomatic and 4 asymptomatic), age range 16-50 years, mean 22. We describe some of the anatomic landmarks used to guide fusion in different regions. This technology allowed us to examine muscle and tendon injuries simultaneously in US and MRI and to correlate both techniques, especially for low-grade muscular injuries. It also helped compensate for the limited field of view of US and improved spatial orientation for cartilage, labrum, and meniscal injuries. However, a high-quality MRI image is essential for achieving an adequate fusion image, and 3D sequences need to be added to MRI protocols to improve navigation. The combination of real-time MRI-US image fusion and navigation is relatively easy to perform and is helping to improve the understanding of MSK injuries. However, it requires specific skills in MSK imaging and still needs further research in sports-related injuries.
Quality evaluation of different fusion techniques applied on Worldview-2 data
NASA Astrophysics Data System (ADS)
Vaiopoulos, Aristides; Nikolakopoulos, Konstantinos G.
2015-10-01
In the current study, a Worldview-2 image was used for fusion quality assessment. The bundle image was collected in July 2014 over the Araxos area in the western Peloponnese. Worldview-2 is the first satellite that collects a panchromatic (Pan) image and an 8-band multispectral (MS) image at the same time. The Pan data have a spatial resolution of 0.46 m while the MS data have a spatial resolution of 1.84 m. In contrast to the respective Pan bands of Ikonos and Quickbird, which range between 0.45 and 0.90 micrometers, the Worldview Pan band is narrower and ranges between 0.45 and 0.8 micrometers. The MS bands include the four conventional visible and near-infrared bands common to multispectral satellites such as Ikonos, Quickbird, Geoeye, and Landsat-7, plus four new bands. Thus, it is of interest to assess commonly used fusion algorithms on Worldview-2 data. Twelve fusion techniques, namely Ehlers, Gram-Schmidt, Color Normalized, High Pass Filter, Hyperspherical Color Space, Local Mean Matching (LMM), Local Mean and Variance Matching (LMVM), Modified IHS (ModIHS), Pansharp, Pansharp2, PCA, and Wavelet, were used for the fusion of the Worldview-2 panchromatic and multispectral data. The visual result, the statistical parameters, and different quality indexes such as ERGAS, Q, and entropy difference were examined, and the results are presented. The quality control was evaluated in both the spectral and spatial domains.
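For reference, the ERGAS index mentioned above can be computed as sketched below; a minimal numpy version of the standard definition, assuming `fused` and `reference` are band-stacked arrays and `ratio` is the Pan-to-MS pixel size ratio:

```python
import numpy as np

def ergas(fused: np.ndarray, reference: np.ndarray, ratio: float) -> float:
    """ERGAS (relative dimensionless global error in synthesis).
    fused, reference: arrays of shape (bands, H, W).
    ratio: h/l, e.g. 0.46/1.84 = 0.25 for Worldview-2."""
    terms = []
    for f, r in zip(fused.astype(float), reference.astype(float)):
        rmse2 = np.mean((f - r) ** 2)          # per-band squared RMSE
        terms.append(rmse2 / (np.mean(r) ** 2))
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))
```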
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images of a scene can be captured simultaneously by various sensors. However, in many scenarios no single sensor gives the complete picture. Image fusion is an important approach to this problem: it produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)), mutual information, etc.
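The mutual information criterion used in such evaluations can be sketched as follows; a minimal histogram-based estimate between a source image and the fused image, with the number of bins as an assumed parameter:

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```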
NASA Astrophysics Data System (ADS)
Gilmore, Mark; Hsu, Scott
2015-11-01
The goal of the Plasma Liner eXperiment (PLX-alpha) at Los Alamos National Laboratory is to establish the viability of creating a spherically imploding plasma liner for MIF and HED applications, using a spherical array of supersonic plasma jets launched by innovative contoured-gap coaxial plasma guns. PLX-alpha experiments will focus in particular on establishing the ram pressure and uniformity scalings of partial and fully spherical plasma liners. To characterize these parameters experimentally, a suite of diagnostics is planned, including multi-camera fast imaging, a 16-channel visible interferometer (upgraded from 8 channels) with a reconfigurable, fiber-coupled front end, and visible and VUV high-resolution and survey spectroscopy. Tomographic reconstruction and data fusion techniques will be used in conjunction with interferometry, imaging, and synthetic diagnostics from modeling to characterize liner uniformity in 3D. Diagnostic and data analysis design, implementation, and status will be presented. Supported by the Advanced Research Projects Agency - Energy, U.S. Department of Energy.
Fourier domain image fusion for differential X-ray phase-contrast breast imaging.
Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-04-01
X-ray phase-contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize the relevant diagnostic features present in the X-ray attenuation, phase-shift, and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method presents complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement of diagnostic features, details, and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and of the fused image. This assessment validated that the combination of all the relevant diagnostic features contained in the XPC images was present in the fused image as well.
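A Fourier-domain fusion of this kind can be sketched as below. This is a simplified illustration, not the authors' exact algorithm: it takes the low spatial frequencies from the attenuation image and fills in high frequencies from the phase and scattering channels through a Gaussian frequency mask, where the cutoff and channel weights are assumptions:

```python
import numpy as np

def fourier_fusion(attenuation, phase, scatter, cutoff=0.08):
    """Blend three registered signals in the Fourier domain: low
    frequencies from attenuation, high frequencies from phase/scatter."""
    h, w = attenuation.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) / (2 * cutoff ** 2))  # Gaussian mask
    F = (np.fft.fft2(attenuation) * lowpass
         + np.fft.fft2(0.5 * phase + 0.5 * scatter) * (1 - lowpass))
    return np.real(np.fft.ifft2(F))
```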
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to Digital Imaging and Communications in Medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway combines data from the hospital information system, the image fusion results, and information generated by the gateway itself to constitute new DICOM objects. All the mandatory tags defined for a standard DICOM object are generated in the gateway. The gateway generates two series of Service-Object Pair (SOP) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances, so every newly generated MR image exactly fits one of the reconstructed PET images. The DICOM images are stored to the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When these images are retrieved and viewed in standard DICOM viewing systems, both can be viewed at the same anatomic location. This system is useful for precise diagnosis and therapy.
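Creating a minimal DICOM object of this kind can be sketched with the `pydicom` library; this is a toy secondary-capture example with a handful of tags, not the gateway's full tag set, and the UIDs and pixel data here are placeholders:

```python
import numpy as np
import pydicom
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

def make_secondary_capture(pixels: np.ndarray) -> Dataset:
    """Wrap a 16-bit grayscale frame in a minimal DICOM dataset."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = pydicom.uid.SecondaryCaptureImageStorage
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.SOPClassUID = meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.SeriesInstanceUID = generate_uid()
    ds.StudyInstanceUID = generate_uid()
    ds.Modality = 'OT'  # "other"; a real gateway would carry MR/PT series
    ds.Rows, ds.Columns = pixels.shape
    ds.BitsAllocated = ds.BitsStored = 16
    ds.HighBit = 15
    ds.PixelRepresentation = 0
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = 'MONOCHROME2'
    ds.PixelData = pixels.astype(np.uint16).tobytes()
    return ds
```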
NASA Astrophysics Data System (ADS)
Hayashida, K.; Kawabata, T.; Nakajima, H.; Inoue, S.; Tsunemi, H.
2017-10-01
The best angular resolution of 0.5 arcsec is realized with the X-ray mirror onboard the Chandra satellite. Nevertheless, comparable or better resolution is anticipated to be difficult to achieve in the near future; in fact, the goal of the ATHENA telescope is 5 arcsec angular resolution. We propose a new type of X-ray interferometer consisting simply of an X-ray absorption grating and an X-ray spectral imaging detector, such as X-ray CCDs or new-generation CMOS detectors, stacking the multiple images created by Talbot interference (Hayashida et al. 2016). This system, which we call the Multi Image X-ray Interferometer Module (MIXIM), enables arcsecond resolution with very small satellites of 50 cm size, and sub-arcsecond resolution with small satellites. We have performed ground experiments in which a micro-focus X-ray source, a grating with a pitch of 4.8 μm, and a 30 μm pixel detector were placed about 1 m from the source. We obtained the self-image (interferometric fringe) of the grating for a wide band pass around 10 keV. This result corresponds to about 2 arcsec resolution for parallel-beam incidence. MIXIM is useful for high angular resolution imaging of relatively bright sources. Searching for supermassive black holes and resolving AGN tori would be targets for this system.
Multisensor data fusion for IED threat detection
NASA Astrophysics Data System (ADS)
Mees, Wim; Heremans, Roel
2012-10-01
In this paper we present the multi-sensor registration and fusion algorithms that were developed for a force protection research project in order to detect threats against military patrol vehicles. The fusion is performed at object level, using a hierarchical evidence aggregation approach. It first uses expert domain knowledge about the features that characterize the detected threats, implemented in the form of a fuzzy expert system. The next level consists of fusing intra-sensor and inter-sensor information, using an ordered weighted averaging operator. The object-level fusion between candidate threats that are detected asynchronously on a moving vehicle, by sensors with different imaging geometries, requires an accurate sensor-to-world coordinate transformation. This image registration is also discussed in this paper.
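An ordered weighted averaging (OWA) operator of the kind mentioned above can be sketched as follows; a minimal implementation in which the example weight vector is an assumption (in OWA, weights attach to rank positions, not to particular sensors):

```python
import numpy as np

def owa(scores, weights) -> float:
    """Ordered weighted average: sort scores in descending order, then
    take the dot product with rank-position weights (summing to 1)."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert scores.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(scores @ weights)

# Example: a weighting that emphasizes the strongest evidence.
print(owa([0.9, 0.4, 0.7], [0.5, 0.3, 0.2]))
```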
NASA Astrophysics Data System (ADS)
Ogawa, Masahiko; Shidoji, Kazunori
2011-03-01
High-resolution stereoscopic images are effective for use in virtual reality and teleoperation systems. However, the higher the image resolution, the higher the cost of computer processing and communication. To reduce this cost, numerous earlier studies have suggested the use of multi-resolution images, which have high resolution in regions of interest and low resolution elsewhere. However, observers can perceive unpleasant sensations and incorrect depth because they can see the low-resolution areas in their field of vision. In this study, we conducted an experiment on the relationship between the viewing field and the perception of image resolution, and determined thresholds of image-resolution perception for various positions in the viewing field. The results showed that participants could not distinguish between the high-resolution stimulus and a stimulus reduced to 63 ppi at positions more than 8 deg from the gaze point. Moreover, at positions shifted further, to 11 and 13 deg from the gaze point, participants could not distinguish between the high-resolution stimulus and stimuli whose resolution densities were reduced to 42 and 25 ppi, respectively. Hence, we propose a composition of multi-resolution images in which observers do not perceive unpleasant sensations or incorrect depth, while achieving data reduction (compression).
NASA Astrophysics Data System (ADS)
Ebrahimi Orimi, H.; Esmaeili, M.; Refahi Oskouei, A.; Mirhadizadehd, S. A.; Tse, P. W.
2017-10-01
Condition monitoring of rotary devices such as helical gears is an issue of great significance in industrial projects. This paper introduces a feature extraction method for gear fault diagnosis using the wavelet packet transform, owing to its higher frequency resolution. In this investigation, the mother wavelet Daubechies 10 (Db-10) was applied to calculate the coefficient entropy of each of the 32 frequency bands at the 5th decomposition level as features. The peak values of the signal entropies were selected as the applicable features in order to improve frequency band differentiation and reduce the dimension of the feature vectors. Feature extraction is followed by a fusion network in which four multi-layer perceptron networks with different structures are trained to classify the recorded signals as healthy or faulty. The outputs of the fusion network are more robust than those of the individual perceptron networks. The results provided by the fusion network indicate classification accuracies of 98.88% and 97.95% for the healthy and faulty classes, respectively.
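The wavelet-packet entropy features can be sketched with `pywt` as below; a minimal version computing the Shannon entropy of the coefficients in each 5th-level band of a 1D vibration signal, where this particular entropy normalization is one common choice and an assumption here:

```python
import numpy as np
import pywt

def band_entropies(signal: np.ndarray, level: int = 5) -> np.ndarray:
    """Shannon entropy of each wavelet packet band at the given level
    (2**level = 32 bands for level 5), using the Db-10 mother wavelet."""
    wp = pywt.WaveletPacket(data=signal, wavelet='db10', maxlevel=level)
    entropies = []
    for node in wp.get_level(level, order='natural'):
        c = np.abs(node.data) ** 2
        p = c / (c.sum() + 1e-12)            # normalize to a distribution
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    return np.array(entropies)
```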
Time-of-Flight Microwave Camera.
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-05
Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and to unrecoverable "stealth" regions caused by the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency-modulated continuous-wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
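The quoted time resolution maps to path length through the speed of light, and the swept bandwidth bounds the achievable range resolution via the textbook FMCW relation c/(2B); a quick check under those standard formulas:

```python
C = 3.0e8          # speed of light, m/s

dt = 200e-12       # 200 ps time resolution
print(C * dt)      # 0.06 m = 6 cm optical path, as quoted above

B = 12e9 - 8e9     # 4 GHz swept bandwidth (X band, 8-12 GHz)
print(C / (2 * B)) # 0.0375 m theoretical FMCW range resolution
```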
Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar
2014-01-01
Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells, in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of a network defined using the image pixels as nodes and the similarity between the FLTs of the pixels as edges. In the resulting segmentation, low network resolution leads to larger segments and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to decrease consistently with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method for FLIM image segmentation: at high resolution, the spectral segmentation method introduced noisy segments in its output and was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
Image fusion pitfalls for cranial radiosurgery
Jonker, Benjamin P.
2013-01-01
Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the potential concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging sets such as catheter-based angiography, diffusion tensor imaging-based fiber tracking, and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they resolve the target more poorly in most cases). Thus image fusion allows the definition of stereotactic space to proceed from the geometrically accurate CT images while using MRI to define the target. The use of image fusion is associated with the risk of error introduced by inaccuracies in the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls. PMID:23682338
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on the best fusion method for satellite images from SPOT, QuickBird, Landsat, and so on, but only a few discuss the application to GaoFen-1 satellite images. This paper compares four fusion methods, namely the principal component analysis transform, the Brovey transform, the hue-saturation-value transform, and the Gram-Schmidt transform, from the perspective of preserving the original image's spectral information. The experimental results showed that the images produced by the four fusion methods not only retain the high spatial resolution of the panchromatic band but also keep abundant spectral information. Through comparison and evaluation, the Brovey transform integrates spatial detail well, but its color fidelity is not the best. The brightness and color distortion of the hue-saturation-value transformed image is the largest. The principal component analysis transform performs well in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, renders vegetation edges most distinctly, and yields higher sharpness in the fused image than principal component analysis; it is the most appropriate for GaoFen-1 satellite images over both vegetation and non-vegetation areas. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and imagery.
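Of the four methods compared above, the Brovey transform is the simplest to state; a minimal sketch for the general multi-band case, assuming the multispectral bands have been co-registered and upsampled to the panchromatic grid:

```python
import numpy as np

def brovey(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Brovey pansharpening: scale each multispectral band by the ratio
    of the panchromatic image to the sum of the MS bands.
    ms: (bands, H, W) upsampled multispectral; pan: (H, W)."""
    ms = ms.astype(np.float64)
    total = ms.sum(axis=0) + 1e-12          # avoid division by zero
    return ms * (pan / total)[None, :, :]
```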
Time-resolved wide-field optically sectioned fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dupuis, Guillaume; Benabdallah, Nadia; Chopinaud, Aurélien; Mayet, Céline; Lévêque-Fort, Sandrine
2013-02-01
We present the implementation of a fast wide-field optical sectioning technique called HiLo microscopy on a fluorescence lifetime imaging microscope. HiLo microscopy is based on the fusion of two images, one taken with structured illumination and another with uniform illumination; optically sectioned images are then generated digitally by a fusion algorithm. HiLo images are comparable in quality to confocal images but can be acquired faster and over larger fields of view. We obtain 4D imaging by combining HiLo optical sectioning, time-gated detection, and z-displacement. We characterize the performance of this set-up in terms of 3D spatial resolution and time-resolved capabilities in both fixed- and live-cell imaging modes.
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Hsieh, Jiang; Taha, Basel H.; Vass, Melissa L.; Seamans, John L.; Okerlund, Darin R.
2009-02-01
With the increasing longitudinal detector dimension available in diagnostic volumetric CT, the step-and-shoot scan is becoming popular for cardiac imaging. In comparison to the helical scan, the step-and-shoot scan decouples patient table movement from cardiac gating/triggering, which facilitates cardiac imaging via multi-sector data acquisition, the management of inter-cycle heart beat variation (arrhythmia), and radiation dose efficiency. Ideally, multi-sector data acquisition can improve temporal resolution by a factor equal to the number of sectors (best scenario). In reality, however, the effective temporal resolution is jointly determined by the gantry rotation speed and the patient's heart rate, and may be significantly lower than the ideal, or show no improvement at all (worst scenario). Hence, it is clinically relevant to investigate the behavior of the effective temporal resolution in cardiac imaging with multi-sector data acquisition. In this study, a 5-second cine scan of a porcine heart, spanning 6 cardiac cycles, was acquired. In addition to theoretical analysis and a motion phantom study, the clinical consequences of the effective temporal resolution variation are evaluated qualitatively and quantitatively. Employing a 2-sector image reconstruction strategy, a total of 15 cases (the pairings of the 6 cycles) between the best and worst scenarios are studied, providing informative guidance for the design and optimization of cardiac imaging in volumetric CT with multi-sector data acquisition.
Multi-detector CT imaging in the postoperative orthopedic patient with metal hardware.
Vande Berg, Bruno; Malghem, Jacques; Maldague, Baudouin; Lecouvet, Frederic
2006-12-01
Multi-detector CT imaging (MDCT) has become a routine imaging modality in the assessment of postoperative orthopedic patients with metallic instrumentation, which degrades image quality at MR imaging. This article reviews the physical basis and CT appearance of such metal-related artifacts. It also addresses the clinical value of MDCT in postoperative orthopedic patients, with emphasis on fracture healing, spinal fusion or arthrodesis, and joint replacement. MDCT imaging shows limitations in the assessment of the bone marrow cavity and of the soft tissues, for which MR imaging remains the modality of choice despite metal-related anatomic distortions and signal alterations.
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-01-01
A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to the diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume uses a fine voxel size in the (inner) non-truncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing a minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced the truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel sizes, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.
HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorat, K.; Subrahmanyan, R.; Saripalli, L.
2013-01-01
The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. -67°00'00'' and R.A. 00h59m17s, decl. -67°00'00'', J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may depend on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source-finding algorithms work effectively with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and of correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
Hill, K W; Bitter, M; Delgado-Aparacio, L; Pablant, N A; Beiersdorfer, P; Schneider, M; Widmann, K; Sanchez del Rio, M; Zhang, L
2012-10-01
High resolution (λ/Δλ ∼ 10,000) 1D imaging x-ray spectroscopy using a spherically bent crystal and a 2D hybrid pixel array detector is used worldwide for Doppler measurements of ion-temperature and plasma flow-velocity profiles in magnetic confinement fusion plasmas. Meter-sized plasmas are diagnosed with cm spatial resolution and 10 ms time resolution. This concept can also be used as a diagnostic of small sources, such as inertial confinement fusion plasmas and targets on x-ray light source beam lines, with spatial resolution of micrometers, as demonstrated by laboratory experiments using a 250-μm 55Fe source and by ray-tracing calculations. Throughput calculations agree with measurements and predict detector counts in the range of 10^-8 to 10^-6 times the source x-rays, depending on crystal reflectivity and spectrometer geometry. Results of the lab demonstrations, application of the technique to the National Ignition Facility (NIF), and predictions of performance on NIF will be presented.
High Efficiency Multi-shot Interleaved Spiral-In/Out Acquisition for High Resolution BOLD fMRI
Jung, Youngkyoo; Samsonov, Alexey A.; Liu, Thomas T.; Buracas, Giedrius T.
2012-01-01
Growing demand for high spatial resolution BOLD functional MRI faces the tradeoff between spatial resolution and coverage or temporal resolution, which can be addressed by methods that afford increased acquisition efficiency. Spiral acquisition trajectories have been shown to be superior to the currently prevalent echo-planar imaging in terms of acquisition efficiency, and high spatial resolution can be achieved by employing multiple-shot spiral acquisition. The interleaved spiral in-out trajectory is preferred over spiral-in due to increased BOLD signal CNR and higher acquisition efficiency than that of spiral-out or non-interleaved spiral in/out trajectories (1), but to date the applicability of multi-shot interleaved spiral in-out acquisition for high spatial resolution imaging has not been studied. Herein we propose a multi-shot interleaved spiral in-out acquisition and investigate its applicability for high spatial resolution BOLD fMRI. Images reconstructed from interleaved spiral-in and -out trajectories possess artifacts caused by differences in T2* decay, off-resonance, and k-space errors associated with the two trajectories. We analyze the associated errors and demonstrate that applying conjugate phase reconstruction and spectral filtering can substantially mitigate these image artifacts. After applying these processing steps, the multi-shot interleaved spiral in-out pulse sequence yields high BOLD CNR images at in-plane resolution below 1 × 1 mm while preserving acceptable temporal resolution (4 s) and brain coverage (15 slices of 2 mm thickness). Moreover, this method yields sufficient BOLD CNR at 1.5 mm isotropic resolution for detection of activation in the hippocampus associated with cognitive tasks (Sternberg memory task). The multi-shot interleaved spiral in-out acquisition is a promising technique for high spatial resolution BOLD fMRI applications. PMID:23023395
Wobser, Hella; Wiest, Reiner; Salzberger, Bernd; Wohlgemuth, Walter Alexander; Stroszczynski, Christian; Jung, Ernst-Michael
2014-01-01
To evaluate the treatment response of hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with a new real-time image fusion technique combining contrast-enhanced ultrasound (CEUS) with multi-slice computed tomography (CT), in comparison to conventional post-interventional follow-up. 40 patients with HCC (26 male, ages 46-81 years) were evaluated 24 hours after TACE using CEUS with ultrasound volume navigation and image fusion with CT, compared to non-enhanced CT and follow-up contrast-enhanced CT after 6-8 weeks. Reduction of tumor vascularization to less than 25% was regarded as "successful" treatment, whereas reduction to levels above 25% was considered a "partial" response. Homogeneous lipiodol retention was regarded as successful treatment on non-enhanced CT. Post-interventional image fusion of CEUS with CT was feasible in all 40 patients. In 24 of the 40 patients, post-interventional image fusion with CEUS revealed residual tumor vascularity, which was confirmed by contrast-enhanced CT 6-8 weeks later in all 24. In the other 16 patients, post-interventional image fusion with CEUS indicated successful treatment, but follow-up CT detected residual viable tumor in 6 of them. Non-enhanced CT did not identify any case of treatment failure. Image fusion with CEUS assessed treatment efficacy with a specificity of 100%, a sensitivity of 80%, and a positive predictive value of 1 (negative predictive value 0.63). Image fusion of CEUS with CT allows a reliable, highly specific post-interventional evaluation of embolization response with good sensitivity and without any further radiation exposure. It can detect residual viable tumor at an early stage, enabling close patient monitoring or re-treatment.
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques that provide images of high spatial, spectral, and temporal resolution. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because visual assessment is always subjective and quantitative assessment is done by different criteria; depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
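The QNR protocol named above combines a spectral distortion term D_λ and a spatial distortion term D_s; a minimal sketch of the final aggregation step, assuming the two distortion terms have already been computed and using the common choice of exponents α = β = 1:

```python
def qnr(d_lambda: float, d_s: float, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Quality with No Reference: 1 is ideal (no spectral or spatial
    distortion); both distortion inputs are expected in [0, 1]."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta

print(qnr(0.05, 0.08))  # example distortion values -> 0.874
```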
Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI
Mangalathu-Arumana, Jain; Liebenthal, Einat; Beardsley, Scott A.
2018-01-01
Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected. PMID:29410611
Design of a normal incidence multilayer imaging x-ray microscope.
Shealy, D L; Gabardi, D R; Hoover, R B; Walker, A B; Lindblom, J F; Barbee, T W
1989-01-01
Normal incidence multilayer Cassegrain x-ray telescopes were flown on the Stanford/MSFC Rocket X-Ray Spectroheliograph. These instruments produced high spatial resolution images of the Sun and conclusively demonstrated that doubly reflecting multilayer x-ray optical systems are feasible. The images indicated that aplanatic imaging soft x-ray/EUV microscopes should be achievable using multilayer optics technology. We have designed a doubly reflecting normal incidence multilayer imaging x-ray microscope based on the Schwarzschild configuration. The Schwarzschild microscope utilizes two spherical mirrors with concentric radii of curvature which are chosen such that the third-order spherical aberration and coma are minimized. We discuss the design of the microscope and the results of the optical system ray trace analysis which indicates that diffraction-limited performance with 600 Å spatial resolution should be obtainable over a 1 mm field of view at a wavelength of 100 Å. Fabrication of several imaging soft x-ray microscopes based upon these designs, for use in conjunction with x-ray telescopes and laser fusion research, is now in progress. High resolution aplanatic imaging x-ray microscopes using normal incidence multilayer x-ray mirrors should have many important applications in advanced x-ray astronomical instrumentation, x-ray lithography, biological, biomedical, metallurgical, and laser fusion research.
Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.
2009-01-01
We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small opening angles, or cylindrically shaped pinholes, in a focused multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that are useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators, each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is the imaging of objects such as the mouse brain, with potential applications in molecular imaging.
Fusion of radar and ultrasound sensors for concealed weapons detection
NASA Astrophysics Data System (ADS)
Felber, Franklin S.; Davis, Herbert T., III; Mallon, Charles E.; Wild, Norbert C.
1996-06-01
An integrated radar and ultrasound sensor, capable of remotely detecting and imaging concealed weapons, is being developed. A modified frequency-agile, mine-detection radar is intended to determine, with a high probability of detection at ranges of 1 to 10 m, which individuals in a moving crowd may be concealing metallic or nonmetallic weapons. Within about 1 to 5 m, the active ultrasound sensor is intended to enable a user to identify a concealed weapon on a moving person with a low false-detection rate, achieved through a real-time centimeter-resolution image of the weapon. The goal for sensor fusion is to have the radar acquire concealed weapons at long range and seamlessly hand over tracking data to the ultrasound sensor for high-resolution imaging on a video monitor. We have demonstrated centimeter-resolution ultrasound images of metallic and non-metallic weapons concealed on a human at ranges over 1 m. Processing of the ultrasound images includes filters for noise, frequency, brightness, and contrast. A frequency-agile radar has been developed by JAYCOR under the U.S. Army Advanced Mine Detection Radar Program. The signature of an armed person, detected by this radar, differs appreciably from that of the same person unarmed.
Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.
Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina
2011-10-01
Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and a narrow field of view (FOV). We present an image fusion methodology for converting a video IO recording into a single high-quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographic locations of the reference and mosaic images, and feature points are extracted from both images by a scale-invariant feature transform (SIFT) algorithm, restricted to the overlapped region. Then, the RANSAC method is used to match the feature points of the two images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method detects feature points efficiently and mosaics images automatically.
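The SIFT-plus-RANSAC step can be sketched with OpenCV's Python bindings, mirroring the C++ pipeline described above; a minimal version that assumes 8-bit grayscale inputs and estimates a homography mapping one image onto the other:

```python
import cv2
import numpy as np

def match_homography(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Estimate the homography mapping `mov` onto `ref` using SIFT
    features, ratio-test matching, and RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref, None)
    k2, d2 = sift.detectAndCompute(mov, None)
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(d2, d1, k=2)
            if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```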
Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection
Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei
2012-01-01
Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution. PMID:22213647
Image construction from the IRAS survey and data fusion
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. R.
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector and each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arcseconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 arcminute at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as a result of this more systematic approach.
A multi-focus image fusion method via region mosaicking on Laplacian pyramids
Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong
2018-01-01
In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images captured by a microscope. First, the Sum-Modified-Laplacian is applied to measure the focus of the multi-focus images. Then a density-based region growing algorithm is utilized to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid to supervise region mosaicking on a Laplacian pyramid. The region-level pyramid keeps more of the original information than the pixel level. The experimental results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and reduces the color distortion of the fused images on two datasets. PMID:29771912
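The Sum-Modified-Laplacian focus measure can be sketched as follows; a minimal numpy/scipy version of the usual definition (absolute modified-Laplacian responses summed over a local window), with the window size as an assumed parameter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img: np.ndarray, window: int = 5) -> np.ndarray:
    """Per-pixel SML focus measure: |2I - I_left - I_right| +
    |2I - I_up - I_down|, summed over a window x window neighborhood."""
    I = img.astype(np.float64)
    ml = (np.abs(2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1))
          + np.abs(2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)))
    # uniform_filter computes the local mean; scaling gives the local sum.
    return uniform_filter(ml, size=window) * window * window
```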
Robust mosaics of close-range high-resolution images
NASA Astrophysics Data System (ADS)
Song, Ran; Szymanski, John E.
2008-03-01
This paper presents a robust algorithm, relying only on the information contained within the captured images, for the construction of massive composite mosaics from close-range, high-resolution originals, such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a selection of corners and then employ both the intensity correlation and the spatial correlation between corresponding corners to match them. We then estimate the eight-parameter projective transformation matrix using a genetic algorithm. Lastly, image fusion using a weighted blending function together with intensity compensation produces an effectively seamless mosaic image.
Hierarchical patch-based co-registration of differently stained histopathology slides
NASA Astrophysics Data System (ADS)
Yigitsoy, Mehmet; Schmidt, Günter
2017-03-01
Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block with different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration, by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
Sensor-agnostic photogrammetric image registration with applications to population modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Devin A; Moehl, Jessica J
2016-01-01
Photogrammetric registration of airborne and spaceborne imagery is a crucial prerequisite to many data fusion tasks. While embedded sensor models provide a rough geolocation estimate, these metadata may be incomplete or imprecise. Manual solutions are appropriate for small-scale projects, but for rapid streams of cross-modal, multi-sensor, multi-temporal imagery with varying metadata standards, an automated approach is required. We present a high-performance image registration workflow to address this need. This paper outlines the core development concepts and demonstrates its utility with respect to the 2016 data fusion contest imagery. In particular, Iris ultra-HD video is georeferenced to the Earth surface via registration to DEIMOS-2 imagery, which serves as a trusted control source. Geolocation provides the opportunity to augment the video with spatial context, stereo-derived disparity, spectral sensitivity, change detection, and numerous ancillary geospatial layers. We conclude by leveraging these derivative data layers towards one such fusion application: population distribution modeling.
NASA Astrophysics Data System (ADS)
Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.
2017-03-01
Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images obtained by high-resolution satellites presently provide a lot of information on these indexes, based on data from several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present back several decades. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from the local to the global scale by applying remote sensing imagery. In this work, images of the four bands involved in these vegetation indexes, taken by the Ikonos-2 and Landsat-7 satellites over the same geographic location, are considered in order to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands, as well as on two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). To do so, a multi-fractal analysis was applied to each of these bands and to the two derived indexes. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence on the blue and green bands than on the red and near-infrared bands. The NDVI showed a higher sensitivity to radiometric resolution than the EVI; both were equally affected by spatial resolution. Of the two factors, spatial resolution has the larger impact on the multi-fractal spectrum for all the bands and vegetation indexes. This information should be taken into account when vegetation indexes based on different satellite sensors are obtained.
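The two indexes under study have closed-form band expressions; a minimal sketch using the standard NDVI definition and the widely used MODIS-style EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1), which are an assumption here since the paper's exact constants are not given:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-12)

def evi(nir: np.ndarray, red: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Enhanced Vegetation Index with the common coefficient set."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```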
NASA Astrophysics Data System (ADS)
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurement of three-dimensional refractive index distributions of fusion-spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system, and the image sensor are mounted on an optical breadboard and synchronously rotated around the fixed object. To retrieve high-quality tomographic images, a numerical method is proposed to compensate for the unwanted movements of the object in the lateral, axial, and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results distinctly exhibit the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization-maintaining fiber, a bow-tie polarization-maintaining fiber, and a photonic crystal fiber. In particular, structural distortion in the fusion areas can be observed intuitively, such as the expansion of the stress zones of polarization-maintaining fibers and the collapse of the air holes of photonic crystal fibers.
A Flexible Spatiotemporal Method for Fusing Satellite Images with Different Resolutions
USDA-ARS?s Scientific Manuscript database
Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta ...
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion of data from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, and politics to intelligence and military applications. Data fusion involves the integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical-biological sensors, acoustic sensors, optical warning systems, and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian to evidence-based expert systems. Implementations of data integration range from pure software designs to mixtures of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
Calibration of Fuji BAS-SR type imaging plate as high spatial resolution x-ray radiography recorder
NASA Astrophysics Data System (ADS)
Yan, Ji; Zheng, Jianhua; Zhang, Xing; Chen, Li; Wei, Minxi
2017-05-01
Image plates as x-ray recorders have advantages including reusability, high dynamic range, and a large active area. In this work, the Fuji BAS-SR type image plate combined with the BAS-5000 scanner is calibrated. The fade rate of the image plates was measured using an x-ray diffractometer at different room temperatures; the spectral response was measured using a 241Am sealed radioactive source and fitted with a linear model; the spatial resolution was measured using a micro-focus x-ray tube. The results show that the image plates exhibit an exponential decay fade curve and a double-absorption-edge response curve. The spatial resolution of the image plates is 6.5 lp/mm and 11.9 lp/mm at 50 μm and 25 μm scanner resolution, respectively, and gold grid radiography was collected at 80 lp/mm spatial resolution using the SR-type image plates. The BAS-SR type image plate is thus suitable for high-spatial-resolution, quantitative radiographic work. It can be widely used in high-energy-density physics (HEDP), inertial confinement fusion (ICF), and laboratory astrophysics.
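Fitting an exponential fade model to measured signal-versus-delay data is a one-liner with SciPy; the readings below are hypothetical placeholders (real image-plate fade is often bi-exponential), so this only sketches the shape of the calibration step:

```python
import numpy as np
from scipy.optimize import curve_fit

def fade(t, a, tau, c):
    # Single-exponential fade model: PSL(t) = a * exp(-t / tau) + c
    return a * np.exp(-t / tau) + c

t = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 240.0])    # delay, minutes (hypothetical)
psl = np.array([0.97, 0.90, 0.83, 0.74, 0.66, 0.60])   # normalized signal (hypothetical)

popt, _ = curve_fit(fade, t, psl, p0=(0.4, 60.0, 0.6))
print("amplitude=%.3f  tau=%.1f min  offset=%.3f" % tuple(popt))
```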
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem. Building on the sparse-representation approach to super-resolution reconstruction, a super-resolution algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean-distance computation, improving the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm produces much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
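The sparse-coding step at the heart of such methods selects a few dictionary atoms per patch; a minimal sketch with scikit-learn's orthogonal matching pursuit, where a random dictionary stands in for the learned multi-class dictionaries:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))          # dictionary: 256 atoms for 8x8 patches
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = D[:, [3, 77, 180]] @ np.array([1.0, -0.5, 2.0])   # a 3-sparse test patch

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(D, x)
print(np.nonzero(omp.coef_)[0])         # recovers atoms 3, 77, 180
```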
Quantitative multi-modal NDT data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heideklang, René; Shokouhi, Parisa
2014-02-18
A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, practitioners often resort to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task that involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of eddy current, GMR and thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
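A high-level (decision-level) scheme can be pictured as normalizing each modality's co-registered detection map and combining them before thresholding. The sketch below is one plausible illustration of that general idea under assumed equal weights, not the study's actual fusion rule:

```python
import numpy as np

def high_level_fusion(maps, weights=None, threshold=0.5):
    # maps: list of co-registered 2D detection maps (eddy current, GMR,
    # thermography, ...), higher = more defect-like.
    maps = [(m - m.min()) / (m.max() - m.min() + 1e-12) for m in maps]
    w = np.ones(len(maps)) if weights is None else np.asarray(weights, float)
    score = sum(wi * mi for wi, mi in zip(w, maps)) / w.sum()
    return score > threshold   # fused binary defect map
```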
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based bag-of-words (BOW) model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
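In its simplest fixed-weight form, combining a text kernel and an image kernel amounts to a weighted kernel sum fed to a precomputed-kernel SVM; true MKL learns the weight jointly with the classifier, so the sketch below (with made-up feature dimensions) only conveys the flavor of the approach:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

def combined_kernel(X_text, X_img, beta=0.5):
    # Fixed-weight sum of a text kernel and an image kernel.
    return beta * linear_kernel(X_text) + (1 - beta) * rbf_kernel(X_img)

rng = np.random.default_rng(1)
X_text, X_img = rng.normal(size=(40, 300)), rng.normal(size=(40, 128))
y = rng.integers(0, 2, 40)

K = combined_kernel(X_text, X_img)          # train-vs-train kernel matrix
clf = SVC(kernel="precomputed").fit(K, y)   # test data would need K(test, train)
print(clf.score(K, y))
```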
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data and result in artifacts in reconstructed images. In this study, we propose a low-resolution-image-constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can solve the data truncation problem if the projection data acquired at low resolution are utilized to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from the sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV-minimization-based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows a significant reduction of the artifacts caused by data truncation, providing a practical solution for high-quality and reliable interior tomography in microscopic CT applications.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
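For the intensity part of such a similarity criterion, mutual information can be computed directly from the joint histogram of the two images; the ordinal-feature extension is the paper's contribution and is not reproduced in this standard-textbook sketch:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI from the joint intensity histogram of two equally sized images.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def normalized_mi(a, b, bins=32):
    # Studholme-style normalization: (H(A) + H(B)) / H(A, B).
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (h(pxy.sum(1)) + h(pxy.sum(0))) / h(pxy.ravel())
```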
Study on Hybrid Image Search Technology Based on Texts and Contents
NASA Astrophysics Data System (ADS)
Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.
2018-05-01
Text-based and content-based image search were first studied separately here. For the text-based side, an image feature extraction method integrating statistical and topic features was put forward, in view of the limitation of extracting keywords from statistical word features alone. For the content-based side, a search-by-image method based on multi-feature fusion was put forward, in view of the imprecision of content-based search relying on a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method was then put forward that depends primarily on text-based search and secondarily on content-based search. The feasibility and effectiveness of the hybrid search algorithm were experimentally verified.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
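The flavor of single-level fusion can be conveyed by blending images with smoothed, normalized weight maps (here a local-contrast weight); this is a crude stand-in assuming SciPy, not the paper's exact SSF formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def single_level_fusion(images, sigma=5.0):
    # Per-pixel weights from local contrast (|Laplacian|), smoothed to avoid
    # seams, then normalized so the weights sum to one at every pixel.
    weights = [gaussian_filter(np.abs(laplace(im)), sigma) + 1e-12 for im in images]
    total = sum(weights)
    return sum(w / total * im for w, im in zip(weights, images))
```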
NASA Astrophysics Data System (ADS)
Anderson, Neal T.; Marchisio, Giovanni B.
2012-06-01
Over the last decade DigitalGlobe (DG) has built and launched a series of remote sensing satellites with steadily increasing capabilities: QuickBird, WorldView-1 (WV-1), and WorldView-2 (WV-2). Today, this constellation acquires over 2.5 million km2 of imagery on a daily basis. This paper presents the configuration and performance capabilities of each of these satellites, with emphasis on the unique spatial and spectral capabilities of WV-2. WV-2 employs high-precision star tracker and inertial measurement units to achieve a geolocation accuracy of 5 m Circular Error, 90% confidence (CE90). The native resolution of WV-2 is 0.5 m GSD in the panchromatic band and 2 m GSD in 8 multispectral bands. Four of the multispectral bands match those of the Landsat series of satellites; four new bands enable novel and expanded applications. We are rapidly establishing and refreshing a global database of very high resolution (VHR) 8-band multispectral imagery. Control moment gyroscopes (CMGs) on both WV-1 and WV-2 improve collection capacity and provide the agility to capture multi-angle sequences in rapid succession. These capabilities result in a rich combination of image features that can be exploited to develop enhanced monitoring solutions. Algorithms for interpretation and analysis can leverage: 1) broader and more continuous spectral coverage at 2 m resolution; 2) textural and morphological information from the 0.5 m panchromatic band; 3) ancillary information from stereo and multi-angle collects, including high precision digital elevation models; 4) frequent revisits and time-series collects; and 5) the global reference image archives. We introduce the topic of creative fusion of image attributes, as this provides a unifying theme for many of the papers in this WV-2 Special Session.
Hayworth, Kenneth J.; Morgan, Josh L.; Schalek, Richard; Berger, Daniel R.; Hildebrand, David G. C.; Lichtman, Jeff W.
2014-01-01
The automated tape-collecting ultramicrotome (ATUM) makes it possible to collect large numbers of ultrathin sections quickly—the equivalent of a petabyte of high resolution images each day. However, even high throughput image acquisition strategies generate images far more slowly (at present ~1 terabyte per day). We therefore developed WaferMapper, a software package that takes a multi-resolution approach to mapping and imaging select regions within a library of ultrathin sections. This automated method selects and directs imaging of corresponding regions within each section of an ultrathin section library (UTSL) that may contain many thousands of sections. Using WaferMapper, it is possible to map thousands of tissue sections at low resolution and target multiple points of interest for high resolution imaging based on anatomical landmarks. The program can also be used to expand previously imaged regions, acquire data under different imaging conditions, or re-image after additional tissue treatments. PMID:25018701
Picosecond imaging of inertial confinement fusion plasmas using electron pulse-dilation
NASA Astrophysics Data System (ADS)
Hilsabeck, T. J.; Nagel, S. R.; Hares, J. D.; Kilkenny, J. D.; Bell, P. M.; Bradley, D. K.; Dymoke-Bradshaw, A. K. L.; Piston, K.; Chung, T. M.
2017-02-01
Laser driven inertial confinement fusion (ICF) plasmas typically have burn durations on the order of 100 ps. Time resolved imaging of the x-ray self emission during the hot spot formation is an important diagnostic tool which gives information on implosion symmetry, transient features and stagnation time. Traditional x-ray gated imagers for ICF use microchannel plate detectors to obtain gate widths of 40-100 ps. The development of electron pulse-dilation imaging has enabled a 10X improvement in temporal resolution over legacy instruments. In this technique, the incoming x-ray image is converted to electrons at a photocathode. The electrons are accelerated with a time-varying potential that leads to temporal expansion as the electron signal transits the tube. This expanded signal is recorded with a gated detector and the effective temporal resolution of the composite system can be as low as several picoseconds. An instrument based on this principle, known as the Dilation X-ray Imager (DIXI) has been constructed and fielded at the National Ignition Facility. Design features and experimental results from DIXI will be presented.
Time-of-Flight Microwave Camera
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-01-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
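Two textbook relations tie the numbers above together (a back-of-the-envelope check, not the authors' calibration): the FMCW depth resolution is c/(2B), and a time resolution multiplied by c gives the free-space path it spans.

```python
c = 3.0e8                  # speed of light, m/s
B = 12e9 - 8e9             # 4 GHz sweep across X band (8 GHz-12 GHz)
print("FMCW range resolution: %.2f cm" % (100 * c / (2 * B)))        # ~3.75 cm
print("200 ps corresponds to %.1f cm of path" % (100 * c * 200e-12)) # 6.0 cm
```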
Cox, Benjamin L; Mackie, Thomas R; Eliceiri, Kevin W
2015-01-01
Multi-modal imaging approaches to tumor metabolism that provide improved specificity, physiological relevance and spatial resolution would improve the diagnosis of tumors and the evaluation of tumor progression. Currently, the molecular probe FDG, glucose fluorinated with 18F at the 2-carbon, is the primary metabolic approach for clinical diagnostics with PET imaging. However, PET lacks the resolution necessary to yield intratumoral distributions of deoxyglucose at the cellular level. Multi-modal imaging could address this problem, but requires the development of new glucose analogs that are better suited for other imaging modalities. Several such analogs have been created and are reviewed here. Also reviewed are several multi-modal imaging studies that attempt to shed light on the cellular distribution of glucose analogs within tumors. Some of these studies were performed in vitro, while others were performed in vivo, in animal models. The results from these studies reveal a visualization gap between the in vitro and in vivo studies that, if closed, could enable the early detection of tumors, high-resolution monitoring of tumors during treatment, and greater accuracy in the assessment of different imaging agents. PMID:25625022
Motion immune diffusion imaging using augmented MUSE (AMUSE) for high-resolution multi-shot EPI
Guhaniyogi, Shayan; Chu, Mei-Lan; Chang, Hing-Chiu; Song, Allen W.; Chen, Nan-kuei
2015-01-01
Purpose: To develop new techniques for reducing the effects of microscopic and macroscopic patient motion in diffusion imaging acquired with high-resolution multi-shot EPI. Theory: The previously reported Multiplexed Sensitivity Encoding (MUSE) algorithm is extended to account for macroscopic pixel misregistrations as well as motion-induced phase errors in a technique called Augmented MUSE (AMUSE). Furthermore, to obtain more accurate quantitative DTI measures in the presence of subject motion, we also account for the altered diffusion encoding among shots arising from macroscopic motion. Methods: MUSE and AMUSE were evaluated on simulated and in vivo motion-corrupted multi-shot diffusion data. Evaluations were made both on the resulting imaging quality and estimated diffusion tensor metrics. Results: AMUSE was found to reduce image blurring resulting from macroscopic subject motion compared to MUSE, but yielded inaccurate tensor estimations when neglecting the altered diffusion encoding. Including the altered diffusion encoding in AMUSE produced better estimations of diffusion tensors. Conclusion: The use of AMUSE allows for improved image quality and diffusion tensor accuracy in the presence of macroscopic subject motion during multi-shot diffusion imaging. These techniques should facilitate future high-resolution diffusion imaging. PMID:25762216
Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei
2014-01-01
Purpose: We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion weighted imaging (DWI). Methods: First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results: The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion: The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial-resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000
Diagnosing magnetized liner inertial fusion experiments on Z
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Stephanie B.; Gomez, Matthew R.; Sefkow, Adam B.
The Magnetized Liner Inertial Fusion experiments performed at Sandia's Z facility have demonstrated significant thermonuclear fusion neutron yields (~10¹² DD neutrons) from multi-keV deuterium plasmas inertially confined by slow (~10 cm/μs), stable, cylindrical implosions. Moreover, effective magnetic confinement of charged fusion reactants and products is signaled by high secondary DT neutron yields above 10¹⁰. Further analysis of extensive power, imaging, and spectroscopic x-ray measurements provides a detailed picture of ~3 keV temperatures, 0.3 g/cm³ densities, gradients, and mix in the fuel and liner over the 1–2 ns stagnation duration.
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
NASA Astrophysics Data System (ADS)
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR. PMID:26781194
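The core of the pole estimator can be sketched in a few lines of numpy: build two shifted Hankel matrices from the samples, truncate the SVD to the model order for noise robustness, and take the eigenvalues of the resulting pencil. This is a generic matrix pencil implementation for a single 1D signal, not the paper's full multiband/ICP pipeline:

```python
import numpy as np

def matrix_pencil_poles(y, M, L=None):
    # Estimate M poles of y[n] = sum_k a_k z_k^n via the matrix pencil method.
    N = len(y)
    L = L or N // 2                        # pencil parameter, M <= L <= N - M
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]           # shifted Hankel pair
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, Vh = U[:, :M], s[:M], Vh[:M]     # rank-M truncation (noise filtering)
    A = np.diag(1.0 / s) @ U.conj().T @ Y1 @ Vh.conj().T
    return np.linalg.eigvals(A)            # the M signal poles

n = np.arange(64)
y = 1.0 * np.exp(1j * 0.9 * n) + 0.7 * np.exp(1j * 2.1 * n)
print(np.sort(np.angle(matrix_pencil_poles(y, M=2))))   # ~[0.9, 2.1]
```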
NASA Astrophysics Data System (ADS)
Davis, Brynmor J.
Fluorescence microscopy is an important and ubiquitous tool in biological imaging due to the high specificity with which fluorescent molecules can be attached to an organism and the subsequent nondestructive in-vivo imaging allowed. Focused-light microscopies allow three-dimensional fluorescence imaging but their resolution is restricted by diffraction. This effect is particularly limiting in the axial dimension as the diffraction-limited focal volume produced by a lens is more extensive along the optical axis than perpendicular to it. Approaches such as confocal microscopy and 4Pi microscopy have been developed to improve the axial resolution. Spectral Self-Interference Fluorescence Microscopy (SSFM) is another high-axial-resolution technique and is the principal subject of this dissertation. Nanometer-precision localization of a single fluorescent layer has been demonstrated using SSFM. This accuracy compares favorably with the axial resolutions given by confocal and 4Pi systems at similar operating parameters (these resolutions are approximately 350nm and 80nm respectively). This theoretical work analyzes the expected performance of the SSFM system when imaging a general object, i.e. an arbitrary fluorophore density function rather than a single layer. An existing model of SSFM is used in simulations to characterize the system's resolution. Several statistically-based reconstruction methods are applied to show that the expected resolution for SSFM is similar to 4Pi microscopy for a general object but does give very high localization accuracy when the object is known to consist of a limited number of layers. SSFM is then analyzed in a linear systems framework and shown to have strong connections, both physically and mathematically, to a multi-channel 4Pi microscope. Fourier-domain analysis confirms that SSFM cannot be expected to outperform this multi-channel 4Pi instrument. Differences between the channels in spatial-scanning, multi-channel microscopies are then exploited to show that such instruments can operate at a sub-Nyquist scanning rate but still produce images largely free of aliasing effects. Multi-channel analysis is also used to show how light typically discarded in confocal and 4Pi systems can be collected and usefully incorporated into the measured image.
Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan
2017-01-01
In this paper, an irregular-displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor (CMOS) imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the low-resolution (LR) images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm². PMID:29657866
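A naive version of the multi-frame pixel super-resolution step registers each low-resolution frame to a reference, interpolates onto the fine grid, undoes the shift, and averages; the sketch assumes scikit-image/SciPy and replaces the paper's hybrid feature-based plus continuous optimization with plain cross-correlation:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, zoom
from skimage.registration import phase_cross_correlation

def shift_and_add_sr(frames, factor=4):
    # frames: list of equally sized LR images of the same scene, small shifts.
    ref = frames[0]
    acc = np.zeros((ref.shape[0] * factor, ref.shape[1] * factor))
    for f in frames:
        (dy, dx), _, _ = phase_cross_correlation(ref, f, upsample_factor=4 * factor)
        up = zoom(f, factor, order=3)                    # interpolate to the HR grid
        acc += nd_shift(up, (dy * factor, dx * factor))  # undo the displacement
    return acc / len(frames)
```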
Fusion of Geophysical Images in the Study of Archaeological Sites
NASA Astrophysics Data System (ADS)
Karamitrou, A. A.; Petrou, M.; Tsokas, G. N.
2011-12-01
This paper presents results from different fusion techniques between geophysical images from different modalities, combining them into one image with higher information content than the two original images independently. The resultant image is useful for the detection and mapping of buried archaeological relics. The examined archaeological area is situated in the Kampana site (NE Greece) near the ancient theater of Maronia city. Archaeological excavations have revealed an ancient theater, an aristocratic house and the temple of the ancient Greek god Dionysus. Numerous ceramic objects found in the broader area indicated the probability of buried urban structures. To accurately locate and map the latter, geophysical measurements were performed with the magnetic method (vertical gradient of the magnetic field) and the electrical method (apparent resistivity). We performed a semi-stochastic pixel-based registration method between the geophysical images in order to finely register them, correcting the local spatial offsets produced by the use of hand-held devices. After this procedure we applied three different fusion approaches to the registered images. Image fusion is a relatively new technique that not only allows the integration of different information sources, but also takes advantage of the spatial and spectral resolution as well as the orientation characteristics of each image. We used three different fusion techniques: fusion with mean values, with wavelets enhancing selected frequency bands, and with curvelets giving emphasis to specific bands and angles (according to the expected orientation of the relics). In all three cases the fused images gave significantly better results than either of the original geophysical images separately. The comparison of the three approaches showed that fusion with curvelets, giving emphasis to the features' orientation, seems to give the best fused image. Clear linear and ellipsoidal features corresponding to potential archaeological relics appear in the resultant image.
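A generic wavelet fusion rule (average the coarse approximation, keep the stronger detail coefficients) can be written with PyWavelets; the paper's band-selective enhancement and its curvelet variant are not reproduced in this sketch:

```python
import numpy as np
import pywt

def wavelet_fusion(a, b, wavelet="db2", level=3):
    # a, b: co-registered 2D images (e.g. magnetic and resistivity maps).
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # average the approximations
    for da, db in zip(ca[1:], cb[1:]):              # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))  # max-abs detail rule
    return pywt.waverec2(fused, wavelet)
```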
Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin
NASA Astrophysics Data System (ADS)
Lai, Zhenhua
The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, which will be introduced respectively. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which incorporates the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy-to-build, with all components off-the-shelf. The control system have not only significantly decreased the complexity and size of the microscope, but also increased the pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) mode near-infrared (NIR) laser, has potential applications for a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM images and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes. Selective melanin ablation with micrometer resolution has been presented using the Target system. Compared to the traditional selective photothermolysis, this method demonstrates higher precision, higher specificity and deeper penetration. Therefore, the SMPAF guided selective ablation of melanin is a promising tool of removing melanin for both medical and cosmetic purposes. Three CPLs have been designed for low-cost linear-motion scanners, low-cost fast spinning scanners and high-precision fast spinning scanners. Each design has been tailored to the industrial manufacturing ability and market demands.
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
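The IHS fusion step itself is compact: with intensity taken as the band mean, the panchromatic detail is injected into every band (the additive "fast IHS" form). This is a generic sketch of that step, not the full directionally-adaptive SR pipeline:

```python
import numpy as np

def ihs_fusion(ms, pan):
    # ms: (H, W, 3) upsampled multispectral image; pan: (H, W) high-res band.
    intensity = ms.mean(axis=2)                 # I = (R + G + B) / 3
    # Match pan to the intensity statistics before substitution.
    pan = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    return ms + (pan - intensity)[..., None]    # add the detail to every band
```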
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Guang, E-mail: lig2@mskcc.org; Wei, Jie; Kadbi, Mo
Purpose: To develop and evaluate a super-resolution approach to reconstruct time-resolved 4-dimensional magnetic resonance imaging (TR-4DMRI) with a high spatiotemporal resolution for multi-breathing-cycle motion assessment. Methods and Materials: A super-resolution approach was developed to combine fast 3-dimensional (3D) cine MRI with low resolution during free breathing (FB) and high-resolution 3D static MRI during breath hold (BH) using deformable image registration. A T1-weighted, turbo field echo sequence, coronal 3D cine acquisition, partial Fourier approximation, and SENSitivity Encoding parallel acceleration were used. The same MRI pulse sequence, field of view, and acceleration techniques were applied in both FB and BH acquisitions; the intensity-based Demons deformable image registration method was used. Under an institutional review board-approved protocol, 7 volunteers were studied with a 3D cine FB scan (voxel size: 5 × 5 × 5 mm³) at 2 Hz for 40 seconds and a 3D static BH scan (2 × 2 × 2 mm³). To examine the image fidelity of 3D cine and super-resolution TR-4DMRI, a mobile gel phantom with multiple internal targets was scanned at 3 speeds and compared with the 3D static image. Image similarity among 3D cine, 4DMRI, and 3D static was evaluated visually using difference images and quantitatively using voxel intensity correlation and the Dice index (phantom only). Multi-breathing-cycle waveforms were extracted and compared in both phantom and volunteer images using the 3D cine as the reference. Results: Mild imaging artifacts were found in the 3D cine and TR-4DMRI of the mobile gel phantom, with a Dice index of >0.95. Among 7 volunteers, the super-resolution TR-4DMRI yielded high voxel-intensity correlation (0.92 ± 0.05) and low voxel-intensity difference (<0.05). The detected motion differences between TR-4DMRI and 3D cine were -0.2 ± 0.5 mm (phantom) and -0.2 ± 1.9 mm (diaphragms). Conclusion: Super-resolution TR-4DMRI has been reconstructed with adequate temporal (2 Hz) and spatial (2 × 2 × 2 mm³) resolutions. Further TR-4DMRI characterization and improvement are necessary before clinical applications. Multi-breathing cycles can be examined, providing patient-specific breathing irregularities and motion statistics for future 4D radiation therapy.
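The intensity-based Demons step maps naturally onto SimpleITK's filter; a minimal sketch (parameter values are placeholders, and this is not the authors' implementation):

```python
import SimpleITK as sitk

def demons_register(fixed, moving, iterations=100, sigma=2.0):
    # fixed, moving: sitk.Image volumes with float pixels.
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(sigma)        # Gaussian smoothing of the field
    displacement = demons.Execute(fixed, moving)
    return sitk.Warp(moving, displacement)     # moving warped onto fixed
```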
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, the spatio-temporal remote-sensing images are processed using phase normalization to reduce the time-phase difference of the image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model; the non-redundant information is processed by difference calculation, and the non-redundant layers and the redundant layer are expanded by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, with the peak signal-to-noise ratio (PSNR) used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB and a significant gain in enhancement-measure evaluation for an up-scaling factor of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual quality and accuracy. PMID:29414893
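The IBP refinement mentioned above has a classic one-loop form: project the current high-resolution estimate down, compare with the observed low-resolution image, and back-project the residual. A minimal sketch with SciPy interpolation standing in for the true sensor model:

```python
import numpy as np
from scipy.ndimage import zoom

def iterative_back_projection(lr, factor=2, n_iter=20, step=1.0):
    # lr: observed low-resolution image; returns an HR estimate.
    hr = zoom(lr, factor, order=3)                     # initial up-sampled guess
    for _ in range(n_iter):
        simulated = zoom(hr, 1.0 / factor, order=3)    # simulate LR observation
        hr += step * zoom(lr - simulated, factor, order=3)  # back-project error
    return hr
```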
Fusion of MRIs and CT scans for surgical treatment of cholesteatoma of the middle ear in children.
Plouin-Gaudon, Isabelle; Bossard, Denis; Ayari-Khalfallah, Sonia; Froehlich, Patrick
2010-09-01
To evaluate the efficiency of diffusion-weighted magnetic resonance imaging (MRI) and high-resolution computed tomographic (CT) scan coregistration in predicting and adequately locating primary or recurrent cholesteatoma in children. Prospective study. Tertiary care university hospital. Ten patients aged 2 to 17 years (mean age, 8.5 years) with cholesteatoma of the middle ear, some previously treated, were included for follow-up with systematic CT scanning and MRI between 2007 and 2008. Computed tomographic scanning was performed on a Siemens Somatom 128 (0.5/0.2-mm slices reformatted in 0.5/0.3-mm images). Fine cuts were obtained parallel and perpendicular to the lateral semicircular canal in each ear (100 × 100-mm field of view). Magnetic resonance imaging was undertaken on a Siemens Avanto 1.5T unit, with a protocol adapted for young children. Diffusion-weighted imaging was acquired using a single-shot turbo spin-echo mode. To allow for diagnosis and localization of the cholesteatoma, CT and diffusion-weighted MR images were fused for each case. In all 10 children, the fusion technique allowed for correct diagnosis and precise localization (hypotympanum, epitympanum, mastoid recess, and attical space), as confirmed by subsequent standard surgery (positive predictive value, 100%). In 3 cases, the surgical approach was adequately determined from the fusion results. Lesion sizes on the CT-MRI fusion corresponded with perioperative findings. Recent developments in imaging techniques have made diffusion-weighted MRI more effective for detecting recurrent cholesteatoma. The major drawback of this technique, however, has been its poor anatomical and spatial discrimination. Fusion imaging using high-resolution CT and diffusion-weighted MRI appears to be a promising technique for both the diagnosis and precise localization of cholesteatomas. It provides useful information for surgical planning and, furthermore, is easy to use in pediatric cases.
USDA-ARS's Scientific Manuscript database
Satellite remote sensing technologies have been widely used to map spatiotemporal variability in consumptive water use (or evapotranspiration; ET) for agricultural water management applications. However, current satellite-based sensors with the high spatial resolution required to map ET at sub-field...
Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie
2016-03-01
Previous studies have shown that all vascular parameters, both morphological and topological, are affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters across multiple resolutions nor the distinguishability of vascular parameters between different data groups has been discussed. In this paper, we propose a quantitative analysis method of vascular parameters for vascular networks at multiple resolutions, analyzing the sensitivity of the parameters across resolutions and estimating their distinguishability between different data groups. Combining the sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters, nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction count and segment number were five parameters that could better distinguish vascular networks from different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution but could also better distinguish vascular networks from different groups, providing guidance for the quantification of vascular networks in multi-resolution frameworks.
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
Fusion of imaging and nonimaging data for surveillance aircraft
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre
1997-06-01
This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).
Non-rigid registration for fusion of carotid vascular ultrasound and MRI volumetric datasets
NASA Astrophysics Data System (ADS)
Chan, R. C.; Sokka, S.; Hinton, D.; Houser, S.; Manzke, R.; Hanekamp, A.; Reddy, V. Y.; Kaazempur-Mofrad, M. R.; Rasche, V.
2006-03-01
In carotid plaque imaging, MRI provides exquisite soft-tissue characterization, but lacks the temporal resolution for tissue strain imaging that real-time 3D ultrasound (3DUS) can provide. On the other hand, real-time 3DUS currently lacks the spatial resolution of carotid MRI. Non-rigid alignment of ultrasound and MRI data is essential for integrating complementary morphology and biomechanical information for carotid vascular assessment. We assessed non-rigid registration for fusion of 3DUS and MRI carotid data based on deformable models which are warped to maximize voxel similarity. We performed validation in vitro using isolated carotid artery imaging. These samples were subjected to soft-tissue deformations during 3DUS and were imaged in a static configuration with standard MR carotid pulse sequences. Registration of the source ultrasound sequences to the target MR volume was performed and the mean absolute distance between fiducials within the ultrasound and MR datasets was measured to determine inter-modality alignment quality. Our results indicate that registration errors on the order of 1mm are possible in vitro despite the low-resolution of current generation 3DUS transducers. Registration performance should be further improved with the use of higher frequency 3DUS prototypes and efforts are underway to test those probes for in vivo 3DUS carotid imaging.
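The mean-absolute-distance figure of merit used here is simply the average Euclidean distance between corresponding fiducials in the two volumes:

```python
import numpy as np

def mean_absolute_distance(p, q):
    # p, q: (N, 3) arrays of corresponding fiducial coordinates (mm).
    return float(np.mean(np.linalg.norm(np.asarray(p) - np.asarray(q), axis=1)))
```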
NASA Astrophysics Data System (ADS)
Huang, C.; LI, Y.
2017-12-01
Continuous monitoring of daily evapotranspiration (ET) is crucial for allocating and managing water resources in irrigated agricultural areas in arid regions. In this study, continuous daily ET at a 90-m spatial resolution was estimated using the Surface Energy Balance System (SEBS) by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) images with high temporal resolution and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images with high spatial resolution. The spatiotemporal characteristics of these sensors were combined using the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The performance of this approach was validated over a heterogeneous oasis-desert region covered by cropland, residential, woodland, water, Gobi desert, sandy desert, desert steppe, and wetland areas, using in situ observations from automatic meteorological systems (AMS) and eddy covariance (EC) systems in the middle reaches of the Heihe River Basin in Northwest China. The error introduced during the STARFM-based data fusion is within an acceptable range for the predicted land surface temperature (LST) at a 90-m spatial resolution. The surface energy fluxes estimated using SEBS from the fused data, which combine the spatiotemporal characteristics of MODIS and ASTER, agree well with the fluxes observed by the EC systems for all land cover types, especially for vegetated areas, with MAP values ranging from 9% to 15%, less than the uncertainty (18%) of the observations in this study area. Time series of daily ET modelled with SEBS were compared with those modelled with PT-JPL (a satellite-based Priestley-Taylor ET model) and with observations from the EC systems. SEBS generally performed better than PT-JPL for vegetated areas, especially irrigated cropland, with bias, RMSE, and MAP values of 0.29 mm/d, 0.75 mm/d, and 13% at the maize site and -0.33 mm/d, 0.81 mm/d, and 14% at the vegetable site.
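The agreement statistics quoted above (bias, RMSE, and MAP, read here as mean absolute percentage error, which is an assumption) are straightforward to compute against the EC observations:

```python
import numpy as np

def bias_rmse_map(model, obs):
    # model, obs: paired daily ET values (mm/d).
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    d = model - obs
    return d.mean(), np.sqrt((d ** 2).mean()), 100 * np.mean(np.abs(d) / np.abs(obs))
```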
Ultra-high spatial resolution multi-energy CT using photon counting detector technology
NASA Astrophysics Data System (ADS)
Leng, S.; Gutjahr, R.; Ferrero, A.; Kappler, S.; Henning, A.; Halaweish, A.; Zhou, W.; Montoya, J.; McCollough, C.
2017-03-01
Two ultra-high-resolution (UHR) imaging modes, each with two energy thresholds, were implemented on a research, whole-body photon-counting-detector (PCD) CT scanner, referred to as sharp and UHR, respectively. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used. This was due to excessive noise in the higher resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode with regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose, or application of noise reduction techniques, is needed.
NASA Astrophysics Data System (ADS)
Sukawattanavijit, Chanika; Srestasathiern, Panu
2017-10-01
Land use and land cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique widely employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper compares object-based and pixel-based approaches to image fusion. For the per-pixel method, support vector machines (SVM) were applied to the fused image based on principal component analysis (PCA). For the object-based classification, a nearest neighbor (NN) classifier was applied to the fused images to separate the land cover classes. Finally, accuracy was assessed by comparing the land cover classifications generated from the fused image dataset and from the THAICHOTE image alone. The object-based fusion of COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. As a result, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
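PCA-based fusion of the kind used for the per-pixel branch typically substitutes the first principal component of the optical bands with the statistics-matched SAR image; the sketch below shows that generic substitution step, not the study's exact processing chain:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_fusion(ms, sar):
    # ms: (H, W, B) multispectral image; sar: (H, W) co-registered SAR band.
    h, w, b = ms.shape
    pca = PCA(n_components=b)
    pcs = pca.fit_transform(ms.reshape(-1, b))
    pc1, s = pcs[:, 0], sar.ravel()
    s = (s - s.mean()) / (s.std() + 1e-12) * pc1.std() + pc1.mean()  # match stats
    pcs[:, 0] = s                                     # substitute PC1 with SAR
    return pca.inverse_transform(pcs).reshape(h, w, b)
```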
Geometric registration of remotely sensed data with SAMIR
NASA Astrophysics Data System (ADS)
Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto
2015-06-01
The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction, or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.
NASA Astrophysics Data System (ADS)
Gholoum, M.; Bruce, D.; Hazeam, S. Al
2012-07-01
A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is biologically diverse and immense. It plays an important role in maintaining vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services, and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats to coral reef ecosystems, there is a strong management need for mapping and monitoring them. Remote sensing technology can be a valuable tool for mapping and monitoring these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, it proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to satellite images exhibiting high spatial and low to medium spectral resolution together with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects, which will potentially help to classify the image data accurately. Accuracy assessment using ground truth will be performed for the selected methods to determine the quality of the information derived from image classification. The research will be applied to Kuwait's southern coral reefs: Kubbar and Um Al-Maradim.
NASA Astrophysics Data System (ADS)
Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan
2014-03-01
Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occur between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
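For reference, the Pareto front mentioned above is the set of non-dominated trade-offs between the objectives. Below is a minimal NumPy sketch of the dominance filter, assuming both objectives (e.g. image dissimilarity and transformation effort) are to be minimized; it illustrates the concept only and is not the evolutionary algorithm studied in the paper.

    import numpy as np

    def pareto_front(F):
        # F: (n_solutions, n_objectives) array of objective values to minimize.
        n = F.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            # solution j dominates i if it is no worse in every objective
            # and strictly better in at least one
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if dominated.any():
                keep[i] = False
        return keep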
Image-fusion of MR spectroscopic images for treatment planning of gliomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Thakur, Sunitha; Perera, Gerard
2006-01-15
{sup 1}H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8x8x10 mm{sup 3} to 12x12x10 mm{sup 3}. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the lower-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that of the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs. The image-fusion protocol was tested on 12 (six high-grade and six low-grade) glioma patients. The average agreement of the MRSI volume position on the screen-dumped MRSI images and the merged MRSI images was 0.29 mm with a standard deviation of 0.07 mm. Of all the voxels with Cho/Cr grade one or above, the distribution of Cho/Cr grade was found to correlate with the glioma grade from pathologic findings and is consistent with literature results indicating Cho/Cr elevation as a marker for malignancy. In conclusion, an image-fusion protocol was developed that successfully incorporates MRSI information into the IMRT treatment plan for glioma.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
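Compressive sensing of the kind underlying the single-pixel architecture reconstructs an image x from m << n modulated measurements y = A x, where each row of A encodes one micro-mirror pattern. The NumPy sketch below solves the standard lasso formulation with ISTA (iterative shrinkage-thresholding); it is a generic solver sketch, not the camera's reconstruction software.

    import numpy as np

    def ista(A, y, lam=0.1, n_iter=200):
        # Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps.
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L      # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x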
Image charge multi-role and function detectors
NASA Astrophysics Data System (ADS)
Milnes, James; Lapington, Jon S.; Jagutzki, Ottmar; Howorth, Jon
2009-06-01
The image charge technique used with microchannel plate imaging tubes provides several operational and practical benefits by serving to isolate the electronic image readout from the detector. The simple dielectric interface between detector and readout provides vacuum isolation, and no vacuum electrical feed-throughs are required. Since the readout is mechanically separate from the detector, an image tube of generic design can be simply optimised for various applications by attaching it to different readout devices and electronics. We present imaging performance results using a single image tube with a variety of readout devices suited to differing applications: (a) a four-electrode charge division tetra wedge anode, optimised for best spatial resolution in photon counting mode; (b) a cross delay line anode, enabling a higher count rate, the possibility of discriminating near-coincident events, and an event timing resolution of better than 1 ns; (c) a multi-anode readout connected either to a multi-channel oscilloscope for analogue measurements of fast optical pulses, or alternatively to a multi-channel time correlated single photon counting (TCSPC) card.
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Since June 2015, Sentinel-2A has been delivering high resolution optical images (ground resolution up to 10 meters) to provide global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required; these include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometric model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 band 3 and the 15 m resolution panchromatic Landsat band 8.
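Digital correlation matching of the kind used in this evaluation is commonly implemented with phase correlation. The NumPy sketch below estimates the integer translation between two co-located image chips; dense tie points produced this way can then be used to fit the affine registration model. A generic illustration, not the authors' implementation.

    import numpy as np

    def phase_correlation_shift(chip_ref, chip_mov):
        # The normalized cross-power spectrum peaks at the relative translation.
        cross = np.fft.fft2(chip_ref) * np.conj(np.fft.fft2(chip_mov))
        cross /= np.abs(cross) + 1e-12
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap shifts beyond half the chip size into negative offsets
        if dy > chip_ref.shape[0] // 2:
            dy -= chip_ref.shape[0]
        if dx > chip_ref.shape[1] // 2:
            dx -= chip_ref.shape[1]
        return dy, dx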
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is addressed. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we developed a prototype system, and the results show that our method performs well.
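The feature extraction above builds on a modified Otsu algorithm; the paper's modification is not reproduced here, but the standard Otsu threshold it starts from can be computed as in the NumPy sketch below, which selects the gray level maximizing the between-class variance of the histogram.

    import numpy as np

    def otsu_threshold(gray):
        # gray: 8-bit image; returns the threshold in [0, 255].
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        levels = np.arange(256)
        omega = np.cumsum(p)            # class-0 probability up to each level
        mu = np.cumsum(p * levels)      # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide='ignore', invalid='ignore'):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        return int(np.argmax(np.nan_to_num(sigma_b)))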
NASA Astrophysics Data System (ADS)
Willingham, David; Naes, Benjamin E.; Tarolli, Jay G.; Schemer-Kohrn, Alan; Rhodes, Mark; Dahl, Michael; Guzman, Anthony; Burkes, Douglas E.
2018-01-01
Uranium-molybdenum (U-Mo) monolithic fuels represent one option for converting civilian research and test reactors operating with high enriched uranium (HEU) to low enriched uranium (LEU), effectively reducing the threat of nuclear proliferation world-wide. However, processes associated with fabrication of U-Mo monolithic fuels result in regions of elemental heterogeneity, observed as bands traversing the cross-section of representative samples. Isotopic variations (e.g., 235U and 238U) could also be introduced because of associated processing steps, particularly since HEU feedstock is melted with natural or depleted uranium diluent to produce LEU. This study demonstrates the utility of correlative analysis of Energy-Dispersive X-ray Spectroscopy (EDS) and Secondary Ion Mass Spectrometry (SIMS) with their image data streams using image fusion, resulting in a comprehensive microanalytical characterization toolbox. Elemental and isotopic measurements were made on a sample from the Advanced Test Reactor (ATR) Full-sized plate In-center flux trap Position (AFIP)-7 experiment and compared to previous optical and electron microscopy results. The image fusion results are characteristic of SIMS isotopic maps, but with the spatial resolution of EDS images and, therefore, can be used to increase the effective spatial resolution of the SIMS imaging results to better understand homogeneity or heterogeneity that persists because of processing selections. Visual inspection using the image fusion methodology indicated slight variations in the 235U/238U ratio and quantitative analysis using the image intensities across several FoVs revealed an average 235U atom percent value of 17.9 ± 2.4%, which was indicative of a non-uniform U isotopic distribution in the area sampled. Further development of this capability is useful for understanding the connections between the properties of LEU fuel alternatives and the ability to predict performance under irradiation.
Yang, Xiaofeng; Wu, Wei; Wang, Guoan
2015-04-01
This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging, combining in vivo fluorescence excitation technology, multi-channel spectral camera technology and image fusion software. A visible and near-infrared ring LED excitation source, multi-channel band-pass filters, a 2-CCD spectral camera optical sensor and computer systems were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When a near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes and tumor edges that the surgeon cannot detect with the naked eye intra-operatively. This system can effectively guide the surgeon in removing tumor tissue, significantly improving the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.
Diagnosing magnetized liner inertial fusion experiments on Z
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, S. B., E-mail: sbhanse@sandia.gov; Gomez, M. R.; Sefkow, A. B.
Magnetized Liner Inertial Fusion experiments performed at Sandia's Z facility have demonstrated significant thermonuclear fusion neutron yields (∼10{sup 12} DD neutrons) from multi-keV deuterium plasmas inertially confined by slow (∼10 cm/μs), stable, cylindrical implosions. Effective magnetic confinement of charged fusion reactants and products is signaled by high secondary DT neutron yields above 10{sup 10}. Analysis of extensive power, imaging, and spectroscopic x-ray measurements provides a detailed picture of ∼3 keV temperatures, 0.3 g/cm{sup 3} densities, gradients, and mix in the fuel and liner over the 1–2 ns stagnation duration.
Sethi, A; Rusu, I; Surucu, M; Halama, J
2012-06-01
To evaluate the accuracy of multi-modality image registration in the radiotherapy planning process, a water-filled anthropomorphic head phantom containing eight 'donut-shaped' fiducial markers (3 internal + 5 external) was selected for this study. Seven image sets (3 CTs, 3 MRs and a PET) of the phantom were acquired and fused in a commercial treatment planning system. First, a narrow-slice (0.75mm) baseline CT scan was acquired (CT1). Subsequently, the phantom was re-scanned with a coarser slice width of 1.5mm (CT2) and after subjecting the phantom to rotation/displacement (CT3). Next, the phantom was scanned in a 1.5 Tesla MR scanner and three MR image sets (axial T1, axial T2, coronal T1) were acquired at 2mm slice width. Finally, the phantom and the centers of the fiducials were doped with 18F and a PET scan was performed with 2mm cubic voxels. All image scans (CT/MR/PET) were fused to the baseline (CT1) data using an automated mutual-information-based fusion algorithm. The difference between centroids of fiducial markers in the various image modalities was used to assess image registration accuracy. CT/CT image registration was superior to CT/MR and CT/PET: the average CT/CT fusion error was found to be 0.64 ± 0.14 mm. Corresponding values for CT/MR and CT/PET fusion were 1.33 ± 0.71mm and 1.11 ± 0.37mm. Internal markers near the center of the phantom fused better than external markers placed on the phantom surface. This was particularly true for the CT/MR and CT/PET fusions. The inferior quality of external marker fusion indicates possible distortion effects toward the edges of the MR image. Peripheral targets in the PET scan may be subject to parallax error caused by the depth of interaction of photons in the detectors. The current widespread use of multimodality imaging in radiotherapy planning calls for periodic quality assurance of the image registration process. Such studies may help improve safety and accuracy in treatment planning. © 2012 American Association of Physicists in Medicine.
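The registration-accuracy metric used above reduces to distances between corresponding fiducial centroids. A minimal NumPy sketch, assuming the centroids have already been segmented and expressed in a common coordinate frame in millimetres:

    import numpy as np

    def fusion_error(centroids_ref, centroids_fused):
        # Both inputs: (n_markers, 3) arrays of centroid coordinates (mm).
        d = np.linalg.norm(centroids_ref - centroids_fused, axis=1)
        return d.mean(), d.std()   # e.g. 0.64 +/- 0.14 mm for CT/CT in this study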
Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel
2017-03-01
Imaging dynamics in living organisms is essential for the understanding of biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales ranging from organelles to entire organs of small animals. Yet, efficient visualization of multi-scale dynamics remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally-enriched high-resolution optical absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. The multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of the optoacoustic technology that adds to the multiple advantages of this technology for structural, functional and molecular imaging.
Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H
2012-12-01
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
Multi-pass transmission electron microscopy
Juffmann, Thomas; Koppell, Stewart A.; Klopfer, Brannon B.; ...
2017-05-10
Feynman once asked physicists to build better electron microscopes to be able to watch biology at work. While electron microscopes can now provide atomic resolution, electron beam induced specimen damage precludes high resolution imaging of sensitive materials, such as single proteins or polymers. Here, we use simulations to show that an electron microscope based on a multi-pass measurement protocol enables imaging of single proteins, without averaging structures over multiple images. While we demonstrate the method for particular imaging targets, the approach is broadly applicable and is expected to improve resolution and sensitivity for a range of electron microscopy imaging modalities, including, for example, scanning and spectroscopic techniques. The approach implements a quantum mechanically optimal strategy which under idealized conditions can be considered interaction-free.
NASA Astrophysics Data System (ADS)
Poggio, Andrew J.
1988-10-01
This issue of Energy and Technology Review contains: Neutron Penumbral Imaging of Laser-Fusion Targets--using our new penumbral-imaging diagnostic, we have obtained the first images that can be used to measure directly the deuterium-tritium burn region in laser-driven fusion targets; Computed Tomography for Nondestructive Evaluation--various computed tomography systems and computational techniques are used in nondestructive evaluation; Three-Dimensional Image Analysis for Studying Nuclear Chromatin Structure--we have developed an optic-electronic system for acquiring cross-sectional views of cell nuclei, and computer codes to analyze these images and reconstruct the three-dimensional structures they represent; Imaging in the Nuclear Test Program--advanced techniques produce images of unprecedented detail and resolution from Nevada Test Site data; and Computational X-Ray Holography--visible-light experiments and numerically simulated holograms test our ideas about an X-ray microscope for biological research.
3D Object Classification Based on Thermal and Visible Imagery in Urban Area
NASA Astrophysics Data System (ADS)
Hasani, H.; Samadzadegan, F.
2015-12-01
The spatial distribution of land cover in urban areas, especially 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, etc. Owing to recent advances in sensor technologies, several types of remotely sensed data are available for the same area. Data fusion has been widely investigated for integrating different sources of data in the classification of urban areas. Thermal infrared imagery (TIR) contains information on emitted radiation and has unique radiometric properties. However, due to the coarse spatial resolution of thermal data, its application has been restricted in urban areas. On the other hand, visible imagery (VIS) has high spatial resolution and information in the visible spectrum. Consequently, there is a complementary relation between thermal and visible imagery in the classification of urban areas. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery for the classification of urban areas. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space from the visible bands, thermal hyperspectral bands, and spatial and texture features; Principal Component Analysis (PCA) is also applied to extract PCs. Due to the high dimensionality of the feature space, a dimension reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The obtained results show that using thermal imagery along with visible imagery improved the classification accuracy by up to 8% with respect to visible image classification alone.
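A minimal sketch of this feature-level fusion pipeline is given below, using scikit-learn (an assumption; the paper does not name its software) and hypothetical array shapes: visible and resampled thermal bands are stacked per pixel, reduced with PCA, and classified with an SVM.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def classify_fused(vis, tir_resampled, labels, train_mask, n_pcs=10):
        # vis: (rows, cols, b1); tir_resampled: (rows, cols, b2) on the same grid;
        # labels: (rows, cols) integer classes; train_mask: boolean (rows, cols).
        hybrid = np.concatenate([vis, tir_resampled], axis=2)   # feature-level fusion
        X = hybrid.reshape(-1, hybrid.shape[2])
        y = labels.reshape(-1)
        m = train_mask.reshape(-1)
        X_red = PCA(n_components=n_pcs).fit(X[m]).transform(X)  # dimension reduction
        clf = SVC(kernel='rbf').fit(X_red[m], y[m])
        return clf.predict(X_red).reshape(labels.shape)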
Chen, Nan-kuei; Guidon, Arnaud; Chang, Hing-Chiu; Song, Allen W.
2013-01-01
Diffusion weighted magnetic resonance imaging (DWI) data have been mostly acquired with single-shot echo-planar imaging (EPI) to minimize motion-induced artifacts. The spatial resolution, however, is inherently limited in single-shot EPI, even when parallel imaging (usually at an acceleration factor of 2) is incorporated. Multi-shot acquisition strategies could potentially achieve higher spatial resolution and fidelity, but they are generally susceptible to motion-induced phase errors among excitations that are exacerbated by diffusion sensitizing gradients, rendering the reconstructed images unusable. It has been shown that shot-to-shot phase variations may be corrected using navigator echoes, but at the cost of imaging throughput. To address these challenges, a novel and robust multi-shot DWI technique, termed multiplexed sensitivity-encoding (MUSE), is developed here to reliably and inherently correct nonlinear shot-to-shot phase variations without the use of navigator echoes. The performance of the MUSE technique is confirmed experimentally in healthy adult volunteers on 3 Tesla MRI systems. This newly developed technique should prove highly valuable for mapping brain structures and connectivities at high spatial resolution for neuroscience studies. PMID:23370063
Lin, X; Qureshi, M Z; Romero, M A; Yaylim, I; Arif, S; Ucak, I; Fayyaz, S; Farooqi, A A; Mansoor, Q; Ismail, M
2017-02-28
Overwhelming and increasing scientific evidence has provided a near-complete resolution of the prostate cancer landscape, and it is now better understood that wide-ranging factors underlie its development and progression. Increasingly it is being realized that genetic/epigenetic factors, intra-tumoral and inter-tumoral heterogeneity, loss of apoptosis, dysregulation of spatio-temporally controlled signaling cascades, and Darwinian evolution in response to therapeutic pressures play instrumental roles in prostate carcinogenesis. Moreover, multi-directional patterns of spread between primary tumors and metastatic sites have also been studied extensively in prostate cancer. Research over the years has gradually and systematically revealed a closer association between tumor phenotype and type of gene fusion. The latest developments in deep sequencing technologies have shown that gene fusions originate in a non-random, cell-type-dependent manner and are much more frequent than previously surmised. These findings have enabled sub-classification and categorization of seemingly identical diseases. Furthermore, research has shown that many gene fusions inform risk stratification, and many chimeric proteins encoded by the fused genes are being studied as drug targets. We partition this multi-component review into the molecular basis of the formation of fusion transcripts, how the protein network is regulated in fusion-positive prostate cancer cells, and the therapeutic strategies currently being investigated to efficiently target fusion transcripts and their protein products.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, the placement of imaging probes and instruments, and fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited)
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L. F.; Maddox, J.; Pablant, N.; Hill, K.; Bitter, M.; Rice, J. E.; Granetz, R.; Hubbard, A.; Irby, J.; Greenwald, M.; Marmar, E.; Tritz, K.; Stutman, D.; Stratton, B.; Efthimion, P.
2016-11-01
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft-x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, ΔZeff, and ne,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. This technique should also be explored as a burning-plasma diagnostic in view of its simplicity and robustness.
Spinal fusion-hardware construct: Basic concepts and imaging review
Nouh, Mohamed Ragab
2012-01-01
The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used at their own institution. This is critical in evaluating the position of implants and the potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods, reports on the best yield for each modality, and explains how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential, as they are the reference point for evaluating future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes the imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel-grouping-based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
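The similarity measure at the core of the ghost detection is the ZNCC between the pixels of a super-pixel in a given exposure and the same pixels in the reference; values near 1 indicate consistent, ghost-free content. A minimal NumPy sketch (the super-pixel segmentation itself is omitted):

    import numpy as np

    def zncc(patch_a, patch_b):
        # patch_a, patch_b: pixel values of the same super-pixel in two exposures.
        a = patch_a.astype(float).ravel()
        b = patch_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0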
A High-resolution Multi-wavelength Simultaneous Imaging System with Solar Adaptive Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Changhui; Zhu, Lei; Gu, Naiting
A high-resolution multi-wavelength simultaneous imaging system covering visible to near-infrared bands with a solar adaptive optics system, in which seven imaging channels, including the G band (430.5 nm), the Na i line (589 nm), the H α line (656.3 nm), the TiO band (705.7 nm), the Ca ii IR line (854.2 nm), the He i line (1083 nm), and the Fe i line (1565.3 nm), are chosen, has been developed to image the solar atmosphere from the photosphere to the chromosphere. To our knowledge, this is the solar high-resolution imaging system with the widest spectral coverage. This system was demonstrated at the 1 m New Vacuum Solar Telescope, and on-sky high-resolution observational results were acquired. In this paper, we illustrate the design and performance of the imaging system. The calibration and the data reduction of the system are also presented.
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir
2017-01-01
Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
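The RGBD-to-CBCT calibration via iterative closest point repeatedly solves a least-squares rigid alignment between matched point sets. Below is a NumPy sketch of that inner step (the Kabsch/Procrustes solution), assuming correspondences are given; the surrounding ICP loop (nearest-neighbour matching, iteration to convergence) is omitted.

    import numpy as np

    def rigid_align(P, Q):
        # P, Q: (n, 3) corresponding points; find R, t minimizing ||R P_i + t - Q_i||.
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t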
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G; Zakian, K; Deasy, J
Purpose: To develop a novel super-resolution time-resolved 4DMRI technique to evaluate multi-breath, irregular and complex organ motion without a respiratory surrogate for radiotherapy planning. Methods: The super-resolution time-resolved (TR) 4DMRI approach combines a series of low-resolution 3D cine MRI images acquired during free breathing (FB) with a high-resolution breath-hold (BH) 3DMRI via deformable image registration (DIR). Five volunteers participated in the study under an IRB-approved protocol. The 3D cine images with voxel size of 5×5×5 mm{sup 3} at two volumes per second (2Hz) were acquired coronally using a T1 fast field echo sequence, half-scan (0.8) acceleration, and SENSE (3) parallel imaging. Phase-encoding was set in the lateral direction to minimize motion artifacts. The BH image with voxel size of 2×2×2 mm{sup 3} was acquired using the same sequence within 10 seconds. A demons-based DIR program was employed to produce super-resolution 2Hz 4DMRI. Registration quality was visually assessed using difference images between TR 4DMRI and 3D cine and quantitatively assessed using average voxel correlation. The fidelity of the 3D cine images was assessed using a gel phantom and a 1D motion platform by comparing mobile and static images. Results: Owing to voxel intensity similarity using the same MRI scanning sequence, accurate DIR between FB and BH images is achieved. The voxel correlations between 3D cine and TR 4DMRI are greater than 0.92 in all cases and the difference images illustrate minimal residual error with little systematic patterns. The 3D cine images of the mobile gel phantom preserve object geometry with minimal scanning artifacts. Conclusion: The super-resolution time-resolved 4DMRI technique has been achieved via DIR, providing a potential solution for multi-breath motion assessment. Accurate DIR mapping has been achieved to map high-resolution BH images to low-resolution FB images, producing 2Hz volumetric high-resolution 4DMRI. Further validation and improvement are still required prior to clinical applications. This study is in part supported by the NIH (U54CA137788/U54CA132378).
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing subject-specific information depends on this extraction. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction was investigated through the following processes. First, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
Mapping detailed 3D information onto high resolution SAR signatures
NASA Astrophysics Data System (ADS)
Anglberger, H.; Speck, R.
2017-05-01
Due to challenges in the visual interpretation of radar signatures or in the subsequent information extraction, a fusion with other data sources can be beneficial. The most accurate basis for a fusion of any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scenery. In the case of radar images this is a challenging task, because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps the detailed 3D information of a scene to the slant-range-based coordinate system of imaging radars. Through this mapping, all the contributing geometrical parts of one resolution cell can be determined in 3D space. The proposed method is highly efficient, because computationally expensive operations can be performed directly on graphics card hardware. The described approach provides an excellent basis for sophisticated methods to extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information for whole cities will be available in the near future. The performance of the developed methods is demonstrated with high resolution radar data acquired by the space-borne SAR sensor TerraSAR-X.
NASA Astrophysics Data System (ADS)
Imani, Farhad; Ghavidel, Sahar; Abolmaesumi, Purang; Khallaghi, Siavash; Gibson, Eli; Khojaste, Amir; Gaed, Mena; Moussa, Madeleine; Gomez, Jose A.; Romagnoli, Cesare; Cool, Derek W.; Bastian-Jordan, Matthew; Kassam, Zahra; Siemens, D. Robert; Leveridge, Michael; Chang, Silvia; Fenster, Aaron; Ward, Aaron D.; Mousavi, Parvin
2016-03-01
Recently, multi-parametric Magnetic Resonance Imaging (mp-MRI) has been used to improve the sensitivity of detecting high-risk prostate cancer (PCa). Prior to biopsy, primary and secondary cancer lesions are identified on mp-MRI. The lesions are then targeted using TRUS guidance. In this paper, for the first time, we present a fused mp-MRI-temporal-ultrasound framework for characterization of PCa, in vivo. Cancer classification results obtained using temporal ultrasound are fused with those achieved using consolidated mp-MRI maps determined by multiple observers. We verify the outcome of our study using histopathology following deformable registration of ultrasound and histology images. Fusion of temporal ultrasound and mp-MRI for characterization of PCa results in an area under the receiver operating characteristic curve (AUC) of 0.86 for cancerous regions with Gleason scores (GS) >= 3+3, and an AUC of 0.89 for those with GS >= 3+4.
Fusion of MODIS and Landsat-8 Surface Temperature Images: A New Approach
Hazaymeh, Khaled; Hassan, Quazi K.
2015-01-01
Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing the temporal resolution of Landsat-8 land surface temperature (LST) images by fusing LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), and to implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consisted of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) utilizing the above-mentioned relationship as a function of a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. It revealed that strong linear relationships (i.e., r2, slopes, and intercepts in the ranges 0.93–0.94, 0.94–0.99, and 2.97–20.07, respectively) existed between the two consecutive MODIS LST images. We evaluated the synthetic LST images qualitatively and found high visual agreement with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images and found strong agreement with the actual Landsat-8 LST images. For example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84–0.90, 0.061–0.080, and 0.003–0.004, respectively. PMID:25730279
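The two STI-FM components lend themselves to a very short implementation; the NumPy sketch below fits the scene-wide linear relationship between the consecutive MODIS LST composites and applies it to the time-1 Landsat-8 LST image. Array names are illustrative; all inputs are assumed co-registered, with MODIS resampled to the Landsat grid.

    import numpy as np

    def sti_fm(modis_t1, modis_t2, landsat_t1):
        # (i) linear relationship between consecutive MODIS LST composites
        slope, intercept = np.polyfit(modis_t1.ravel(), modis_t2.ravel(), 1)
        # (ii) apply it to the Landsat-8 LST image at time 1
        return slope * landsat_t1 + intercept    # synthetic Landsat-8 LST at time 2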
Methods for increasing the sensitivity of gamma-ray imagers
Mihailescu, Lucian [Pleasanton, CA; Vetter, Kai M [Alameda, CA; Chivers, Daniel H [Fremont, CA
2012-02-07
Methods are presented that increase the position resolution and granularity of double sided segmented semiconductor detectors. These methods increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.
Systems for increasing the sensitivity of gamma-ray imagers
Mihailescu, Lucian; Vetter, Kai M.; Chivers, Daniel H.
2012-12-11
Systems that increase the position resolution and granularity of double sided segmented semiconductor detectors are provided. These systems increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.
High-throughput isotropic mapping of whole mouse brain using multi-view light-sheet microscopy
NASA Astrophysics Data System (ADS)
Nie, Jun; Li, Yusha; Zhao, Fang; Ping, Junyu; Liu, Sa; Yu, Tingting; Zhu, Dan; Fei, Peng
2018-02-01
Light-sheet fluorescence microscopy (LSFM) uses an additional laser sheet to illuminate selected planes of the sample, thereby enabling three-dimensional imaging at high spatio-temporal resolution. These advantages make LSFM a promising tool for high-quality brain visualization. However, even with LSFM, the spatial resolution remains insufficient to resolve the neural structures across a mesoscale whole mouse brain in three dimensions. At the same time, thick-tissue scattering prevents clear observation deep in the brain. Here we use a multi-view LSFM strategy to solve this challenge, surpassing the resolution limit of a standard light-sheet microscope under a large field-of-view (FOV). As demonstrated by the imaging of an optically-cleared mouse brain labelled with thy1-GFP, we achieve a brain-wide, isotropic cellular resolution of 3 μm. Besides the resolution enhancement, multi-view brain imaging can also recover signals lost to scattering and attenuation in deep tissue. As a result, long-distance neural projections across encephalic regions can be identified and annotated.
WE-H-206-02: Recent Advances in Multi-Modality Molecular Imaging of Small Animals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.
Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattering than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT; to understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements to image a mouse whose size is an order of magnitude smaller than that of a human have spurred advances in new radiation detector technologies, novel imaging system designs and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution, multi-pinhole (MPH) collimator designs for much improved resolution and detection efficiency compared to the conventional collimator designs in SPECT, 3D high-resolution and artifact-free MPH and sparse-view image reconstruction techniques, and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction for much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6–12 mm to ∼1 mm a few years ago to sub-millimeter today. A recent commercial small animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system whose resolution is limited by the positron range. More recently, multimodality SA PET/MRI and SPECT/MRI systems have been developed in research laboratories.
Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques of small animals. To learn about the spatial resolution achievable by the molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques that are being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within the imaging objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles such as gold nanoparticles, in conjunction with benchtop XFCT, will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging; to learn the latest advances in benchtop XFCT development for preclinical imaging. Funding support received from NIH and DOD; funding support received from GE Healthcare; funding support received from Siemens AX; patent royalties received from GE Healthcare. L. Wang, Funding Support: NIH; COI: Microphotoacoustics. S. Cho, Funding Support: NIH/NCI grant R01CA155446, DOD/PCRP grant W81XWH-12-1-0198.
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Dias, Jose; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to a great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high-quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between the panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first estimates the linear coefficients directly from the observed panchromatic and low resolution multispectral images by Linear Regression (LR), while the second employs Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both is that the fused image should have low rank and preserve edge locations. We use a variant of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to solve the proposed variational formulations effectively. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
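As an illustration of the spatial prior above, here is a minimal numpy sketch of a weighted vector-TV term. The function name, the forward-difference discretization, and the idea of deriving the weight map from the panchromatic gradient are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_vector_tv(X, W):
    """Weighted vector Total Variation of a multispectral cube X (H x W x B).

    Local differences of all bands are grouped pixel-wise (vector TV) and
    scaled by a per-pixel weight map W (H x W), e.g. derived from the
    panchromatic image so that Pan edges are penalized less.
    """
    dx = np.diff(X, axis=1, append=X[:, -1:, :])  # horizontal differences
    dy = np.diff(X, axis=0, append=X[-1:, :, :])  # vertical differences
    # pixel-wise l2 norm over bands and both difference directions
    mag = np.sqrt((dx ** 2 + dy ** 2).sum(axis=2))
    return (W * mag).sum()
```

A typical (assumed) weight choice is W = 1 / (1 + |∇Pan|), so the penalty is relaxed exactly where the panchromatic image exhibits strong edges.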
Comparison and evaluation of fusion methods used for GF-2 satellite image in coastal mangrove area
NASA Astrophysics Data System (ADS)
Ling, Chengxing; Ju, Hongbo; Liu, Hua; Zhang, Huaiqing; Sun, Hua
2018-04-01
The GF-2 satellite is the highest-spatial-resolution remote sensing satellite in the development history of China's satellite program. In this study, three traditional fusion methods, Brovey, Gram-Schmidt, and Color Normalized (CN), were compared with a newer fusion method, NNDiffuse, using qualitative assessment and quantitative fusion quality indices, including information entropy, variance, mean gradient, deviation index, and spectral correlation coefficient. The analysis shows that the NNDiffuse method performed best in both the qualitative and quantitative analyses and is more effective for follow-up remote sensing information extraction and for forest and wetland resource monitoring applications.
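The quality indices named above are standard and straightforward to reproduce; below is a small numpy sketch of four of them under common definitions (papers vary the exact normalizations, so treat these as illustrative rather than the authors' precise formulas).

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon information entropy of the image histogram (bits)."""
    h, _ = np.histogram(img, bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mean_gradient(img):
    """Average local gradient magnitude, a proxy for spatial detail."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def deviation_index(fused, ref):
    """Mean relative deviation of the fused band from the reference band."""
    return np.mean(np.abs(fused - ref) / (ref + 1e-9))

def spectral_correlation(fused, ref):
    """Pearson correlation between fused and reference bands."""
    return np.corrcoef(fused.ravel(), ref.ravel())[0, 1]
```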
Infrared and visible fusion face recognition based on NSCT domain
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose variations, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is applied to the infrared and visible face images respectively, exploiting the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high- and low-frequency bands of the NSCT coefficients, local Gabor binary patterns (LGBP) and local binary patterns (LBP) are applied in the respective frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
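The final score-level fusion step can be as simple as a normalized weighted sum. The abstract does not specify the normalization or the weights, so the z-score normalization and the equal default weight in this sketch are assumptions.

```python
import numpy as np

def zscore(s):
    """Z-score normalization so the two matchers' scores are comparable."""
    s = np.asarray(s, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def fuse_scores(scores_nir, scores_vis, w=0.5):
    """Weighted-sum score-level fusion of NIR and visible matcher scores;
    the fused score is then thresholded or ranked for final classification."""
    return w * zscore(scores_nir) + (1.0 - w) * zscore(scores_vis)
```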
Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy.
Huang, Xiaoshuai; Fan, Junchao; Li, Liuju; Liu, Haosen; Wu, Runlong; Wu, Yi; Wei, Lisi; Mao, Heng; Lal, Amit; Xi, Peng; Tang, Liqiang; Zhang, Yunfeng; Liu, Yanmei; Tan, Shan; Chen, Liangyi
2018-06-01
To increase the temporal resolution and maximal imaging time of super-resolution (SR) microscopy, we have developed a deconvolution algorithm for structured illumination microscopy based on Hessian matrices (Hessian-SIM). It uses the continuity of biological structures in multiple dimensions as a priori knowledge to guide image reconstruction and attains artifact-minimized SR images with less than 10% of the photon dose used by conventional SIM while substantially outperforming current algorithms at low signal intensities. Hessian-SIM enables rapid imaging of moving vesicles or loops in the endoplasmic reticulum without motion artifacts and with a spatiotemporal resolution of 88 nm and 188 Hz. Its high sensitivity allows the use of sub-millisecond excitation pulses followed by dark recovery times to reduce photobleaching of fluorescent proteins, enabling hour-long time-lapse SR imaging of actin filaments in live cells. Finally, we observed the structural dynamics of mitochondrial cristae and structures that, to our knowledge, have not been observed previously, such as enlarged fusion pores during vesicle exocytosis.
Statistical image quantification toward optimal scan fusion and change quantification
NASA Astrophysics Data System (ADS)
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable, and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate the theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
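The optimal linear fusion result quoted above (anisotropy-aligned scans receiving higher weight, with variance below naïve averaging) is the classic minimum-variance estimator. Here is a compact sketch under the assumption of independent measurement errors with known variances; the function name and interface are illustrative.

```python
import numpy as np

def optimal_linear_fusion(estimates, variances):
    """Minimum-variance linear fusion of independent measurements.

    Weights are inversely proportional to each scan's error variance, so a
    scan whose voxel anisotropy aligns with the lesion elongation (lower
    variance) receives a higher weight than naive averaging would give it.
    """
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var
    w /= w.sum()
    est = np.asarray(estimates, dtype=float)
    fused = np.tensordot(w, est, axes=1)          # weighted combination
    fused_var = 1.0 / np.sum(1.0 / var)           # always <= min(variances)
    return fused, fused_var
```

For equal variances this reduces to naïve averaging; otherwise the fused variance 1/Σ(1/σᵢ²) is strictly smaller than that of the unweighted mean.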
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods degrades, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
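A quick way to check the locally-low-rank premise on real data is to measure the spectral subspace dimension patch by patch. This sketch uses an SVD energy threshold; the patch size, threshold, and function name are illustrative assumptions.

```python
import numpy as np

def local_subspace_dims(hsi, patch=16, energy=0.999):
    """Estimate the spectral subspace dimension in each non-overlapping patch.

    Pixels from a small neighborhood typically span a much lower-dimensional
    subspace than the full image, which is what makes the patch-wise fusion
    problem well-posed.
    """
    H, W, B = hsi.shape
    dims = []
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            X = hsi[i:i + patch, j:j + patch, :].reshape(-1, B)
            s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
            e = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative energy
            dims.append(int(np.searchsorted(e, energy) + 1))
    return np.array(dims)
```

If most patch dimensions come out at or below the number of multispectral bands, the patch-wise regression is well-posed in the sense the paper describes.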
Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure
NASA Astrophysics Data System (ADS)
Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.
2014-08-01
Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in the neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited only by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to the particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in the neutron star crust.
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
Mathematically, optical molecular imaging techniques including bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and Cerenkov luminescence tomography (CLT) are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources based on the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI, or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull obtained from marching cubes, and ICP. Different from conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels with the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
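The statistical-learning glue in this framework, e.g. coKriging, reduces in its simplest form to Gaussian-process interpolation of patch boundary values from coarse auxiliary data. The following one-dimensional RBF-kernel sketch is a toy stand-in: the kernel choice, length scale, and noise level are assumptions, and real coKriging would also model cross-covariances between the fine and coarse fields.

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_query, length=0.1, noise=1e-6):
    """1-D Gaussian-process (simple Kriging) interpolation with an RBF kernel,
    used here to estimate patch boundary values from coarse auxiliary data.
    x_train, y_train, x_query are 1-D arrays."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)   # kernel regression weights
    return k(x_query, x_train) @ weights
```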
James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun
2014-07-01
The structural and molecular heterogeneities of biological tissues demand the interrogation of the samples with multiple energy sources and visualization capabilities at varying spatial resolution and depth scales for obtaining complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound, and fluorescence imaging at multiple resolution scales from the tissue surface and depth is proposed in this paper. The system comprises two distinct forms of hardware-level integration, so as to form an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue, at multi-scale resolution (<1 µm to <0.5 mm).
Sensors and OBIA synergy for operational monitoring of surface water
NASA Astrophysics Data System (ADS)
Masson, Eric; Thenard, Lucas
2010-05-01
This contribution will focus on combining Object Based Image Analysis (i.e. OBIA with e-Cognition 8) and recent sensors (i.e. Spot 5 XS, Pan and ALOS Prism, Avnir2, Palsar) to address the technical feasibility of operational monitoring of surface water. Three cases, river meandering (India), flood mapping (Nepal), and seasonal water level monitoring of a dam (Morocco), using recent sensors will present various applications of surface water monitoring. The operational aspect will be demonstrated by sensor properties (i.e. spatial resolution and bandwidth) and data acquisition properties (i.e. multi-sensor, return period and near real-time acquisition), but also with OBIA algorithms (i.e. fusion of multi-sensor / multi-resolution data and batch processes). In the first case of river meandering (India) we will address multi-sensor and multi-date satellite acquisition to monitor river bed mobility within a floodplain using an ALOS dataset. It will demonstrate the possibility of an operational monitoring system that helps the geomorphologist in the analysis of fluvial dynamics and sediment budget for high-energy rivers. In the second case of flood mapping (Nepal) we will address near real-time Palsar data acquisition at high spatial resolution to monitor and map a flood extension. This ALOS sensor benefits from both SAR and L-band properties (i.e. atmospheric transparency, day/night acquisition, low sensitivity to surface wind). It is a real achievement compared to optical imagery or even other high resolution SAR systems (i.e. acquisition swath, bandwidth and data price). These advantages meet the operational needs set by crisis management of hydrological disasters, but also by the implementation of flood risk management plans. The last case of dam surface water monitoring (Morocco) will address an important issue of water resource management in countries affected by water scarcity. In such countries water users have to cope with over-exploitation, frequent drought periods, and now with foreseen climate change impacts. This third case will demonstrate the efficiency of SPOT 5 programming in synergy with the OBIA methodology to assess the evolution of dam surface water over a complete water cycle (i.e. 2008-09). In all three cases, the image segmentation and classification algorithms developed with the e-Cognition 8 software allow an easy-to-use implementation of simple to highly sophisticated OBIA rulesets, fully operational in batch processes. Finally, this contribution foresees the new opportunity of integrating Worldview 2 multispectral imagery (i.e. 8 bands), including its "coastal" band, which will also find application in continental surface water bathymetry. Worldview 2 is a recently launched satellite (October 2009) that began collecting earth observation data in January 2010. It is therefore a promising new remote sensing tool for developing operational hydrology in combination with high resolution SAR imagery and the OBIA methodology. This contribution will conclude on the strong potential for operationalisation in hydrology and water resources management that recent and future sensors and image analysis methodologies offer to water managers and decision makers.
Joint interpretation of geophysical data using Image Fusion techniques
NASA Astrophysics Data System (ADS)
Karamitrou, A.; Tsokas, G.; Petrou, M.
2013-12-01
Joint interpretation of geophysical data produced from different methods is a challenging area of research in a wide range of applications. In this work we apply several image fusion approaches to combine maps of electrical resistivity, electromagnetic conductivity, vertical gradient of the magnetic field, magnetic susceptibility, and ground penetrating radar reflections, in order to detect archaeological relics. We utilize data gathered by Arkansas University, with the support of the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP-CS1263). The area of investigation is Army City, situated in Riley County, Kansas, USA. The depth of the relics is estimated at about 30 cm from the surface, yet the surface indications of their existence are limited. We initially register the images from the different methods to correct for random offsets due to the use of hand-held devices during the measurement procedure. Next, we apply four different image fusion approaches to create combined images, using fusion with mean values, wavelet decomposition, the curvelet transform, and the curvelet transform enhancing the images along specific angles. We create seven combinations of pairs from the available geophysical datasets. The combinations are such that every pair includes at least one high-resolution method (resistivity or magnetic gradiometry). Our results indicate that in almost every case the method of mean values produces satisfactory fused images that incorporate the majority of the features of the initial images. However, the contrast of the final image is reduced, and in some cases the averaging process nearly eliminated features that are faint in the original images. Wavelet-based fusion also yields good results, providing additional control in selecting the feature wavelength. Curvelet-based fusion proved to be the most effective method in most cases. The ability of the curvelet domain to unfold the image in terms of space, wavenumber, and orientation provides important advantages compared with the other methods, by allowing the incorporation of a-priori information about the orientation of the potential targets.
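Of the fusion approaches compared above, the first two are easy to sketch. The snippet below assumes the PyWavelets package and co-registered, equally sized input maps, and uses a max-absolute rule for the detail coefficients, one common choice rather than necessarily the authors' exact rule.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def mean_fusion(a, b):
    """Pixel-wise averaging of two co-registered geophysical maps."""
    return 0.5 * (a + b)

def wavelet_fusion(a, b, wavelet="db2"):
    """Single-level wavelet fusion: average the approximation coefficients,
    keep the detail coefficient with the larger magnitude at each location."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(b, wavelet)
    def pick(u, v):
        return np.where(np.abs(u) >= np.abs(v), u, v)
    fused = (0.5 * (cA1 + cA2),
             (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```

The max-absolute detail rule preserves strong localized anomalies from either map, which is why wavelet fusion avoids the contrast loss that plain averaging exhibits.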
Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation
NASA Astrophysics Data System (ADS)
Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.
2014-05-01
Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation using different methods, and their possible fusion. With the aim of defining the potential and the problems deriving from the integration or fusion of metric data acquired with different survey techniques, the selected test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues: the opportunity to study and analyze an object as a whole, from two acquisition sensor locations, terrestrial and aerial. In particular, the work consists of evaluating the results obtained from a simple union or from the fusion of different 3D cloud models of the abbey, achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition was carried out by laser scanning survey. Both techniques allowed different point clouds to be extracted and processed and continuous 3D models to be generated, which are characterized by different scales, that is to say, different resolutions and different levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a single 3D model through a fusion of the different sensors' point clouds. The descriptive potential and the metric and thematic gains achievable with the final model exceed those offered by the two separate models.
An efficient multi-resolution GA approach to dental image alignment
NASA Astrophysics Data System (ADS)
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large number of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use the location and orientation of edge points as features, assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth, efficiently search the 6D space of affine parameters using a GA progressively across multi-resolution image versions, and use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
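The GA's fitness evaluation boils down to a Hausdorff distance between edge-point sets under a candidate affine transform. A sketch using scipy follows; the 6-parameter packing order (a11, a12, a21, a22, tx, ty) and the function name are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def alignment_cost(ref_edges, query_edges, affine):
    """Symmetric Hausdorff distance between reference edge points (M x 2)
    and affinely transformed query edge points (N x 2); the GA minimizes
    this cost over the six affine parameters."""
    A = np.array([[affine[0], affine[1]],
                  [affine[2], affine[3]]])
    t = np.asarray(affine[4:6], dtype=float)
    moved = query_edges @ A.T + t
    d1 = directed_hausdorff(ref_edges, moved)[0]
    d2 = directed_hausdorff(moved, ref_edges)[0]
    return max(d1, d2)   # symmetric Hausdorff distance
```

Evaluating this cost at progressively finer image resolutions, with the GA population seeded from the coarser level's best candidates, is the multi-resolution idea the paper describes.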
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Xiu; Wang, Yueyun
2018-03-01
Landsat data are widely used in various earth observations, but clouds interfere with the applications of the images. This paper proposes a weighted variational gradient-based fusion method (WVGBF) for high-fidelity thin cloud removal from Landsat images, which is an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates the gradient information from the reference band into the visible bands of the cloudy image to enhance spatial details and remove thin clouds. However, VGBF applies the same gradient constraints to the entire image, which causes color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to ensure the fidelity of the image. The distribution of the weight coefficient is related to a cloud thickness map, built using Independent Component Analysis (ICA) on multi-temporal Landsat images. Quantitatively, we use the R value to evaluate fidelity in the cloudless regions and the metric Q to evaluate clarity in the cloud areas. The experimental results indicate that the proposed method better removes thin clouds while achieving high fidelity.
Field-Portable Pixel Super-Resolution Colour Microscope
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of, e.g., >20 mm². This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source-shifting-based multi-height pixel super-resolution technique to mitigate 'rainbow'-like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from the brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope, Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource-poor settings. PMID:24086742
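The YUV-space trick described above separates brightness from chroma so the super-resolved reconstruction can supply the luminance channel while the low-resolution colour image supplies the chroma. A minimal sketch follows, using BT.601 coefficients as an assumed convention; the abstract does not state which RGB/YUV matrix is used.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumed convention, not from the paper)
RGB2YUV = np.array([[0.299, 0.587, 0.114],
                    [-0.147, -0.289, 0.436],
                    [0.615, -0.515, -0.100]])

def fuse_color(lowres_rgb_upsampled, highres_gray):
    """Replace the Y (brightness) channel of an upsampled colour image with
    a super-resolved grayscale reconstruction, keeping the U/V chroma.
    Both inputs are assumed co-registered; RGB is (..., 3), gray is (...)."""
    yuv = lowres_rgb_upsampled @ RGB2YUV.T   # forward colour transform
    yuv[..., 0] = highres_gray               # swap in high-res brightness
    return yuv @ np.linalg.inv(RGB2YUV).T    # back to RGB
```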
Information fusion based techniques for HEVC
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; Meyer-Baese, Uwe; Meyer-Baese, Anke; Grecos, Christos
2017-05-01
To address the conflicting requirements of a multi-parameter H.265/HEVC encoder system, this paper presents an analysis of a set of optimizations aimed at improving the trade-off between quality, performance, and power consumption for different reliability- and accuracy-sensitive applications. The method is based on Pareto optimization and has been tested at different resolutions on real-time encoders.
Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix
NASA Astrophysics Data System (ADS)
Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian
2015-07-01
We demonstrate lens-less quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ˜3.7 μm and an axial resolution of ˜5 μm, over a large imaging FOV of 24 mm². The resolution and FOV can be further improved straightforwardly by using larger image sensors with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine needs, or for reducing health care costs for point-of-care diagnostics in resource-limited environments.
Multispectral image enhancement processing for microsat-borne imager
NASA Astrophysics Data System (ADS)
Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin
2017-10-01
With the rapid development of remote sensing imaging technology, the microsatellite, a kind of tiny spacecraft, has appeared during the past few years. Many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, even less than 50 kilograms, making them slightly larger or smaller than a common miniature refrigerator. However, the optical system design can hardly be perfect due to the satellite room and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super resolution techniques to deal with the spatial resolution shortcomings of microsatellites. We test remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
Performing label-fusion-based segmentation using multiple automatically generated templates.
Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P
2013-10-01
Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct, by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively).
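The voxel-by-voxel label-voting step that fuses the template library's candidate segmentations can be sketched in a few lines of numpy. Majority voting is the simplest variant (weighted variants exist); the integer-label encoding and function name are assumptions for illustration.

```python
import numpy as np

def majority_vote(labelmaps):
    """Voxel-wise majority voting over a library of candidate segmentations.

    labelmaps: sequence of integer label volumes, all of identical shape.
    Returns the label receiving the most votes at each voxel.
    """
    stack = np.stack(labelmaps, axis=0)          # (n_templates, ...)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lab in range(n_labels):
        votes[lab] = (stack == lab).sum(axis=0)  # count votes per label
    return votes.argmax(axis=0)
```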
Sensor fusion to enable next generation low cost Night Vision systems
NASA Astrophysics Data System (ADS)
Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.
2010-04-01
The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems would be too costly for high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low-cost FIR optics, especially the implications of molding highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problem. To compensate for the effect of FIR-sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data with different resolutions and to data obtained from cameras with varying optics, to incorporate various sensor sensitivities. As it is not feasible to record representative data with all the different sensor configurations, transformation routines on existing high-resolution data recorded with high-sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of the first results, showing that a reduction in FIR sensor resolution can be compensated using fusion techniques and that a reduction in sensitivity can be compensated as well.
Multi-slice ptychography with large numerical aperture multilayer Laue lenses
Ozturk, Hande; Yan, Hanfei; He, Yan; ...
2018-05-09
Here, the highly convergent x-ray beam focused by multilayer Laue lenses with large numerical apertures is used as a three-dimensional (3D) probe to image layered structures with an axial separation larger than the depth of focus. Instead of collecting weakly scattered high-spatial-frequency signals, the depth-resolving power is provided purely by the intense central cone diverged from the focused beam. Using the multi-slice ptychography method combined with the on-the-fly scan scheme, two layers of nanoparticles separated by 10 μm are successfully reconstructed with 8.1 nm lateral resolution and with a dwell time as low as 0.05 s per scan point. This approach obtains high-resolution images with extended depth of field, which paves the way for multi-slice ptychography as a high throughput technique for high-resolution 3D imaging of thick samples.
Label fusion based brain MR image segmentation via a latent selective model
NASA Astrophysics Data System (ADS)
Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu
2018-04-01
Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the demand for higher accuracy, faster segmentation, and robustness remains a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model, to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explore the Kronecker delta function in the label prior, which is more suitable than other models, and design a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as a separate label, so that the background and the regions of interest receive equal treatment. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the missing-detail and performance problems of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on top of this framework a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, and colorizing gray fusion images and infrared images.
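The Laplacian-pyramid detail enhancement mentioned above amplifies band-pass layers before recombining them. This is an approximate sketch only: the decimation/upsampling scheme, level count, blur width, and gain are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy import ndimage

def laplacian_pyramid(img, levels=3, sigma=1.0):
    """Build an approximate Laplacian pyramid: band-pass detail layers plus
    a coarse residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = ndimage.gaussian_filter(cur, sigma)
        pyr.append(cur - low)        # band-pass detail layer
        cur = low[::2, ::2]          # decimate for the next octave
    pyr.append(cur)                  # coarse residual
    return pyr

def enhance_details(img, gain=1.5, levels=3):
    """Recombine the pyramid with gain > 1 applied to every detail layer,
    boosting high frequencies lost during sparse-dictionary colorization."""
    pyr = laplacian_pyramid(img, levels)
    out = pyr[-1]
    for detail in reversed(pyr[:-1]):
        out = ndimage.zoom(out, 2, order=1)          # upsample coarse image
        out = out[:detail.shape[0], :detail.shape[1]] + gain * detail
    return out
```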
Handheld microwave bomb-detecting imaging system
NASA Astrophysics Data System (ADS)
Gorwara, Ashok; Molchanov, Pavlo
2017-05-01
The proposed novel imaging technique provides all-weather high-resolution imaging and recognition capability for RF/Microwave signals, with good penetration through highly scattering media: fog, snow, dust, smoke, even foliage, camouflage, walls, and ground. Image resolution in the proposed imaging system is not limited by diffraction and is determined by the processor and the sampling frequency. The proposed imaging system can simultaneously cover a wide field of view, detect multiple targets, and can be multi-frequency and multi-function. The directional antennas in the imaging system can be closely positioned and installed in a cell-phone-sized handheld device, on a small aircraft, or distributed around a protected border or object. The non-scanning monopulse system allows a dramatic decrease in transmitted power and at the same time provides increased imaging range by integrating 2-3 orders of magnitude more signals than regular scanning imaging systems.
NASA Astrophysics Data System (ADS)
Huang, Xin; Chen, Huijun; Gong, Jianya
2018-01-01
Spaceborne multi-angle images with a high resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extracted by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domain by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)); (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicate the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Coffey, S. K.
2012-10-01
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm2 of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (˜200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
Multi-PSF fusion in image restoration of range-gated systems
NASA Astrophysics Data System (ADS)
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui
2018-07-01
For the task of image restoration, an accurate estimation of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere is associated with many factors, such as back scattering, background radiation, the diffraction limit, and the vibration of the platform. On one hand, due to the difficulty of constructing models for all factors, the kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in the images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach that combines the physical model with image features. With a fusion strategy using a GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. To address the problem that ground-truth images are difficult to obtain, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates through the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
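For the RL component of the semi-blind restoration, the classic Richardson-Lucy update with the fused kernel looks as follows. The iteration count and uniform initialization are assumptions, and the PSF is assumed normalized to unit sum; this is the textbook algorithm, not the authors' full EM+RL pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, iters=30, eps=1e-12):
    """Richardson-Lucy deconvolution of img with a (normalized) PSF.

    Each iteration re-blurs the current estimate, compares it to the
    observation, and multiplies by the back-projected ratio.
    """
    est = np.full_like(img, img.mean(), dtype=float)  # flat initial guess
    psf_flip = psf[::-1, ::-1]                        # adjoint kernel
    for _ in range(iters):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = img / (blurred + eps)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est
```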
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technologies for capturing panoramic (360-degree) three-dimensional information in a real environment have many applications, in fields such as virtual and mixed reality, security, and robot navigation. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama whose spatial resolution equals that of the original images acquired with the regular camera, and we also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results on both synthetic and real images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision
NASA Astrophysics Data System (ADS)
Vetter, Kai; Barnowski, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.
2018-01-01
The development of portable gamma-ray imaging instruments, in combination with recent advances in sensor and related computer vision technologies, enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant to nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, with their small fields of view and well-constrained extent of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained, and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, in analogy to the fusion of functional imaging with anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities and provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited).
Delgado-Aparicio, L F; Maddox, J; Pablant, N; Hill, K; Bitter, M; Rice, J E; Granetz, R; Hubbard, A; Irby, J; Greenwald, M; Marmar, E; Tritz, K; Stutman, D; Stratton, B; Efthimion, P
2016-11-01
A compact multi-energy soft x-ray camera has been developed for time-, energy-, and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, ΔZeff, and ne,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible, using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. This technique should also be explored as a burning plasma diagnostic, in view of its simplicity and robustness.
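The electron-temperature measurement described above rests on the approximately exponential energy dependence of the continuum. In the idealized two-band case it reduces to a ratio formula, sketched below; real analyses fit the slope over many energy ranges and inverted emissivity profiles, so treat this as a toy illustration with hypothetical names.

```python
import numpy as np

def electron_temperature(emiss_lo, emiss_hi, e_lo, e_hi):
    """Te from the continuum slope: with emissivity ~ exp(-E/Te), the ratio
    of two narrow energy bands centered at e_lo < e_hi (same units as Te)
    gives Te = (e_hi - e_lo) / ln(emiss_lo / emiss_hi).

    Inputs must be continuum-only emissivities (line emission removed);
    scalar or array inputs work element-wise.
    """
    ratio = np.asarray(emiss_lo, dtype=float) / np.asarray(emiss_hi, dtype=float)
    return (e_hi - e_lo) / np.log(ratio)
```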
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3 m nadir GSD) plus multi-spectral (1.2 m), short-wave infrared SWIR (3.7 m), and multi-spectral CAVIS (30 m). The nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas the eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification result in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7 m GSD, whereas inter-SWIR, band-to-band registration is 0.3 SWIR pixel. Numerous high resolution spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute the bin widths of the intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to the instrument shot noise. Next, we show how to take advantage of multiple VNIR bands and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes the VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally, we explore how to register images of different spatial resolutions. We show that MI computed on a low-resolution grid is more sensitive to the alignment parameters than MI computed on a high-resolution grid. The proposed modifications allow us to improve the VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for and clouds and water are masked out.
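The shot-noise-scaled histogram binning is the key implementation detail called out above. Here is a sketch of MI with caller-supplied bin widths; the natural-log convention, the edge construction, and the function name are assumptions.

```python
import numpy as np

def mutual_information(a, b, bin_width_a, bin_width_b):
    """Mutual information between two co-registered bands, with histogram
    bin widths tied to the instruments' shot-noise levels rather than a
    fixed bin count."""
    bins_a = np.arange(a.min(), a.max() + bin_width_a, bin_width_a)
    bins_b = np.arange(b.min(), b.max() + bin_width_b, bin_width_b)
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
    p = h / h.sum()                                   # joint distribution
    px = p.sum(axis=1, keepdims=True)                 # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
```

Sweeping a candidate shift of the SWIR band against a (CCA-combined) VNIR reference and maximizing this score is the registration loop the paper describes; the CCA combination itself can be computed from the cross-covariance of the VNIR bands with the SWIR band.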
High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.
Sereda, A; Moreau, J; Canva, M; Maillart, E
2014-04-15
Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interaction characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, therefore decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fit of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution of the order of ten picometers using only five wavelength measurements per point. Ultimately, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime independently of variations of the buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
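A minimal sketch of the fitting step is given below, assuming a linear baseline plus a Lorentzian-shaped resonance dip as the five-parameter model; the paper does not specify the functional form, so the model, the initial guess, and the function names are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def reflectivity_model(lam, r0, slope, depth, lam0, width):
    """Hypothetical 5-parameter spectral reflectivity model:
    linear baseline minus a Lorentzian resonance dip at lam0."""
    return r0 + slope * (lam - lam0) - depth / (1.0 + ((lam - lam0) / width) ** 2)

def resonance_position(lams, reflectivities, p0):
    """Fit the five sampled wavelengths and return the dip position lam0;
    tracking lam0 over time gives the spectral shift of interest."""
    popt, _ = curve_fit(reflectivity_model, lams, reflectivities,
                        p0=p0, maxfev=10000)
    return popt[3]

# Usage idea: shift = resonance_position(lams, R_t, p0) - resonance_position(lams, R_0, p0)
```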
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. The cameras obtaining the images for a stereo view are converged at a point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
NASA Astrophysics Data System (ADS)
Wang, F. L.; Mu, B. Z.; Wang, Z. S.; Gu, C. S.; Zhang, Z.; Qin, S. J.; Chen, L. Y.
A grazing-incidence Kirkpatrick-Baez (K-B) microscope was designed for hard x-ray (8 keV, Cu Kα radiation) imaging in inertial confinement fusion (ICF) diagnostic experiments. Ray-tracing software was used to simulate the optical system performance. The optimized theoretical resolution of the K-B microscope was about 2 microns, and better than 10 microns over a 200 micron field of view. Tungsten and boron carbide were chosen as the multilayer materials; the multilayer was deposited onto a silicon wafer substrate and its reflectivity was measured by x-ray diffraction (XRD). The reflectivity of the supermirror was about 20% within a 0.3% bandwidth. An 8 keV Cu-target x-ray tube source was used in the x-ray imaging experiments, and x-ray images at 1x and 2x magnification were obtained.
Simultaneous multi-headed imager geometry calibration method
Tran, Vi-Hoa (Newport News, VA); Meikle, Steven Richard (Penshurst, AU); Smith, Mark Frederick (Yorktown, VA)
2008-02-19
A method for calibrating multi-headed high sensitivity and high spatial resolution dynamic imaging systems, especially those useful in the acquisition of tomographic images of small animals. The method of the present invention comprises: simultaneously calibrating two or more detectors to the same coordinate system; and functionally correcting for unwanted detector movement due to gantry flexing.
A novel imaging method for photonic crystal fiber fusion splicer
NASA Astrophysics Data System (ADS)
Bi, Weihong; Fu, Guangwei; Guo, Xuan
2007-01-01
Because the structure of photonic crystal fiber (PCF) is very complex, it is difficult for a traditional fiber fusion splicer to obtain optical axis information for PCF; a new optical imaging method is therefore needed to capture cross-section information of the fiber. Based on the complex character of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD) image sensor as the imaging element. The thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches; this high-performance device provides high contrast and high resolution in low-light-level surveillance imaging. To realize precise focusing of the image, an ultra-high-precision stepper motor adjusts the position of the imaging lens. In this way, clear cross-section information of the PCF can be obtained, which can then be analyzed in detail with digital image processing techniques. This cross-section information can be used to distinguish different sorts of PCF, to compute parameters such as the air-hole size and cladding structure of the PCF, and to provide the analysis data needed by PCF fixation, adjustment, alignment, fusion and cutting systems.
FFT-enhanced IHS transform method for fusing high-resolution satellite images
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2007-01-01
Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation of commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new-generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
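A minimal sketch of the idea follows: take the low frequencies of the multispectral intensity and the high frequencies of the panchromatic image in the Fourier domain, then perform the usual IHS substitution. The Gaussian filter design, the cutoff value, and the simple mean-based intensity are assumptions, not the authors' exact filters.

```python
import numpy as np

def lowpass_mask(shape, cutoff):
    """Radial Gaussian low-pass mask in the (fftshifted) frequency plane;
    `cutoff` is a hypothetical tuning parameter."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    r2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
    return np.exp(-r2 / (2 * cutoff ** 2))

def fft_ihs_fusion(ms, pan, cutoff=30):
    """FFT-enhanced IHS fusion sketch. `ms` is (H, W, 3), co-registered and
    upsampled to the pan grid; `pan` is (H, W)."""
    intensity = ms.mean(axis=2)                # simple IHS intensity component
    lp = lowpass_mask(pan.shape, cutoff)
    I_f = np.fft.fftshift(np.fft.fft2(intensity))
    P_f = np.fft.fftshift(np.fft.fft2(pan))
    fused_f = I_f * lp + P_f * (1 - lp)        # low freq from MS, high from pan
    new_I = np.real(np.fft.ifft2(np.fft.ifftshift(fused_f)))
    # Classic IHS substitution: add the intensity change to every band
    return ms + (new_I - intensity)[..., None]
```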
[Time consumption and quality of an automated fusion tool for SPECT and MRI images of the brain].
Fiedler, E; Platsch, G; Schwarz, A; Schmiedehausen, K; Tomandl, B; Huk, W; Rupprecht, Th; Rahn, N; Kuwert, T
2003-10-01
Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and time needed for, fusing MRI and SPECT images using semiautomated dedicated software. PATIENTS, MATERIAL AND METHOD: In 32 patients, regional cerebral blood flow was measured using (99m)Tc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired using a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation by an experienced user of the software, using an entropy-minimizing algorithm, and the fusion results were classified. We measured the time needed for the automated fusion procedure and, where needed, the time for manual realignment after an insufficient automated fusion. The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached with the automated approach; the remaining 26 data sets required manual correction. The sum of the time required for automated fusion and for manual correction averaged 320 s (50-886 s). The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool thus delivered an optimal fit in 20% of cases, while in 80% manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
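A minimal sketch of the wavelet-based fusion step, in the spirit of DCFM: the high-resolution hologram supplies luminance detail, while the low-resolution color image supplies chromaticity and coarse brightness. The YCbCr-style split, wavelet family, and level count are assumptions, not the authors' exact recipe.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def dcfm_colorize(holo_gray, color_lowres, levels=3):
    """Fuse a high-res mono hologram with a low-res color image by swapping
    wavelet detail coefficients of the luminance channel."""
    H, W = holo_gray.shape
    s = [H / color_lowres.shape[0], W / color_lowres.shape[1], 1]
    color_up = zoom(color_lowres.astype(float), s, order=1)[:H, :W]
    r, g, b = color_up[..., 0], color_up[..., 1], color_up[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance of the color image
    cr, cb = r - y, b - y                      # simple chroma channels
    # Approximation coefficients from the color image, details from the hologram
    c_y = pywt.wavedec2(y, 'db4', level=levels)
    c_h = pywt.wavedec2(holo_gray.astype(float), 'db4', level=levels)
    y_fused = pywt.waverec2([c_y[0]] + list(c_h[1:]), 'db4')[:H, :W]
    # Invert the forward luma/chroma split consistently
    r_f = y_fused + cr
    b_f = y_fused + cb
    g_f = (y_fused - 0.299 * r_f - 0.114 * b_f) / 0.587
    return np.stack([r_f, g_f, b_f], axis=-1)
```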
Fast high resolution reconstruction in multi-slice and multi-view cMRI
NASA Astrophysics Data System (ADS)
Velasco Toledo, Nelson; Romero Castro, Eduardo
2015-01-01
Cardiac magnetic resonance imaging (cMRI) is a useful tool in diagnosis, prognosis and research since it functionally tracks the heart structure. Although useful, this imaging technique is limited in spatial resolution because the heart is constantly moving; other uncontrolled conditions, such as patient motion and volumetric changes during the apnea periods in which data are acquired, further limit the time available to capture high-quality information. This paper presents a very fast and simple strategy to reconstruct high-resolution 3D images from a set of low-resolution series of 2D images. The strategy is based on an information-reallocation algorithm which uses the DICOM header to relocate voxel intensities on a regular grid. An interpolation method fills the empty positions by resampling the low-resolution information to estimate the missing data, and a final Gaussian filter denoises the result. The reconstruction is evaluated against a super-resolution reconstructed image used as reference. The evaluation reveals that the method maintains the overall heart structure with a small loss of detail (edge sharpening and blurring); some artifacts related to the quality of the input information are detected. The proposed method requires little time and few computational resources.
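A minimal sketch of the reallocation-plus-interpolation idea follows, assuming `positions` holds each slice's voxel centers in patient coordinates (as derivable from the DICOM ImagePositionPatient/ImageOrientationPatient tags; reading those tags is left out). Grid shape, spacing, and the smoothing sigma are illustrative parameters.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def reallocate_and_interpolate(slices, positions, grid_shape, grid_spacing,
                               sigma=0.7):
    """Scatter 2D-slice voxel intensities into a regular 3D grid, fill the
    gaps by interpolation, then denoise with a Gaussian filter."""
    pts = np.concatenate([p.reshape(-1, 3) for p in positions])
    vals = np.concatenate([s.ravel() for s in slices])
    # Regular target grid in the same physical frame
    axes = [np.arange(n) * d for n, d in zip(grid_shape, grid_spacing)]
    gz, gy, gx = np.meshgrid(*axes, indexing='ij')
    grid_pts = np.stack([gz, gy, gx], axis=-1).reshape(-1, 3)
    vol = griddata(pts, vals, grid_pts, method='linear')
    # Linear interpolation leaves NaNs outside the convex hull; use nearest there
    nn = griddata(pts, vals, grid_pts, method='nearest')
    vol = np.where(np.isnan(vol), nn, vol).reshape(grid_shape)
    return gaussian_filter(vol, sigma)
```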
Snorkelling between the stars: submarine methods for astronomical observations.
NASA Astrophysics Data System (ADS)
Velasco, S.; Quevedo, E.; Font, J.; Oscoz, A.; López, R. L.; Puga, M.; Rebolo, R.; Hernández Brito, J.; Llinas, O.; Marrero Callico, G.; Sarmiento, R.
2017-03-01
Trying to reach diffraction-limited astronomical observations from ground-based telescopes is very challenging due to the atmospheric effects contributing to a general blurring of the images. However, astronomy is not the only science facing turbulence problems; obtaining quality images of the undersea world is as ambitious a goal there as it is on the sky. One of the solutions contemplated to reach high-resolution images is the use of multiple frames of the same target, known as fusion super-resolution (Quevedo et al. 2015), which is also the principle behind Lucky Imaging (Velasco et al. 2016). Here we present the successful result of joining efforts between the undersea and the astronomical research done at the Canary Islands.
Applications of PET CT in clinical practice: Present and future
NASA Astrophysics Data System (ADS)
Costa, Durval Campos
2007-02-01
Radionuclide imaging, and especially positron emission tomography (PET), has already demonstrated its benefits in three major medical fields, i.e. neurology, cardiology and particularly clinical oncology. More recently, the combination of PET and X-ray computed tomography (CT) as PET-CT has led to a significant increase in the already large number of clinical applications of this imaging modality. This "anatomy-metabolic fusion", also known as metabolic imaging, has its future assured if we can: (1) improve resolution, reducing the partial volume effect; (2) achieve very fast whole-body imaging; (3) obtain accurate quantification of specific functions with higher contrast resolution; and, if possible, (4) reduce exposure rates due to the unavoidable use of ionizing radiation.
WE-H-206-00: Advances in Preclinical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lihong V. Wang: Photoacoustic tomography (PAT), combining non-ionizing optical and ultrasonic waves via the photoacoustic effect, provides in vivo multiscale functional, metabolic, and molecular imaging. Broad applications include imaging of the breast, brain, skin, esophagus, colon, vascular system, and lymphatic system in humans or animals. Light offers rich contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to penetration within the optical diffusion limit (∼1 mm in the skin). Ultrasonic imaging, on the contrary, provides fine spatial resolution but suffers from both poor contrast in early-stage tumors and strong speckle artifacts. In PAT, pulsed laser light penetrates tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The ultrasonic waves, orders of magnitude less scattered than optical waves, are then detected to form high-resolution images of optical absorption at depths up to 7 cm, conquering the optical diffusion limit. PAT is the only modality capable of imaging across the length scales of organelles, cells, tissues, and organs (up to whole-body small animals) with consistent contrast. This rapidly growing technology promises to enable multiscale biological research and accelerate translation from microscopic laboratory discoveries to macroscopic clinical practice. PAT may also hold the key to label-free early detection of cancer by in vivo quantification of hypermetabolism, the quintessential hallmark of malignancy. Learning Objectives: To understand the contrast mechanism of PAT. To understand the multiscale applications of PAT. Benjamin M. W. Tsui: Multi-modality molecular imaging instrumentation and techniques have been major developments in small animal imaging that have contributed significantly to biomedical research during the past decade. The initial development was an extension of clinical PET/CT and SPECT/CT from humans to small animals, combining the unique functional information obtained from PET and SPECT with the anatomical information provided by CT in registered multi-modality images. The requirements of imaging a mouse, whose size is an order of magnitude smaller than a human's, have spurred advances in new radiation detector technologies, novel imaging system designs and special image reconstruction and processing techniques. Examples are new detector materials and designs with high intrinsic resolution; multi-pinhole (MPH) collimator designs with much improved resolution and detection efficiency compared to conventional SPECT collimator designs; 3D high-resolution, artifact-free MPH and sparse-view image reconstruction techniques; and iterative image reconstruction methods with system response modeling for resolution recovery and image noise reduction, giving much improved image quality. The spatial resolution of PET and SPECT has improved from ∼6-12 mm, to ∼1 mm a few years ago, to sub-millimeter today. A recent commercial small-animal SPECT system has achieved a resolution of ∼0.25 mm, which surpasses that of a state-of-the-art PET system, whose resolution is limited by the positron range. More recently, multi-modality small-animal (SA) PET/MRI and SPECT/MRI systems have been developed in research laboratories.
Also, multi-modality SA imaging systems that include other imaging modalities such as optical and ultrasound are being actively pursued. In this presentation, we will provide a review of the development, recent advances and future outlook of multi-modality molecular imaging of small animals. Learning Objectives: To learn about the two major multi-modality molecular imaging techniques for small animals. To learn about the spatial resolution achievable by molecular imaging systems for small animals today. To learn about the new multi-modality imaging instrumentation and techniques being developed. Sang Hyun Cho: X-ray fluorescence (XRF) imaging, such as x-ray fluorescence computed tomography (XFCT), offers unique capabilities for accurate identification and quantification of metals within imaged objects. As a result, it has emerged as a promising quantitative imaging modality in recent years, especially in conjunction with metal-based imaging probes. This talk will familiarize the audience with the basic principles of XRF/XFCT imaging. It will also cover the latest development of benchtop XFCT technology. Additionally, the use of metallic nanoparticles, such as gold nanoparticles, in conjunction with benchtop XFCT will be discussed within the context of preclinical multimodal multiplexed molecular imaging. Learning Objectives: To learn the basic principles of XRF/XFCT imaging. To learn the latest advances in benchtop XFCT development for preclinical imaging. Disclosures: funding support received from NIH and DOD, GE Healthcare, and Siemens AX; patent royalties received from GE Healthcare. L. Wang, funding support: NIH; COI: Microphotoacoustics. S. Cho: NIH/NCI grant R01CA155446, DOD/PCRP grant W81XWH-12-1-0198.
Nakamura, Masanobu; Yoneyama, Masami; Tabuchi, Takashi; Takemura, Atsushi; Obara, Makoto; Sawano, Seishi
2012-01-01
Detailed information on anatomy and hemodynamics in cerebrovascular disorders such as AVM and Moyamoya disease is mandatory for definitive diagnosis and treatment planning. The arterial spin labeling technique has come to be applied to magnetic resonance angiography (MRA) and perfusion imaging in recent years. However, those non-contrast techniques are mostly limited to single-frame images. Recently we have proposed a non-contrast time-resolved MRA technique termed contrast-inherent inflow-enhanced multi-phase angiography combining spatial-resolution echo-planar-imaging-based signal targeting and alternating radiofrequency (CINEMA-STAR). CINEMA-STAR can capture the blood flow in the major intracranial arteries at 70 ms intervals and thus permits full observation of the vascular architecture by preparing MIP images of axial acquisitions with high spatial resolution. This preliminary study demonstrates the usefulness of the CINEMA-STAR technique in evaluating the cerebral vasculature.
Akita, Hidetaka; Kudo, Asako; Minoura, Arisa; Yamaguti, Masaya; Khalil, Ikramy A; Moriguchi, Rumiko; Masuda, Tomoya; Danev, Radostin; Nagayama, Kuniaki; Kogure, Kentaro; Harashima, Hideyoshi
2009-05-01
Efficient targeting of DNA to the nucleus is a prerequisite for effective gene therapy. The gene-delivery vehicle must penetrate the plasma membrane and the DNA-impermeable, double-membraned nuclear envelope, and deposit its DNA cargo in a form ready for transcription. Here we introduce a concept for overcoming intracellular membrane barriers that involves stepwise membrane fusion. To achieve this, a nanotechnology was developed that creates a multi-layered nanoparticle, which we refer to as a Tetra-lamellar Multi-functional Envelope-type Nano Device (T-MEND). The critical structural elements of the T-MEND are a DNA-polycation condensed core coated with two nuclear-membrane-fusogenic inner envelopes and two endosome-fusogenic outer envelopes, which are shed in stepwise fashion. A double-lamellar membrane structure is required for nuclear delivery via stepwise fusion with the double-layered nuclear membrane. Intracellular membrane fusions to endosomes and nuclear membranes were verified by spectral imaging of fluorescence resonance energy transfer (FRET) between donor and acceptor fluorophores dually labeled on the liposome surface. Coating the core with the minimum number of nucleus-fusogenic lipid envelopes (i.e., two) is essential to facilitate transcription. As a result, the T-MEND achieves dramatic levels of transgene expression in non-dividing cells.
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V
2015-08-24
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order and has the potential to widen the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with image-based optimization control, which allows the correction of images to the diffraction limit and reduces hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide-field microscope, using a Shack-Hartmann wavefront sensor for closed-loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier-domain optical coherence tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the inserted multi-actuator objective lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.
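As a rough sketch of image-based wavefront-sensorless control of such a lens: iterate over Zernike coefficients and keep changes that improve an image sharpness metric. The metric, the coordinate-wise hill climb, and the hypothetical hardware callback `capture_with_coeffs` (set actuators, grab a frame) are assumptions standing in for whatever driver and optimizer the real system uses.

```python
import numpy as np

def image_sharpness(img):
    """Simple sharpness metric: total squared gradient energy."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def optimize_lens(capture_with_coeffs, n_modes=12, step=0.1, iters=3):
    """Coordinate-wise hill climb over Zernike coefficients driving the
    adaptive lens; `capture_with_coeffs(coeffs) -> image` is hypothetical."""
    coeffs = np.zeros(n_modes)
    best = image_sharpness(capture_with_coeffs(coeffs))
    for _ in range(iters):
        for m in range(n_modes):
            for delta in (step, -step):
                trial = coeffs.copy()
                trial[m] += delta
                score = image_sharpness(capture_with_coeffs(trial))
                if score > best:
                    coeffs, best = trial, score
        step *= 0.5  # refine the search as the loop converges
    return coeffs
```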
Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa
1995-09-01
Loral Canada completed (May 1995) a Department of National Defence (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute-measurement-oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). On purely theoretical considerations, a central-level fusion architecture would lead to a higher-performance fusion system. However, there are a number of system and fusion-architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g. variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft, in the context of the mission requirements and environmental conditions.
NASA Astrophysics Data System (ADS)
Wang, Zheng; Mao, Zhihua; Xia, Junshi; Du, Peijun; Shi, Liangliang; Huang, Haiqing; Wang, Tianyu; Gong, Fang; Zhu, Qiankun
2018-06-01
The cloud cover over the South China Sea and its coastal area is relatively extensive throughout the year, which limits the potential application of optical remote sensing there. The HJ charge-coupled device (HJ-CCD) has the advantages of a wide field, high temporal resolution, and short repeat cycle. However, this instrument suffers from having only four relatively low-quality bands, which cannot adequately resolve features at long wavelengths. The Landsat Enhanced Thematic Mapper-plus (ETM+) provides high-quality data; however, its Scan Line Corrector (SLC) stopped working, causing striping of the remotely sensed images and dramatically reducing the coverage of the ETM+ data. In order to combine the advantages of the HJ-CCD and Landsat ETM+ data, we adopted a back-propagation artificial neural network (BP-ANN) to fuse these two data types for this study. The results showed that the fused output not only retains the data intactness of the HJ-CCD, but also gains the multi-spectral and high radiometric resolution advantages of the ETM+ data. Moreover, the fused data were analyzed qualitatively, quantitatively and from a practical application point of view. Experimental studies indicated that the fused data have full spatial distribution, multi-spectral bands, high radiometric resolution, a small difference between the observed and fused output data, and a high correlation between the observed and fused data. The excellent performance in practical application further demonstrates that the fused data are of high quality.
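A minimal sketch of such a BP-ANN fusion: learn a per-pixel mapping from the four HJ-CCD bands to the ETM+ bands using pixels where both sensors have valid data (e.g., outside the SLC-off gaps), then predict ETM+-like values everywhere the HJ-CCD has coverage. The network size, training settings, and NaN-based validity mask are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_fusion_ann(hj_bands, etm_bands):
    """hj_bands: (H, W, 4) HJ-CCD; etm_bands: (H, W, B) ETM+ with NaNs
    in the SLC-off gaps. Returns a trained band-mapping network."""
    valid = ~np.isnan(etm_bands).any(axis=-1)
    X = hj_bands[valid]                       # (N, 4) input band vectors
    y = etm_bands[valid]                      # (N, B) target band vectors
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500)
    ann.fit(X, y)
    return ann

def fuse(ann, hj_bands):
    """Predict ETM+-like bands for every HJ-CCD pixel."""
    h, w, b = hj_bands.shape
    out = ann.predict(hj_bands.reshape(-1, b))
    return out.reshape(h, w, -1)
```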
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step in image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented with digitally reconstructed radiographs (DRRs), and automatic segmentation of the region of interest (ROI) reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using Sobel and Gabor kernels to obtain the contours and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants, as well as in multi-angle views.
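The two feature streams fused by such a model can be sketched as below: a Sobel gradient-magnitude map for vertebral contours and a small Gabor filter bank response for texture, stacked as input channels for the network. The kernel sizes, Gabor parameters, and orientation count are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def contour_and_texture_channels(xray):
    """Build the two FFDL-style feature channels from a grayscale X-ray."""
    img = cv2.normalize(xray.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    # Contour stream: Sobel gradient magnitude
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    contour = cv2.magnitude(gx, gy)
    # Texture stream: max response over a 4-orientation Gabor bank
    texture = np.zeros_like(img)
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5)
        texture = np.maximum(texture, cv2.filter2D(img, cv2.CV_32F, kern))
    return np.stack([contour, texture], axis=0)  # (2, H, W) CNN input
```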
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at the red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and to digitally retrieve the individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging under simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging three-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Blind image fusion for hyperspectral imaging with the directional total variation
NASA Astrophysics Data System (ADS)
Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane
2018-04-01
Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsically low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem in which both the fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.
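To give a flavor of a directional-TV-style prior, the sketch below penalizes the component of the fused image's gradient that is not aligned with the edges of the high-resolution side image, using plain gradient descent on a smoothed energy. The real model includes a sampling/blur operator and blind kernel estimation, and uses different optimizers; dropping those (the low-resolution input is assumed pre-upsampled) is a simplifying assumption.

```python
import numpy as np

def grad(u):
    """Forward differences with replicated (Neumann) boundary."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def dtv_fusion(low_res_up, side, lam=0.1, steps=500, tau=0.1, eps=1e-3):
    """Gradient-descent sketch of fusion with a directional-TV-like prior."""
    sx, sy = grad(side.astype(float))
    norm = np.sqrt(sx ** 2 + sy ** 2) + 1e-8
    dx, dy = sx / norm, sy / norm               # edge direction field of side image
    u = low_res_up.astype(float).copy()
    for _ in range(steps):
        ux, uy = grad(u)
        aligned = ux * dx + uy * dy             # component along side-image edges
        px, py = ux - aligned * dx, uy - aligned * dy   # penalized component
        mag = np.sqrt(px ** 2 + py ** 2 + eps ** 2)     # smoothed TV magnitude
        # Negative divergence of the subgradient field (backward differences)
        div = (np.diff(px / mag, axis=1, prepend=0) +
               np.diff(py / mag, axis=0, prepend=0))
        u -= tau * ((u - low_res_up) - lam * div)
    return u
```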
Near infrared and visible face recognition based on decision fusion of LBP and DCT features
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-03-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit these drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. First, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Second, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to their respective classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially for the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
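The feature extraction and decision fusion can be sketched as follows: partitioned LBP histograms, a low-frequency DCT block, and a majority vote over the per-feature classifier labels. The block/cell sizes, the number of retained DCT coefficients, and the plain majority vote are assumptions; the paper's fusion rule may weight the classifiers differently.

```python
import numpy as np
from scipy.fft import dctn
from skimage.feature import local_binary_pattern

def lbp_histogram(face, blocks=4, P=8, R=1):
    """Partitioned LBP: concatenate the uniform-LBP histogram of each cell."""
    lbp = local_binary_pattern(face, P, R, method='uniform')
    h, w = face.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            cell = lbp[i * h // blocks:(i + 1) * h // blocks,
                       j * w // blocks:(j + 1) * w // blocks]
            hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

def dct_lowfreq(face, k=8):
    """Low-frequency 2D-DCT coefficients (top-left k x k block)."""
    return dctn(face.astype(float), norm='ortho')[:k, :k].ravel()

def decision_fusion(label_nir_dct, label_nir_lbp, label_vis_lbp):
    """Majority vote over the three classifiers' predicted identity labels."""
    votes = np.array([label_nir_dct, label_nir_lbp, label_vis_lbp])
    ids, counts = np.unique(votes, return_counts=True)
    return ids[np.argmax(counts)]
```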
Downscattered Neutron Imaging for ICF
NASA Astrophysics Data System (ADS)
Moran, Michael; Haan, Steven; Hatchett, Stephen; Izumi, Nobuhiko; Koch, Jeffrey; Lerche, Richard; Phillips, Thomas
2002-11-01
Diagnostics which measure the performance of implosions are critical for the success of ignition. Neutron yield, fusion-burn time history, and images are examples of important diagnostics. Neutron and x-ray images will record the geometries of compressed targets during the fusion-burn process. Such images provide a critical test of the accuracy of numerical modeling of ICF experiments. Imaging of downscattered neutrons, by using energy-resolved detection, offers the intriguing advantage of being able to provide independent images of burning and non-burning regions of the nuclear fuel. The usefulness of downscattered neutron imaging depends on both the information content of the data and on the quality of the data that can be recorded. The information content will relate to the characteristic neutron spectra that are associated with emission from different regions of the source. Numerical modeling of ICF fusion burn will be required to interpret the corresponding energy-dependent images. The exercise will be useful only if the images can be recorded with sufficient definition to reveal the spatial and energy-dependent features of interest. Several options are being evaluated with respect to the feasibility of providing the desired simultaneous spatial and energy resolution. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
NASA Astrophysics Data System (ADS)
Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet-Brunet, Valérie
2017-04-01
Forest stands are the basic units for forest inventory and mapping. Stands are defined as large forested areas (e.g., ⩾ 2 ha) of homogeneous tree species composition and age. Their accurate delineation is usually performed by human operators through visual analysis of very high resolution (VHR) infra-red images. This task is tedious and highly time-consuming, and should be automated for scalability and efficient updating. In this paper, a method based on the fusion of airborne lidar data and VHR multispectral images is proposed for the automatic delineation of forest stands containing one dominant species (purity above 75%). This is the key preliminary task for forest land-cover database updating. The multispectral images give information about the tree species, whereas the 3D lidar point clouds provide geometric information on the trees and allow their individual extraction. Multi-modal features are computed at both the pixel and object levels, the objects being individual trees extracted from the lidar data. A supervised classification is then performed at the object level in order to coarsely discriminate the existing tree species in each area of interest. The classification results are further processed to obtain homogeneous areas with smooth borders by employing an energy-minimization framework, in which additional constraints are joined to form the energy function. The experimental results show that the proposed method provides very satisfactory results both in terms of stand labeling and delineation (overall accuracy between 84% and 99%).
Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.
Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M
2016-08-01
Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to visualize these steps, we established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm, and the other expressing RFP (mCherry) linked to histone H2B in the nucleus and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed with an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single or multiple fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion; red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that makes them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
A Spiral-Based Downscaling Method for Generating 30 m Time Series Image Data
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.
2017-09-01
The spatial detail and update frequency of land cover data are important factors in land surface dynamic monitoring at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g., small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high-spatial-resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolutions of available remote sensing data such as the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and update frequency of current land cover mapping at the same time. Generating high-resolution time series may be a compromise that covers this shortage in the land cover updating process. One popular approach is to downscale multi-temporal MODIS data with high-spatial-resolution auxiliary data such as Landsat. However, the usual manner of downscaling a pixel based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high-spatial-resolution pixels; the downscaled multi-temporal data can therefore hardly reach the spatial resolution of Landsat data. A spiral-based method is introduced here to downscale image data of low spatial and high temporal resolution to high spatial and high temporal resolution. By searching for similar pixels in the adjacent region along a spiral, a pixel set is built up pixel by pixel, and the underdetermined problem is largely prevented when solving the linear system constructed on this set. With ordinary least squares, the method inverts the endmember values of the linear system, reconstructs the high-spatial-resolution image band by band on the basis of a high-spatial-resolution class map and the endmember values, and then forms the high-spatial-resolution time series image by image. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate the effectiveness of avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and the window was conducted. Further, MODIS NDVI and Landsat image data were adopted to generate 30 m time series NDVI in the remote sensing image downscaling experiment. The simulated results show that the proposed method performs robustly when downscaling pixels in heterogeneous regions and indicate that it is superior to traditional window-based methods. The high-resolution time series generated may benefit the mapping and updating of land cover data.
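A minimal sketch of the spiral-based unmixing for one coarse pixel and one band: walk outward along the spiral collecting coarse pixels (rows of class fractions and coarse values) until the ordinary-least-squares system for the endmember values is well posed. Here `fractions[iy, ix]` holds the per-class area fractions of each coarse pixel, derived from the fine-resolution class map; details such as the similarity test for candidate pixels are omitted (assumptions).

```python
import numpy as np

def spiral_offsets(max_ring):
    """Yield (dy, dx) offsets ring by ring around a coarse pixel,
    starting at the pixel itself."""
    yield (0, 0)
    for r in range(1, max_ring + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if max(abs(dy), abs(dx)) == r:
                    yield (dy, dx)

def downscale_pixel(coarse, fractions, iy, ix, n_classes, max_ring=5):
    """Solve for per-class endmember values at coarse pixel (iy, ix);
    fine pixels then receive endmembers[class_map] for this band."""
    rows, vals = [], []
    for dy, dx in spiral_offsets(max_ring):
        y, x = iy + dy, ix + dx
        if 0 <= y < coarse.shape[0] and 0 <= x < coarse.shape[1]:
            rows.append(fractions[y, x])
            vals.append(coarse[y, x])
            F = np.array(rows)
            if len(rows) >= n_classes and np.linalg.matrix_rank(F) == n_classes:
                endmembers, *_ = np.linalg.lstsq(F, np.array(vals), rcond=None)
                return endmembers
    raise ValueError("system still underdetermined within the search rings")
```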
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castello, Marco; DIBRIS, University of Genoa, Via Opera Pia 13, Genoa 16145; Diaspro, Alberto
2014-12-08
Time-gated detection, namely collecting only the fluorescence photons that arrive after a time delay from the excitation events, reduces the complexity, cost, and illumination intensity of a stimulated emission depletion (STED) microscope. In the gated continuous-wave (CW) STED implementation, the spatial resolution improves with increasing time delay, but the signal-to-noise ratio (SNR) decreases. Thus, in sub-optimal conditions, such as a low photon-budget regime, the SNR reduction can cancel out the expected gain in resolution. Here, we propose a method which does not discard photons, but instead collects all the photons in different time gates and recombines them through a multi-image deconvolution. Our results, obtained on simulated and experimental data, show that the SNR of the restored image improves relative to the gated image, thereby improving the effective resolution.
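A minimal sketch of a multi-image deconvolution that discards no photons: each time-gated image has its own effective PSF (later gates are sharper in gated CW-STED), and every Richardson-Lucy iteration averages the multiplicative updates from all gates. The RL variant and the update averaging are assumptions about the paper's method, not a reproduction of it.

```python
import numpy as np
from scipy.signal import fftconvolve

def multi_gate_rl(gated_images, psfs, iters=50):
    """Multi-image Richardson-Lucy deconvolution over time-gated images.
    gated_images: list of (H, W) arrays; psfs: matching list of PSFs."""
    est = np.full_like(gated_images[0], gated_images[0].mean(), dtype=float)
    for _ in range(iters):
        update = np.zeros_like(est)
        for img, psf in zip(gated_images, psfs):
            blurred = fftconvolve(est, psf, mode='same') + 1e-12
            # Back-project the ratio image with the flipped PSF
            update += fftconvolve(img / blurred, psf[::-1, ::-1], mode='same')
        est *= update / len(gated_images)   # average the per-gate updates
    return est
```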
SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.
Li, Dengwang; Yin, Yong
2012-06-01
In order to register 4DCT efficiently, we propose an improved deformable registration algorithm based on a multi-resolution demons strategy. 4DCT images of lung cancer patients were collected from a General Electric Discovery ST CT scanner at our cancer hospital. All images were sorted into groups and reconstructed according to their phases; each respiratory cycle was divided into 10 phases at intervals of 10%. In our improved demons algorithm, we use the gradients of both the reference and floating images as deformation forces and redistribute the forces according to their proportion. Furthermore, we introduce an intermediate variable into the cost function to decrease noise in the registration process. A Gaussian multi-resolution strategy and the BFGS optimization method are used to improve the speed and accuracy of the registration. To validate the performance of the algorithm, we registered the 10 phase images, comparing the differences between floating and reference images before and after registration at two landmarks chosen by an experienced clinician. We registered the 10 phase images of 4DCT from a lung cancer patient from our cancer hospital, choosing the exhalation images as the reference into which all other images were registered. The method shows good accuracy, demonstrated by a higher similarity measure for the registration of 4DCT, and it can register large deformations precisely. Finally, we obtain the tumor target from the deformation fields produced by the proposed method, more accurately than the internal margin (IM) expanded from the Gross Tumor Volume (GTV), and we achieve tumor and normal tissue tracking and dose accumulation using the 4DCT data. An efficient deformable registration algorithm using a multi-resolution demons strategy was thus proposed for 4DCT. © 2012 American Association of Physicists in Medicine.
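A minimal multi-resolution demons sketch with SimpleITK follows, as a stand-in for the authors' own implementation; their gradient redistribution, intermediate-variable cost term, and BFGS step are not reproduced. The displacement field estimated at each coarse level initializes the next finer level; shrink factors and iteration counts are illustrative.

```python
import SimpleITK as sitk

def multires_demons(fixed, moving, shrink_factors=(4, 2, 1), iters=(60, 40, 20)):
    """Coarse-to-fine demons registration; returns a transform mapping
    the moving phase image onto the fixed (reference) phase image."""
    field = None
    for shrink, n in zip(shrink_factors, iters):
        f = sitk.Shrink(fixed, [shrink] * fixed.GetDimension())
        m = sitk.Shrink(moving, [shrink] * moving.GetDimension())
        demons = sitk.DemonsRegistrationFilter()
        demons.SetNumberOfIterations(n)
        demons.SetStandardDeviations(1.0)   # Gaussian field regularization
        if field is None:
            field = demons.Execute(f, m)
        else:
            # Resample the coarse field onto the current grid; displacement
            # vectors live in physical space, so no rescaling is needed.
            init = sitk.Resample(field, f, sitk.Transform(), sitk.sitkLinear)
            field = demons.Execute(f, m, init)
    return sitk.DisplacementFieldTransform(field)
```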
NASA Astrophysics Data System (ADS)
Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo
2012-02-01
As more and more CT/MR studies scan larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for developing a 3D image display component, with not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. Users were satisfied with the rendering speed and the quality of the 3D reconstruction. The advantages of the component include low computer hardware requirements, easy integration, reliable performance, and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e. reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which do not hold for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
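As a baseline for the demosaicking problem described here, each of the nine bands can be reconstructed independently by interpolating its sparse samples back to the full FPA grid. The sketch below (bilinear with nearest-neighbour border fill) is only the naive reference point; the paper's two approaches, multi-band edge-guided interpolation and single-frame super-resolution, go beyond this. It assumes the band at mosaic offset (by, bx) repeats every 3 pixels.

```python
import numpy as np
from scipy.interpolate import griddata

def demosaic_9band(mosaic, p=3):
    """Naive per-band demosaicking of a p x p (here 3x3) spectral mosaic:
    interpolate each band's sparse samples back to the full FPA grid."""
    H, W = mosaic.shape
    yy, xx = np.mgrid[0:H, 0:W]
    bands = []
    for by in range(p):
        for bx in range(p):
            ys = yy[by::p, bx::p].ravel()
            xs = xx[by::p, bx::p].ravel()
            vals = mosaic[by::p, bx::p].ravel()
            lin = griddata((ys, xs), vals, (yy, xx), method="linear")
            nn = griddata((ys, xs), vals, (yy, xx), method="nearest")
            bands.append(np.where(np.isnan(lin), nn, lin))  # fill borders
    return np.stack(bands, axis=-1)        # (H, W, 9) spectral cube
```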
NASA Astrophysics Data System (ADS)
Coleman, L. W.
1985-01-01
Progress in laser fusion research has increased the need for detail and precision in the diagnosis of experiments. This has spawned the development and use of sophisticated sub-nanosecond resolution diagnostic systems. These systems typically use ultrafast X-ray or optical streak cameras in combination with spatially imaging or spectrally dispersing elements. These instruments provide high resolution data essential for understanding the processes occurring in the interaction of high intensity laser light with targets. Several of these types of instruments and their capabilities will be discussed. The utilization of these kinds of diagnostics systems on the nearly completed 100 kJ Nova laser facility will be described.
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate, thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structure and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating a resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research, with translational implications for congenital heart diseases.
Design of a normal incidence multilayer imaging X-ray microscope
NASA Astrophysics Data System (ADS)
Shealy, David L.; Gabardi, David R.; Hoover, Richard B.; Walker, Arthur B. C., Jr.; Lindblom, Joakim F.
Normal incidence multilayer Cassegrain X-ray telescopes were flown on the Stanford/MSFC Rocket X-ray Spectroheliograph. These instruments produced high spatial resolution images of the sun and conclusively demonstrated that doubly reflecting multilayer X-ray optical systems are feasible. The images indicated that aplanatic imaging soft X-ray/EUV microscopes should be achievable using multilayer optics technology. A doubly reflecting normal incidence multilayer imaging X-ray microscope based on the Schwarzschild configuration has been designed. The design of the microscope and the results of the optical system ray trace analysis are discussed. High resolution aplanatic imaging X-ray microscopes using normal incidence multilayer X-ray mirrors should have many important applications in advanced X-ray astronomical instrumentation, X-ray lithography, biological, biomedical, metallurgical, and laser fusion research.
HCP: A Flexible CNN Framework for Multi-label Image Classification.
Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng
2015-10-26
Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on the Pascal VOC 2007 and VOC 2012 multi-label image datasets demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-art methods. In particular, the mAP reaches 90.5% with HCP alone and 93.2% after fusion with our complementary result in [44] based on hand-crafted features, on the VOC 2012 dataset.
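The hypothesis-pooling step of HCP reduces to a max over per-hypothesis label scores. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def hcp_aggregate(hypothesis_scores):
    """Max-pool per-hypothesis CNN scores into one multi-label prediction.

    hypothesis_scores : array (n_hypotheses, n_labels) of per-label scores
    from the shared CNN, one row per object segment hypothesis.
    """
    return hypothesis_scores.max(axis=0)

# e.g. three hypotheses scored over four labels
scores = np.array([[0.9, 0.1, 0.2, 0.0],
                   [0.2, 0.8, 0.1, 0.1],
                   [0.1, 0.2, 0.1, 0.3]])
print(hcp_aggregate(scores))   # -> [0.9 0.8 0.2 0.3]
```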
Selkowitz, D.J.
2010-01-01
Shrub cover appears to be increasing across many areas of the Arctic tundra biome, and increasing shrub cover in the Arctic has the potential to significantly impact global carbon budgets and the global climate system. For most of the Arctic, however, there is no existing baseline inventory of shrub canopy cover, as existing maps of Arctic vegetation provide little information about the density of shrub cover at a moderate spatial resolution across the region. Remotely-sensed fractional shrub canopy maps can provide this necessary baseline inventory of shrub cover. In this study, we compare the accuracy of fractional shrub canopy (> 0.5 m tall) maps derived from multi-spectral, multi-angular, and multi-temporal datasets from Landsat imagery at 30 m spatial resolution, Moderate Resolution Imaging Spectroradiometer (MODIS) imagery at 250 m and 500 m spatial resolution, and Multi-angle Imaging SpectroRadiometer (MISR) imagery at 275 m spatial resolution for a 1067 km² study area in Arctic Alaska. The study area is centered at 69°N, ranges in elevation from 130 to 770 m, is composed primarily of rolling topography with gentle slopes less than 10°, and is free of glaciers and perennial snow cover. Shrubs > 0.5 m in height cover 2.9% of the study area and are primarily confined to patches associated with specific landscape features. Reference fractional shrub canopy is determined from in situ shrub canopy measurements and a high spatial resolution IKONOS image swath. Regression tree models are constructed to estimate fractional canopy cover at 250 m using different combinations of input data from Landsat, MODIS, and MISR. Results indicate that multi-spectral data provide substantially more accurate estimates of fractional shrub canopy cover than multi-angular or multi-temporal data. Higher spatial resolution datasets also provide more accurate estimates of fractional shrub canopy cover (aggregated to moderate spatial resolutions) than lower spatial resolution datasets, an expected result for a study area where most shrub cover is concentrated in narrow patches associated with rivers, drainages, and slopes. Including the middle infrared bands available from Landsat and MODIS in the regression tree models (in addition to the four standard visible and near-infrared spectral bands) typically results in a slight boost in accuracy. Including the multi-angular red band data available from MISR in the regression tree models, however, typically boosts accuracy more substantially, resulting in moderate resolution fractional shrub canopy estimates approaching the accuracy of estimates derived from the much higher spatial resolution Landsat sensor. Given the poor availability of snow- and cloud-free Landsat scenes in many areas of the Arctic and the promising results demonstrated here by the MISR sensor, MISR may be the best choice for large area fractional shrub canopy mapping in the Alaskan Arctic for the period 2000-2009.
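The regression-tree estimation step can be sketched generically with scikit-learn; the predictors and targets below are random placeholders standing in for the per-pixel band/angle features and the IKONOS-derived reference fractions (this is not the authors' model configuration):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Placeholder stand-ins for the real inputs: X would hold per-pixel
# predictors at 250 m (visible/NIR bands, mid-IR, MISR multi-angle red),
# y the IKONOS/in-situ reference fractional shrub canopy.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = np.clip(0.1 * X[:, 0] + rng.normal(0.0, 0.01, 500), 0.0, 1.0)

model = DecisionTreeRegressor(max_depth=8, min_samples_leaf=10)
model.fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```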
Super-resolution for imagery from integrated microgrid polarimeters.
Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M
2011-07-04
Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
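The sampling geometry that motivates the SR step is easy to state in code: each polarization channel is a 2x subsampling of the FPA, so each channel alone is aliased. A minimal sketch, assuming the common 0/45/90/135-degree 2x2 layout (the actual layout and the SR processing itself are not shown):

```python
import numpy as np

def split_microgrid(frame):
    """Subsample a 2x2 microgrid frame into its four polarization channels.

    Assumes a 0/45/90/135-degree layout; each channel comes out at half
    the FPA resolution, which is what the SR step must recover.
    """
    i0   = frame[0::2, 0::2]
    i45  = frame[0::2, 1::2]
    i90  = frame[1::2, 1::2]
    i135 = frame[1::2, 0::2]
    return i0, i45, i90, i135

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes images from the four channels (ideal polarizers)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2
```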
Minimally invasive multimode optical fiber microendoscope for deep brain fluorescence imaging
Ohayon, Shay; Caravaca-Aguirre, Antonio; Piestun, Rafael; DiCarlo, James J.
2018-01-01
A major open challenge in neuroscience is the ability to measure and perturb neural activity in vivo from well-defined neural sub-populations at cellular resolution anywhere in the brain. However, limitations posed by scattering and absorption prohibit non-invasive multi-photon approaches for deep (>2mm) structures, while gradient refractive index (GRIN) endoscopes are relatively thick and can cause significant damage upon insertion. Here, we present a novel micro-endoscope design to image neural activity at arbitrary depths via an ultra-thin multi-mode optical fiber (MMF) probe with a diameter 5–10X thinner than that of commercially available micro-endoscopes. We demonstrate micron-scale resolution, multi-spectral and volumetric imaging. In contrast to previous approaches, we show that this method has an improved acquisition speed that is sufficient to capture rapid neuronal dynamics in vivo in rodents expressing a genetically encoded calcium indicator (GCaMP). Our results emphasize the potential of this technology in neuroscience applications and open up possibilities for cellular resolution imaging in previously unreachable brain regions. PMID:29675297
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
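The Hankel-SVD idea can be illustrated in one dimension: build a Hankel matrix from a signal, take the dominant left singular subspace, and recover pole locations from its shift-invariance. This is a generic 1-D analogue (essentially an ESPRIT-style estimate), not the authors' 2-D balanced-realization algorithm; names and the model order are illustrative.

```python
import numpy as np

def hankel_poles(signal, order):
    """Estimate dominant poles of a 1-D signal from the SVD of its Hankel
    matrix, using the shift-invariance of the left singular subspace
    (an ESPRIT-style 1-D analogue of the balanced-realization step)."""
    n = len(signal)
    rows = n // 2
    H = np.lib.stride_tricks.sliding_window_view(signal, n - rows + 1)  # Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :order]                      # dominant observability subspace
    # solve Us[:-1] @ A ~= Us[1:] in the least-squares sense
    A, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    return np.linalg.eigvals(A)            # poles; angles give peak frequencies

# damped oscillation + slow decay: expect a conjugate pole pair near
# angle 0.4 rad plus one real pole
t = np.arange(64)
sig = np.exp(-0.05 * t) * np.cos(0.4 * t) + 0.5 * np.exp(-0.02 * t)
print(hankel_poles(sig, order=3))
```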
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the scene remains steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi-sensors) comprised of infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. In order to meet these requirements, sensors will require advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and the line-of-sight stabilization of targeting systems. This paper addresses: descriptions of the multi-sensor payloads tested, testing methods used and under development, and the different types of testing hardware and specific payload tests that are being developed and used at NAVSEA Crane.
Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S
2018-05-25
Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density, from sparse to ultrahigh density, with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less, through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is a computationally fast, multi-emitter fitting algorithm that can analyse a wide range of molecular densities. The algorithm is available at https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
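The matching primitive underlying template-based localization is a normalized cross-correlation map, whose peaks are candidate emitter positions. The sketch below implements plain NCC in Python; WTM's wedged templates and high-density handling are beyond this illustration, and the threshold value is arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def match_template_ncc(image, template, eps=1e-12):
    """Normalized cross-correlation map; peaks mark candidate emitters.
    Plain NCC only -- WTM's wedged templates are beyond this sketch."""
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    n = template.size
    ones = np.ones_like(template, dtype=float)
    local_sum = fftconvolve(image, ones, mode="same")       # window sums
    local_sq = fftconvolve(image ** 2, ones, mode="same")
    local_var = np.clip(local_sq - local_sum ** 2 / n, eps, None)
    corr = fftconvolve(image, t[::-1, ::-1], mode="same")   # correlation
    return corr / (np.sqrt(local_var) * t_norm + eps)

# candidate detections: local maxima of the NCC map above a threshold,
# e.g. np.argwhere(ncc > 0.7), refined afterwards by fitting
```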
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar is being applied more and more widely in remote sensing because of its all-time, all-weather operation, and feature extraction from high resolution SAR images has become a research topic of wide concern. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed according to the texture features of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram function method respectively, with direction information considered in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then, all features are fused with these weights. At last, the fused image is classified with the K-means classification method and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two groups of experiments based on the statistical texture method and the structural texture method respectively were carried out. On the basis of qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area, the detection rate is more than 90%, and in the relatively complex test area, the detection rate is also higher than that of the other two methods. The results in the study area show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery.
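The feature pipeline described above can be sketched with standard libraries: directional GLCM statistics per patch, Bhattacharyya-distance feature weights from class-conditional Gaussian fits, and a two-cluster K-means. The variogram features and the paper's exact weighting scheme are not reproduced; this is a hedged outline only, assuming patches are 2-D uint8 chips.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(patch, distances=(1,), angles=(0.0, np.pi / 2)):
    """Directional GLCM statistics for one 2-D uint8 SAR patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel()
                           for p in ("contrast", "homogeneity", "energy")])

def bhattacharyya_weight(f_pos, f_neg):
    """Weight one feature by the Bhattacharyya distance between Gaussian
    fits to its built-up and background samples (more separation, more weight)."""
    m1, v1 = f_pos.mean(), f_pos.var() + 1e-12
    m2, v2 = f_neg.mean(), f_neg.var() + 1e-12
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

# X: (n_patches, n_features) matrix of GLCM (and variogram) features;
# weights computed from labelled samples scale each column before clustering:
# labels = KMeans(n_clusters=2, n_init=10).fit_predict(X * weights)
```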
Maximizing the Biochemical Resolving Power of Fluorescence Microscopy
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms and describe precision and resolving power in biochemical imaging thanks to the analysis of the general properties of Fisher information in fluorescence detection. These strategies enable the optimal use of the information content available within the limited photon-budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821
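A concrete instance of the Fisher-information bookkeeping this paper formalizes: for photons from a mono-exponential decay binned into time gates, the information about the lifetime is F(tau) = N * sum_i (dp_i/dtau)^2 / p_i, and the Cramér-Rao bound gives the best achievable precision for a given photon budget. The numbers below are illustrative, and the gates are assumed to capture essentially all photons.

```python
import numpy as np

def lifetime_fisher_info(tau, bin_edges, n_photons=1.0):
    """Fisher information about the lifetime tau carried by photons binned
    into time gates, for a mono-exponential decay: F = N * sum (dp/dtau)^2 / p.
    Assumes the gates capture essentially all photons (sum p ~= 1)."""
    a, b = bin_edges[:-1], bin_edges[1:]
    p = np.exp(-a / tau) - np.exp(-b / tau)          # gate probabilities
    dp = (a * np.exp(-a / tau) - b * np.exp(-b / tau)) / tau**2
    return n_photons * np.sum(dp**2 / p)

# Cramer-Rao bound on lifetime precision for 1000 photons in 16 gates
edges = np.linspace(0.0, 25.0, 17)                   # 25 ns window, tau = 2.5 ns
F = lifetime_fisher_info(tau=2.5, bin_edges=edges, n_photons=1000)
print("sigma_tau >= %.4f ns" % (1.0 / np.sqrt(F)))
```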
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurden, G. A.; Coffey, S. K.
2012-10-15
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm² of P-47 phosphor and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over all state-of-the-art references.
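The ELM readout used on the fused features is simple enough to state directly: a fixed random hidden layer followed by a ridge-regression solve for the output weights. A minimal sketch (hidden size, regularization, and the concatenation fusion shown in the comment are assumptions, not the paper's exact settings):

```python
import numpy as np

def elm_train(features, labels, n_hidden=1024, reg=1e-3, seed=0):
    """Extreme learning machine: random hidden layer plus a ridge-regression
    readout, fitted on fused two-stream CNN features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(features.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(features @ W + b)                     # hidden activations
    Y = np.eye(labels.max() + 1)[labels]              # one-hot targets
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(features, W, b, beta):
    return np.argmax(np.tanh(features @ W + b) @ beta, axis=1)

# fusion by concatenation of the RGB-stream and saliency-stream features
# (one of the fusion strategies; the paper evaluates two):
# fused = np.concatenate([rgb_feats, sal_feats], axis=1)
```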
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited)
Delgado-Aparicio, L. F.; Maddox, J.; Pablant, N.; ...
2016-11-14
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, ΔZeff, and ne,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. As a result, this technique should also be explored as a burning plasma diagnostic in view of its simplicity and robustness.
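Under the simplest assumptions the temperature measurement reduces to a two-color ratio: if the continuum emissivity scales as exp(-E/Te) and the filters are idealized as narrow bands at effective energies E1 < E2, then Te = (E2 - E1) / ln(b1/b2). A sketch of that idealized estimate (real filter responses require forward modeling, and the names here are illustrative):

```python
import numpy as np

def te_from_ratio(b1, b2, e1_keV, e2_keV):
    """Electron temperature from the ratio of continuum emissivities in two
    energy bands, assuming bremsstrahlung eps(E) ~ exp(-E/Te) and idealized
    narrow-band responses; e1_keV < e2_keV are effective band energies."""
    ratio = np.asarray(b1, dtype=float) / np.asarray(b2, dtype=float)
    return (e2_keV - e1_keV) / np.log(ratio)     # Te in keV

# sanity check: Te = 1 keV plasma seen through 2 keV and 4 keV bands
print(te_from_ratio(b1=1.0, b2=np.exp(-2.0), e1_keV=2.0, e2_keV=4.0))  # -> 1.0
```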
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background is achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, is achieved by the Connected Components Analysis (CCA) technique. The track of each individual target is estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method; in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
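One plausible realization of the RGB-histogram association matrix is a Bhattacharyya-coefficient similarity between track and detection histograms, solved as an assignment problem; the centroid/spatial-gating cue the paper fuses in is omitted here, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def rgb_histogram(patch, bins=8):
    """Concatenated, normalized per-channel histogram of a target patch."""
    h = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / (h.sum() + 1e-12)

def associate(track_hists, det_hists):
    """Association matrix from Bhattacharyya coefficients, solved as an
    assignment problem (rows: existing tracks, columns: new detections)."""
    cost = np.zeros((len(track_hists), len(det_hists)))
    for i, ht in enumerate(track_hists):
        for j, hd in enumerate(det_hists):
            bc = np.sum(np.sqrt(ht * hd))        # similarity in [0, 1]
            cost[i, j] = 1.0 - bc                # turn into a cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost
```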
Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping
NASA Astrophysics Data System (ADS)
Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.
2016-12-01
Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied both to super-resolve the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases, and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane.
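The grating-based readout pairs each emitter's zeroth-order spot (position) with its first-order streak (spectrum); the streak's centroid offset along the dispersion axis maps approximately linearly to the emission-spectrum centroid. A schematic sketch, with placeholder calibration constants (a real system needs spectral calibration of the grating):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of a background-subtracted ROI."""
    yy, xx = np.indices(img.shape)
    s = img.sum()
    return (yy * img).sum() / s, (xx * img).sum() / s

def emitter_position_and_wavelength(roi0, roi1, disp_nm_per_px, lambda0_nm):
    """Pair a zeroth-order spot with its first-order streak: the streak's
    centroid offset along the dispersion (x) axis maps linearly to the
    spectral centroid. Calibration constants are illustrative placeholders."""
    y0, x0 = centroid(roi0)            # localization from the 0th order
    y1, x1 = centroid(roi1)            # spectral centroid from the 1st order
    wavelength = lambda0_nm + (x1 - x0) * disp_nm_per_px
    return (y0, x0), wavelength
```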
Investigation of Joint Visibility Between SAR and Optical Images of Urban Environments
NASA Astrophysics Data System (ADS)
Hughes, L. H.; Auer, S.; Schmitt, M.
2018-05-01
In this paper, we present a work-flow to investigate the joint visibility between very-high-resolution SAR and optical images of urban scenes. For this task, we extend the simulation framework SimGeoI to enable simulation of individual pixels rather than complete images. Using the extended SimGeoI simulator, we carry out a case study using a TerraSAR-X staring spotlight image and a Worldview-2 panchromatic image acquired over the city of Munich, Germany. The results of this study indicate that about 55 % of the scene is visible in both images, and is thus suitable for matching and data fusion endeavours, while about 25 % of the scene is affected by either radar shadow or optical occlusion. Taking the image acquisition parameters into account, our findings can support the definition of upper bounds for image fusion tasks, as well as help to improve acquisition planning with respect to different application goals.
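Given per-pixel visibility masks from such a simulation, the joint-visibility statistics quoted above are simple mask intersections; a sketch (mask names are illustrative):

```python
import numpy as np

def joint_visibility(sar_visible, opt_visible):
    """Fractions of the scene visible in both, either, or neither image,
    given per-pixel boolean visibility masks from a SimGeoI-style
    simulation (mask names are illustrative)."""
    return {
        "both": np.mean(sar_visible & opt_visible),
        "sar_only": np.mean(sar_visible & ~opt_visible),
        "opt_only": np.mean(~sar_visible & opt_visible),
        "neither": np.mean(~sar_visible & ~opt_visible),
    }
```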