Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network
Qu, Xiaobo; He, Yifan
2018-01-01
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods. PMID:29509666
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantic, which will largely degrade the classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, Medical images are decomposed into multiple scale layers, thus diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, the non-negative sparse coding model with fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation for a medical image. Finally, SVM classifier is combined to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap in a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Fusion of infrared polarization and intensity images based on improved toggle operator
NASA Astrophysics Data System (ADS)
Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua
2018-01-01
Integration of infrared polarization and intensity images has been a new topic in infrared image understanding and interpretation. The abundant infrared details and target from infrared image and the salient edge and shape information from polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on the improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of source images and heavily reduce redundancy among different scale. Firstly, the multi-scale image features of infrared polarization and intensity images are respectively extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced by using spatial scale. Thirdly, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by performing mean value weighted method on smoothed source images. Finally, the fusion image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective vision of the experimental results indicate that the proposed method obtains better performance in preserving the details and edge information as well as improving the image contrast.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filter has been widely used in feature extraction (e.g. SIFT, edge etc.), image segmentation, image enhancement, image noise removing, multi-scale shape description etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework of multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on parallel pipeline was designed to achieve high throughput rate. Secondly, in order to save some multiplier, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicate first in first out memory named as CAFIFO (Column Addressing FIFO) was designed to avoid the error propagating induced by spark on clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a 3 scales 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that, the proposed framework can computing a Multi-scales 2-D Gaussian filtering within one pixel clock period, is further suitable for real-time image processing. Moreover, the main principle can be popularized to the other operators based on convolution, such as Gabor filter, Sobel operator and so on.
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR) based enhancement for ariel images under low visibility. For traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general purpose enhanced method, and design an MSR with more than three scales. Based on the mathematical analysis and deductions, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment proves the accuracy of the proposed method with respect to both objective and subjective criteria.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, which include line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HR images with good performance.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
For the problems of missing details and performance of the colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement (CEMDC) is proposed based on this framework. The algorithm can achieve a natural colorized effect for a gray-scale image, and it is consistent with the human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy rate of the classification, the corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on Laplacian Pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of the visual gray-scale image, but also can be applied to the other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
NASA Astrophysics Data System (ADS)
Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei
2018-06-01
Sea-land segmentation is a key step for the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images to tackle the problem based on superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in view of superpixels rather than pixels, where similar pixels are clustered and the local similarity are explored. Moreover, the multi-scale features are elaborately designed, comprising of gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method can obtain more accurate and more robust sea-land segmentation results than the traditional algorithms.
Tone mapping infrared images using conditional filtering-based multi-scale retinex
NASA Astrophysics Data System (ADS)
Luo, Haibo; Xu, Lingyun; Hui, Bin; Chang, Zheng
2015-10-01
Tone mapping can be used to compress the dynamic range of the image data such that it can be fitted within the range of the reproduction media and human vision. The original infrared images that captured with infrared focal plane arrays (IFPA) are high dynamic images, so tone mapping infrared images is an important component in the infrared imaging systems, and it has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. Firstly, a Conditional Gaussian Filter (CGF) was designed to suppress "halo" effect. Secondly, original infrared image is decomposed into a set of images that represent the mean of the image at different spatial resolutions by applying CGF of different scale. And then, a set of images that represent the multi-scale details of original image is produced by dividing the original image pointwise by the decomposed image. Thirdly, the final detail image is reconstructed by weighted sum of the multi-scale detail images together. Finally, histogram scaling and clipping is adopted to remove outliers and scale the detail image, 0.1% of the pixels are clipped at both extremities of the histogram. Experimental results show that the proposed algorithm efficiently increases the local contrast while preventing "halo" effect and provides a good rendition of visual effect.
Multi-scale learning based segmentation of glands in digital colonrectal pathology images.
Gao, Yi; Liu, William; Arjun, Shipra; Zhu, Liangjia; Ratner, Vadim; Kurc, Tahsin; Saltz, Joel; Tannenbaum, Allen
2016-02-01
Digital histopathological images provide detailed spatial information of the tissue at micrometer resolution. Among the available contents in the pathology images, meso-scale information, such as the gland morphology, texture, and distribution, are useful diagnostic features. In this work, focusing on the colon-rectal cancer tissue samples, we propose a multi-scale learning based segmentation scheme for the glands in the colon-rectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images in various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform the classification and segmentation for the new image.
Multi-scale learning based segmentation of glands in digital colonrectal pathology images
NASA Astrophysics Data System (ADS)
Gao, Yi; Liu, William; Arjun, Shipra; Zhu, Liangjia; Ratner, Vadim; Kurc, Tahsin; Saltz, Joel; Tannenbaum, Allen
2016-03-01
Digital histopathological images provide detailed spatial information of the tissue at micrometer resolution. Among the available contents in the pathology images, meso-scale information, such as the gland morphology, texture, and distribution, are useful diagnostic features. In this work, focusing on the colon-rectal cancer tissue samples, we propose a multi-scale learning based segmentation scheme for the glands in the colon-rectal digital pathology slides. The algorithm learns the gland and non-gland textures from a set of training images in various scales through a sparse dictionary representation. After the learning step, the dictionaries are used collectively to perform the classification and segmentation for the new image.
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral image contains abundant spectral information, which is widely used in all fields like resource exploration, meteorological observation and modern military. Image preprocessing, such as image feature extraction and matching, is indispensable while dealing with multi-spectral remote sensing image. Although the feature matching algorithm based on linear scale such as SIFT and SURF performs strong on robustness, the local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, which is based on nonlinear scale, to raise the number of feature and to enhance the matching rate by using the adjusted-cosine vector. The experiment result shows that the number of feature and the matching rate of the improved KAZE are remarkably than the original KAZE algorithm.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Buildings change detection has the ability to quantify the temporal effect, on urban area, for urban evolution study or damage assessment in disaster cases. In this context, changes analysis might involve the utilization of the available satellite images with different resolutions for quick responses. In this paper, to avoid using traditional method with image resampling outcomes and salt-pepper effect, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since the object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical for detecting buildings changes in multi-scale imagery using shape analysis. Therefore, the proposed methodology can deal with different pixel size for identifying new and demolished buildings in urban area using geometric properties of objects of interest. After rectifying the desired multi-dates and multi-resolutions images, by image to image registration with optimal RMS value, objects based image classification is performed to extract buildings shape from the images. Next, Centroid-Coincident Matching is conducted, on the extracted building shapes, based on the Euclidean distance measurement between shapes centroid (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. Then, New and Demolished buildings are identified based on the obtained distances those are greater than RMS value (No match in the same location).
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which is to reduce the time phase difference of image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant promotion gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMED-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-02-07
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which is to reduce the time phase difference of image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L ₀ gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant promotion gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMED-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
Multi-sensor image registration based on algebraic projective invariants.
Li, Bin; Wang, Wei; Ye, Hao
2013-04-22
A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms have to adopt the appropriate scaling parameter to calculate the similarity matrix between the pixels, which may have a great impact on the clustering result. Moreover, when the number of data instance is large, computational complexity and memory use of the algorithm will greatly increase. To solve these two problems, we proposed a new spectral clustering image segmentation algorithm based on multi scales and sparse matrix. We devised a new feature extraction method at first, then extracted the features of image on different scales, at last, using the feature information to construct sparse similarity matrix which can improve the operation efficiency. Compared with traditional spectral clustering algorithm, image segmentation experimental results show our algorithm have better degree of accuracy and robustness.
a Region-Based Multi-Scale Approach for Object-Based Image Analysis
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.
2016-06-01
Within the last two decades, object-based image analysis (OBIA) considering objects (i.e. groups of pixels) instead of pixels has gained popularity and attracted increasing interest. The most important stage of the OBIA is image segmentation that groups spectrally similar adjacent pixels considering not only the spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, scale parameter stands out the most important parameter in segmentation process. Estimating optimal scale parameter is crucially important to increase the classification accuracy that depends on image resolution, image object size and characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using pan-sharped Qickbird-2 image. The first strategy estimates optimal scale parameters for the eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for whole image. The nearest neighbour classifier was applied in all segmentation experiments and equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images and found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached to 10% in terms of overall accuracy.
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis method has many advantages over pixel-based methods, so it is one of the current research hotspots. It is very important to get the image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and the object boundaries obtained are accurate. However, the macro statistical characteristics of the image areas are difficult to be taken into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features worth more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visually saliency graph constraints. Visual saliency theory and the typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi- scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different can be more likely assigned to one same object. In addition, due to the constraint of visual saliency model, the constraint ability over local-macroscopic characteristics can be well controlled during the segmentation process based on different objects. These controls will improve the completeness of visually saliency areas in the segmentation results while diluting the controlling effect for non- saliency background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and can enable us to give priority control to the saliency objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validation. All applications showed good results.
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability of imaging the temperature distribution of body, infrared imaging is promising in diagnostication and prognostication of diseases. However the poor quality of the raw original infrared images prevented applications and one of the essential problems is the low contrast appearance of the imagined object. In this paper, the image enhancement technique based on the Retinex theory is studied, which is a process that automatically retrieve the visual realism to images. The algorithms, including Frackle-McCann algorithm, McCann99 algorithm, single-scale Retinex algorithm, multi-scale Retinex algorithm and multi-scale Retinex algorithm with color restoration, are experienced to the enhancement of infrared images. The entropy measurements along with the visual inspection were compared and results shown the algorithms based on Retinex theory have the ability in enhancing the infrared image. Out of the algorithms compared, MSRCR demonstrated the best performance.
Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel
2017-03-01
Imaging dynamics in living organisms is essential for the understanding of biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales ranging from organelles to entire organs of small animals. Yet, efficient visualization of multi-scale dynamics remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally-enriched high-resolution optical absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. The multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of the optoacoustic technology that adds to the multiple advantages of this technology for structural, functional and molecular imaging.
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, a salient object detection based on deep learning networks, which are used to extract the features, has made a great breakthrough compared to the traditional methods. At present, the salient object detection mainly relies on very deep convolutional network, which is used to extract the features. In deep learning networks, an dramatic increase of network depth may cause more training errors instead. In this paper, we use the residual network to increase network depth and to mitigate the errors caused by depth increase simultaneously. Inspired by image simplification, we use color and texture features to obtain simplified image with multiple scales by means of region assimilation on the basis of super-pixels in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine the feature on pixel level by the multi-scale feature correction method to avoid the feature error when the image is simplified at the above-mentioned region level. The final full connection layer not only integrates features of multi-scale and multi-level but also works as classifier of salient targets. The experimental results show that proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vison-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida
2016-09-01
Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recently the automatic image enhancement methods not only take account of the globe semantics, like correct color hue and brightness imbalances, but also the local content of the image, such as human face and sky of landscape. In this paper we describe a new scheme for automatic image enhancement that considers both global semantics and local content of image. Our automatic image enhancement method employs the multi-scale edge-aware image decomposition approach to detect the underexposure regions and enhance the detail of the salient content. The experiment results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
Uniform competency-based local feature extraction for remote sensing images
NASA Astrophysics Data System (ADS)
Sedaghat, Amin; Mohammadi, Nazila
2018-01-01
Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from href="https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output ofmore » each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.« less
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
13. SUPPLEMENTARY NOTES 14. ABSTRACT This research developed a multiresolution image fusion scheme based on guided filtering . Guided filtering can...effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale...details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essentially important in many fields, including microsample observation, defect inspection, material characterization and so on. However, nowadays, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for the microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.
Plant trait detection with multi-scale spectrometry
NASA Astrophysics Data System (ADS)
Gamon, J. A.; Wang, R.
2017-12-01
Proximal and remote sensing using imaging spectrometry offers new opportunities for detecting plant traits, with benefits for phenotyping, productivity estimation, stress detection, and biodiversity studies. Using proximal and airborne spectrometry, we evaluated variation in plant optical properties at various spatial and spectral scales with the goal of identifying optimal scales for distinguishing plant traits related to photosynthetic function. Using directed approaches based on physiological vegetation indices, and statistical approaches based on spectral information content, we explored alternate ways of distinguishing plant traits with imaging spectrometry. With both leaf traits and canopy structure contributing to the signals, results exhibit a strong scale dependence. Our results demonstrate the benefits of multi-scale experimental approaches within a clear conceptual framework when applying remote sensing methods to plant trait detection for phenotyping, productivity, and biodiversity studies.
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is one key problem in digital image processing field. With development of multi-spectrum imaging sensor, it becomes a trend to improve the performance of dim target detection by fusing the information from different spectral images. In this paper, one dim target detection method based on salient graph fusion was proposed. In the method, Gabor filter with multi-direction and contrast filter with multi-scale were combined to construct salient graph from digital image. And then, the maximum salience fusion strategy was designed to fuse the salient graph from different spectral images. Top-hat filter was used to detect dim target from the fusion salient graph. Experimental results show that proposal method improved the probability of target detection and reduced the probability of false alarm on clutter background images.
NASA Astrophysics Data System (ADS)
Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.
2017-03-01
Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images, obtained by high-resolution satellites, presently give a lot of information on these indexes, based on the data of several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local scale to global scale by applying remote sensing imagery. In this work, four band images have been considered, which are involved in these vegetation indexes, and were taken by satellites Ikonos-2 and Landsat-7 of the same geographic location, to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). In order to do so, a multi-fractal analysis of these multi-spectral images was applied in each of these bands and the two indexes derived. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in blue and green bands than in red and near-infrared bands. The NDVI showed a higher sensitivity to the radiometric resolution than EVI. Both were equally affected by the spatial resolution. From both factors, the spatial resolution has a major impact in the multi-fractal spectrum for all the bands and the vegetation indexes. This information should be taken in to account when vegetation indexes based on different satellite sensors are obtained.
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.
Bai, Xiangzhi
2015-07-15
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improve the segmentation over the small-scale model under the MLF framework, and (5) indicate that the MLF framework has comparable performance as state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
Development of multi-dimensional body image scale for malaysian female adolescents
Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin
2008-01-01
The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. Besides, a multidimensional body image composite score was proposed to screen negative body image risk in female adolescents. The result found body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept for body image and provides a new insight into its multi-dimensionality in Malaysian female adolescents with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used to identify female adolescents who are potentially at risk of developing body image disturbance through future intervention programs. PMID:20126371
Development of multi-dimensional body image scale for malaysian female adolescents.
Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin
2008-01-01
The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. Besides, a multidimensional body image composite score was proposed to screen negative body image risk in female adolescents. The result found body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept for body image and provides a new insight into its multi-dimensionality in Malaysian female adolescents with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used to identify female adolescents who are potentially at risk of developing body image disturbance through future intervention programs.
NASA Astrophysics Data System (ADS)
Lee, Jongpil; Nam, Juhan
2017-08-01
Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstractions. Considering this issue, we propose a convolutional neural networks (CNN)-based architecture that embraces multi-level and multi-scaled features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pre-trained convolutional networks separately and aggregate them altogether given a long audio clip. Finally, we put them into fully-connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging and the proposed method outperforms previous state-of-the-arts on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.
Feature-based Alignment of Volumetric Multi-modal Images
Toews, Matthew; Zöllei, Lilla; Wells, William M.
2014-01-01
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm, that iteratively alternates between estimating a feature-based model from feature data, then realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
NASA Astrophysics Data System (ADS)
Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo
2017-10-01
This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-steps algorithm. The first step is devoted to characterize each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to the specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high temperature polymer composite. The multi-scale approach investigates coupled transient diffusion-reaction and static structural at macro- to micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object oriented finite element (OOF). The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model by the shrinkage profiles acquired using scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10-5, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite element and cohesive surfaces by considering the modulus spatial evolution. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions of the respective oxidized region. Microcrack simulation results: length, meander, and opening are closely matched to the crack in the area of interest for the scanning electron images.
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport is also to be extracted from the analysis.
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process firstly finds in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
MREG V1.1 : a multi-scale image registration algorithm for SAR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichel, Paul H.
2013-08-01
MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing&Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962more » leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.« less
Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart
Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin
2015-01-01
Background Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of HF patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546
Local variance for multi-scale analysis in geomorphometry.
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-07-15
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements.
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-11-26
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
Li, Xiaomeng; Dou, Qi; Chen, Hao; Fu, Chi-Wing; Qi, Xiaojuan; Belavý, Daniel L; Armbrecht, Gabriele; Felsenberg, Dieter; Zheng, Guoyan; Heng, Pheng-Ann
2018-04-01
Intervertebral discs (IVDs) are small joints that lie between adjacent vertebrae. The localization and segmentation of IVDs are important for spine disease diagnosis and measurement quantification. However, manual annotation is time-consuming and error-prone with limited reproducibility, particularly for volumetric data. In this work, our goal is to develop an automatic and accurate method based on fully convolutional networks (FCN) for the localization and segmentation of IVDs from multi-modality 3D MR data. Compared with single modality data, multi-modality MR images provide complementary contextual information, which contributes to better recognition performance. However, how to effectively integrate such multi-modality information to generate accurate segmentation results remains to be further explored. In this paper, we present a novel multi-scale and modality dropout learning framework to locate and segment IVDs from four-modality MR images. First, we design a 3D multi-scale context fully convolutional network, which processes the input data in multiple scales of context and then merges the high-level features to enhance the representation capability of the network for handling the scale variation of anatomical structures. Second, to harness the complementary information from different modalities, we present a random modality voxel dropout strategy which alleviates the co-adaption issue and increases the discriminative capability of the network. Our method achieved the 1st place in the MICCAI challenge on automatic localization and segmentation of IVDs from multi-modality MR images, with a mean segmentation Dice coefficient of 91.2% and a mean localization error of 0.62 mm. We further conduct extensive experiments on the extended dataset to validate our method. We demonstrate that the proposed modality dropout strategy with multi-modality images as contextual information improved the segmentation accuracy significantly. Furthermore, experiments conducted on extended data collected from two different time points demonstrate the efficacy of our method on tracking the morphological changes in a longitudinal study. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
In view of the traditional change detection algorithm mainly depends on the spectral information image spot, failed to effectively mining and fusion of multi-image feature detection advantage, the article borrows the ideas of object oriented analysis proposed a multi feature fusion of remote sensing image change detection algorithm. First by the multi-scale segmentation of image objects based; then calculate the various objects of color histogram and linear gradient histogram; utilizes the color distance and edge line feature distance between EMD statistical operator in different periods of the object, using the adaptive weighted method, the color feature distance and edge in a straight line distance of combination is constructed object heterogeneity. Finally, the curvature histogram analysis image spot change detection results. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of the change detection.
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular subject in the research areas of the medical image and has an important clinical value. In this paper we put forward an improved algorithm of Demons, together with the conservation of gray model and local structure tensor conservation model, to construct a new energy function processing multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve complex three-dimensional data optimization problem. And finally we used the multi-scale hierarchical refinement ideas to solve large deformation registration. The experimental results showed that the proposed algorithm for large de formation and multi-modal three-dimensional medical image registration had good effects.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images
NASA Astrophysics Data System (ADS)
Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow
2018-04-01
Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions would have a large portion of dark regions that could heavily degrade the image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by a entropy based weighting with the input and the processed results to refine the necessary amplification at regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images have shown satisfactory performances with regard to the improvement in information contents and the mitigation of viewing artifacts.
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method,1 due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT6 and Harris-affine features7 were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear,2 image deformation10 and image derivative4 based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters(for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6mm vs. 8mm after the VC camera traveled 110mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris affine features resulted in tracking errors of up to 70mm, while our sparse optical flow error was 6mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation
NASA Astrophysics Data System (ADS)
Sakamoto, M.; Honda, Y.; Kondo, A.
2016-06-01
From the last decade, the multi-scale image segmentation is getting a particular interest and practically being used for object-based image analysis. In this study, we have addressed the issues on multi-scale image segmentation, especially, in improving the performances for validity of merging and variety of derived region's shape. Firstly, we have introduced constraints on the application of spectral criterion which could suppress excessive merging between dissimilar regions. Secondly, we have extended the evaluation for smoothness criterion by modifying the definition on the extent of the object, which was brought for controlling the shape's diversity. Thirdly, we have developed new shape criterion called aspect ratio. This criterion helps to improve the reproducibility on the shape of object to be matched to the actual objectives of interest. This criterion provides constraint on the aspect ratio in the bounding box of object by keeping properties controlled with conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This approach is achieved by comparing segmentation result with training area specified in advance by considering the maximization of the average area in derived objects or satisfying the evaluation index called F-measure. Thus, it has been possible to automate the parameterization that suited the objectives especially in the view point of shape's reproducibility.
NASA Astrophysics Data System (ADS)
Arshadi, Amir
Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, this study introduces an approach for consideration of particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses which seriously affects the pavement performance is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, the binder fatigue behavior is assumed to be one of the main factors influencing the overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in the asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of the asphalt mixtures at all scales. In the present study the Viscoelastic continuum damage model is implemented into the well-known finite element software ABAQUS via the user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under the repeated uniaxial sinusoidal loading. The inputs are based on the experimentally derived measurements for the binder properties. For the scales of mastic and mortar, the artificially 2-Dimensional images of mastic and mortar scales were generated and used to characterize the properties of those scales. Finally, the 2D scanned images of asphalt mixtures are used to study the asphalt mixture fatigue behavior under loading. In order to validate the proposed model, the experimental test results and the simulation results were compared. Indirect tensile fatigue tests were conducted on asphalt mixture samples. A comparison between experimental results and the results from simulation shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on mechanical behavior of asphalt concrete under loading.
Cloud Detection by Fusing Multi-Scale Convolutional Features
NASA Astrophysics Data System (ADS)
Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang
2018-04-01
Clouds detection is an important pre-processing step for accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially for those that contain only visible and near infrared bands, in this paper, we proposed a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments cloud by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection, and was tested in more than ten types of optical images with different resolution. Experiment results show that MSCN has obvious advantages over the traditional multi-feature combined cloud detection method in accuracy, especially when in snow and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the compared deep cloud detection convolution network. The effectiveness of MSCN make it promising for practical application in multiple kinds of optical imagery.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT) and the empirical mode decomposition (EMD), and; the newly introduced; variational mode decomposition mode (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier spectrum based fractal descriptors is estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal and abnormal brain magnetic resonance images (MRI) affected with Alzheimer disease (AD). Our results demonstrate that fractal descriptors estimated in VMD domain outperform those estimated in DWT and EMD domains; and also those directly estimated from the original image.
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for applications security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information theory based, image feature based and structural similarity-based have been developed to accomplish comparisons. However, it is hard to scale an accurate assessment of visual quality which requires the validation of these metrics for different types of applications. In order to do this, human perception based validation methods have been developed, particularly dealing with the use of receiver operating characteristics (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we proposed a multi-scale convolutional neural network for hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger respective fields, to extract spectral features of hyperspectral image. We design a deep neural network with a multi-scale convolution layer which contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of deep neural network, dropout is utilized, which randomly sleeps neurons, contributing to improve the classification accuracy a bit. In addition, new skills like ReLU in deep learning is utilized in this paper. We conduct experiments on University of Pavia and Salinas datasets, and obtained better classification accuracy compared with other methods.
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of multi-source image's resolution in satellite visible light, multi-spectral and hyper spectral , the high resolution remote sensing image has been widely used in various fields, for example military field, surveying and mapping, geophysical prospecting, environment and so forth. In remote sensing image, the segmentation of ground targets, feature extraction and the technology of automatic recognition are the hotspot and difficulty in the research of modern information technology. This paper also presents an object-oriented remote sensing image scene classification method. The method is consist of vehicles typical objects classification generation, nonparametric density estimation theory, mean shift segmentation theory, multi-scale corner detection algorithm, local shape matching algorithm based on template. Remote sensing vehicles image classification software system is designed and implemented to meet the requirements .
NASA Astrophysics Data System (ADS)
Ren, B.; Wen, Q.; Zhou, H.; Guan, F.; Li, L.; Yu, H.; Wang, Z.
2018-04-01
The purpose of this paper is to provide decision support for the adjustment and optimization of crop planting structure in Jingxian County. The object-oriented information extraction method is used to extract corn and cotton from Jingxian County of Hengshui City in Hebei Province, based on multi-period GF-1 16-meter images. The best time of data extraction was screened by analyzing the spectral characteristics of corn and cotton at different growth stages based on multi-period GF-116-meter images, phenological data, and field survey data. The results showed that the total classification accuracy of corn and cotton was up to 95.7 %, the producer accuracy was 96 % and 94 % respectively, and the user precision was 95.05 % and 95.9 % respectively, which satisfied the demand of crop monitoring application. Therefore, combined with multi-period high-resolution images and object-oriented classification can be a good extraction of large-scale distribution of crop information for crop monitoring to provide convenient and effective technical means.
Multi-scale Observation of Biological Interactions of Nanocarriers: from Nano to Macro
Jin, Su-Eon; Bae, Jin Woo; Hong, Seungpyo
2010-01-01
Microscopic observations have played a key role in recent advancements in nanotechnology-based biomedical sciences. In particular, multi-scale observation is necessary to fully understand the nano-bio interfaces where a large amount of unprecedented phenomena have been reported. This review describes how to address the physicochemical and biological interactions of nanocarriers within the biological environments using microscopic tools. The imaging techniques are categorized based on the size scale of detection. For observation of the nano-scale biological interactions of nanocarriers, we discuss atomic force microscopy (AFM), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). For the micro to macro-scale (in vitro and in vivo) observation, we focus on confocal laser scanning microscopy (CLSM) as well as in vivo imaging systems such as magnetic resonance imaging (MRI), superconducting quantum interference devices (SQUIDs), and IVIS®. Additionally, recently developed combined techniques such as AFM-CLSM, correlative Light and Electron Microscopy (CLEM), and SEM-spectroscopy are also discussed. In this review, we describe how each technique helps elucidate certain physicochemical and biological activities of nanocarriers such as dendrimers, polymers, liposomes, and polymeric/inorganic nanoparticles, thus providing a toolbox for bioengineers, pharmaceutical scientists, biologists, and research clinicians. PMID:20232368
NASA Astrophysics Data System (ADS)
Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.
2016-12-01
Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data is used to develop a framework to upscale and benchmark pore-scale models.
Change Detection of Remote Sensing Images by Dt-Cwt and Mrf
NASA Astrophysics Data System (ADS)
Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.
2017-05-01
Aiming at the significant loss of high frequency information during reducing noise and the pixel independence in change detection of multi-scale remote sensing image, an unsupervised algorithm is proposed based on the combination between Dual-tree Complex Wavelet Transform (DT-CWT) and Markov random Field (MRF) model. This method first performs multi-scale decomposition for the difference image by the DT-CWT and extracts the change characteristics in high-frequency regions by using a MRF-based segmentation algorithm. Then our method estimates the final maximum a posterior (MAP) according to the segmentation algorithm of iterative condition model (ICM) based on fuzzy c-means(FCM) after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the above segmentation results of each layer by using the fusion rule proposed to obtain the mask of the final change detection result. The results of experiment prove that the method proposed is of a higher precision and of predominant robustness properties.
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In the research of optical synthetic aperture imaging system, phase congruency is the main problem and it is necessary to detect sub-aperture phase. The edge of the sub-aperture system is more complex than that in the traditional optical imaging system. And with the existence of steep slope for large-aperture optical component, interference fringe may be quite dense when interference imaging. Deep phase gradient may cause a loss of phase information. Therefore, it's urgent to search for an efficient edge detection method. Wavelet analysis as a powerful tool is widely used in the fields of image processing. Based on its properties of multi-scale transform, edge region is detected with high precision in small scale. Longing with the increase of scale, noise is reduced in contrary. So it has a certain suppression effect on noise. Otherwise, adaptive threshold method which sets different thresholds in various regions can detect edge points from noise. Firstly, fringe pattern is obtained and cubic b-spline wavelet is adopted as the smoothing function. After the multi-scale wavelet decomposition of the whole image, we figure out the local modulus maxima in gradient directions. However, it also contains noise, and thus adaptive threshold method is used to select the modulus maxima. The point which greater than threshold value is boundary point. Finally, we use corrosion and expansion deal with the resulting image to get the consecutive boundary of image.
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
A scale space feature based registration technique for fusion of satellite imagery
NASA Technical Reports Server (NTRS)
Raghavan, Srini; Cromp, Robert F.; Campbell, William C.
1997-01-01
Feature based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery) since features are often more reliable than intensity or radiometric values. The only situation where a feature based approach will fail is when the scene is completely homogenous or densely textural in which case a combination of feature and intensity based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature based registration technique, a modified version of feature based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity in parameter selection experienced in the earlier version as explained later.
Monitoring forest dynamics with multi-scale and time series imagery.
Huang, Chunbo; Zhou, Zhixiang; Wang, Di; Dian, Yuanyong
2016-05-01
To learn the forest dynamics and evaluate the ecosystem services of forest effectively, a timely acquisition of spatial and quantitative information of forestland is very necessary. Here, a new method was proposed for mapping forest cover changes by combining multi-scale satellite remote-sensing imagery with time series data. Using time series Normalized Difference Vegetation Index products derived from the Moderate Resolution Imaging Spectroradiometer images (MODIS-NDVI) and Landsat Thematic Mapper/Enhanced Thematic Mapper Plus (TM/ETM+) images as data source, a hierarchy stepwise analysis from coarse scale to fine scale was developed for detecting the forest change area. At the coarse scale, MODIS-NDVI data with 1-km resolution were used to detect the changes in land cover types and a land cover change map was constructed using NDVI values at vegetation growing seasons. At the fine scale, based on the results at the coarse scale, Landsat TM/ETM+ data with 30-m resolution were used to precisely detect the forest change location and forest change trend by analyzing time series forest vegetation indices (IFZ). The method was tested using the data for Hubei Province, China. The MODIS-NDVI data from 2001 to 2012 were used to detect the land cover changes, and the overall accuracy was 94.02 % at the coarse scale. At the fine scale, the available TM/ETM+ images at vegetation growing seasons between 2001 and 2012 were used to locate and verify forest changes in the Three Gorges Reservoir Area, and the overall accuracy was 94.53 %. The accuracy of the two layer hierarchical monitoring results indicated that the multi-scale monitoring method is feasible and reliable.
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images, after registration a label is assigned to each target point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process it’s critical the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance state-of-the-art by adding three new label fusion contributions: First, each image patch now characterized by a multi-scale feature representation that encodes both local and semi-local image information. Doing so will increase the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchically approach is used to improve the label fusion results. In particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in LONI LBPA40 dataset, mid-brain regions in SATA dataset from MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
Multi-Scale Measures of Rugosity, Slope and Aspect from Benthic Stereo Image Reconstructions
Friedman, Ariell; Pizarro, Oscar; Williams, Stefan B.; Johnson-Roberson, Matthew
2012-01-01
This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and provides much less environmental impact compared to traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced and, being a technique based on images, it is possible to use robotic platforms that can operate beyond diver depths. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey that covers over . Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies. This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements. PMID:23251370
Ball-scale based hierarchical multi-object recognition in 3D medical images
NASA Astrophysics Data System (ADS)
Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian
2010-03-01
This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought as a hierarchical framework such that quick replacement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB) for visible-infrared images. Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which is relatively more suitable for non-linear signals decomposition and fusion. NSDFB can provide direction filtering on the decomposition levels to capture more geometrical structure of the source images effectively. In our fusion framework, the entropies of the two patterns of source images are firstly calculated and the residue of the image whose entropy is larger is extracted to make it highly relevant with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands in different scales by using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in low-frequency sub-bands and high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm can obtain state-of-the-art performance for visible-infrared images fusion in both aspects of objective assessment and subjective visual quality even for the source images obtained in different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich details information that are more suitable for human visual characteristics or machine perception.
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between of an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between of an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters. PMID:22454569
A multi-scale convolutional neural network for phenotyping high-content cellular images.
Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian
2017-07-01
Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in same time period, and processed by same algorithms, models or methods. These differences can be mainly quantitatively described from three aspects, i.e. multiple remote sensing observations, crop parameters estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural application with multiple remotely sensed observations from different sources. The new method was constructed on the basis of physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were involved to extract statistical characteristics of multiple surface reflectance datasets, and further quantitatively analyse spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at small spatial scale as the baseline data, theories of Gaussian distribution were selected for multiple surface reflectance datasets correction based on the above obtained physical characteristics and mathematical distribution properties, and their spatial variations. This proposed method was verified by two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China with different degrees of homogeneity of underlying surfaces. Experimental results indicate that differences of surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provide database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis evaluation.
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in same time period, and processed by same algorithms, models or methods. These differences can be mainly quantitatively described from three aspects, i.e. multiple remote sensing observations, crop parameters estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural application with multiple remotely sensed observations from different sources. The new method was constructed on the basis of physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were involved to extract statistical characteristics of multiple surface reflectance datasets, and further quantitatively analyse spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at small spatial scale as the baseline data, theories of Gaussian distribution were selected for multiple surface reflectance datasets correction based on the above obtained physical characteristics and mathematical distribution properties, and their spatial variations. This proposed method was verified by two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China with different degrees of homogeneity of underlying surfaces. Experimental results indicate that differences of surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provide database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis evaluation. PMID:25405760
Vessel Segmentation in Retinal Images Using Multi-scale Line Operator and K-Means Clustering.
Saffarzadeh, Vahid Mohammadi; Osareh, Alireza; Shadgar, Bita
2014-04-01
Detecting blood vessels is a vital task in retinal image analysis. The task is more challenging with the presence of bright and dark lesions in retinal images. Here, a method is proposed to detect vessels in both normal and abnormal retinal fundus images based on their linear features. First, the negative impact of bright lesions is reduced by using K-means segmentation in a perceptive space. Then, a multi-scale line operator is utilized to detect vessels while ignoring some of the dark lesions, which have intensity structures different from the line-shaped vessels in the retina. The proposed algorithm is tested on two publicly available STARE and DRIVE databases. The performance of the method is measured by calculating the area under the receiver operating characteristic curve and the segmentation accuracy. The proposed method achieves 0.9483 and 0.9387 localization accuracy against STARE and DRIVE respectively.
Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations
Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.
2017-01-01
Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm image exerts a profound influence in the analysis of sperm morphology, which plays a significant role in the research of animals' infertility and reproduction. To overcome the microscope image's properties of low contrast and highly polluted noise, and to get better segmentation results of sperm image, this paper presents a multi-scale gradient operator combined with a multi-structuring element for the micro-spermatozoa image of white rat, as the multi-scale gradient operator can smooth the noise of an image, while the multi-structuring element can retain more shape details of the sperms. Then, we use the Otsu method to segment the modified gradient image whose gray scale processed is strong in sperms and weak in the background, converting it into a binary sperm image. As the obtained binary image owns impurities that are not similar with sperms in the shape, we choose a form factor to filter those objects whose form factor value is larger than the select critical value, and retain those objects whose not. And then, we can get the final binary image of the segmented sperms. The experiment shows this method's great advantage in the segmentation of the micro-spermatozoa image.
Rapid multi-modality preregistration based on SIFT descriptor.
Chen, Jian; Tian, Jie
2006-01-01
This paper describes the scale invariant feature transform (SIFT) method for rapid preregistration of medical image. This technique originates from Lowe's method wherein preregistration is achieved by matching the corresponding keypoints between two images. The computational complexity has been reduced when we applied SIFT preregistration method before refined registration due to its O(n) exponential calculations. The features of SIFT are highly distinctive and invariant to image scaling and rotation, and partially invariant to change in illumination and contrast, it is robust and repeatable for cursorily matching two images. We also altered the descriptor so our method can deal with multimodality preregistration.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-01-01
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703
Towards a Multi-Resolution Model of Seismic Risk in Central Asia. Challenge and perspectives
NASA Astrophysics Data System (ADS)
Pittore, M.; Wieland, M.; Bindi, D.; Parolai, S.
2011-12-01
Assessing seismic risk, defined as the probability of occurrence of economical and social losses as consequence of an earthquake, both at regional and at local scale is a challenging, multi-disciplinary task. In order to provide a reliable estimate, diverse information must be gathered by seismologists, geologists, engineers and civil authorities, and carefully integrated keeping into account the different levels of uncertainty. The research towards an integrated methodology, able to seamlessly describe seismic risk at different spatial scales is challenging, but discloses new application perspectives, particularly in those countries which suffer from a relevant seismic hazard but do not have resources for a standard assessment. Central Asian countries in particular, which exhibit one of the highest seismic hazard in the world, are experiencing a steady demographic growth, often accompanied by informal settlement and urban sprawling. A reliable evaluation of how these factors affect the seismic risk, together with a realistic assessment of the assets exposed to seismic hazard and their structural vulnerability is of particular importance, in order to undertake proper mitigation actions and to promptly and efficiently react to a catastrophic event. New strategies are needed to efficiently cope with systematic lack of information and uncertainties. An original approach is presented to assess seismic risk based on integration of information coming from remote-sensing and ground-based panoramic imaging, in situ measurements, expert knowledge and already available data. Efficient sampling strategies based on freely available medium-resolution multi-spectral satellite images are adopted to optimize data collection and validation, in a multi-scale approach. Panoramic imaging is also considered as a valuable ground-based visual data collection technique, suitable both for manual and automatic analysis. A full-probabilistic framework based on Bayes Network is proposed to integrate available information taking into account both aleatory and epistemic uncertainties. An improved risk model for the capital of Kyrgyz Republic, Biskek, has been developed following this approach and tested based on different earthquake scenarios. Preliminary results will be presented and discussed.
NASA Astrophysics Data System (ADS)
Wang, Guanxi; Tie, Yun; Qi, Lin
2017-07-01
In this paper, we propose a novel approach based on Depth Maps and compute Multi-Scale Histograms of Oriented Gradient (MSHOG) from sequences of depth maps to recognize actions. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between two consecutive projected maps is accumulated through a depth video sequence to form a Depth Map, which is called Depth Motion Trail Images (DMTI). The MSHOG is then computed from the Depth Maps for the representation of an action. In addition, we apply L2-Regularized Collaborative Representation (L2-CRC) to classify actions. We evaluate the proposed approach on MSR Action3D dataset and MSRGesture3D dataset. Promising experimental result demonstrates the effectiveness of our proposed method.
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well asmore » histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.« less
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging via illuminating specimens with a separate thin sheet of laser. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of low power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implication in congenital heart diseases.
NASA Astrophysics Data System (ADS)
Chen, Cheng; Jin, Dakai; Zhang, Xiaoliu; Levy, Steven M.; Saha, Punam K.
2017-03-01
Osteoporosis is associated with an increased risk of low-trauma fractures. Segmentation of trabecular bone (TB) is essential to assess TB microstructure, which is a key determinant of bone strength and fracture risk. Here, we present a new method for TB segmentation for in vivo CT imaging. The method uses Hessian matrix-guided anisotropic diffusion to improve local separability of trabecular structures, followed by a new multi-scale morphological reconstruction algorithm for TB segmentation. High sensitivity (0.93), specificity (0.93), and accuracy (0.92) were observed for the new method based on regional manual thresholding on in vivo CT images. Mechanical tests have shown that TB segmentation using the new method improved the ability of derived TB spacing measure for predicting actual bone strength (R2=0.83).
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data, the optimal segmentation parameters methodof object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of high-resolution remote sensing image. An improved weighted mean-variance method was proposed andused to calculatethe optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. Ahierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore the results of this procedure may be incorporated into a classification scheme. PMID:27362762
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data, the optimal segmentation parameters methodof object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of high-resolution remote sensing image. An improved weighted mean-variance method was proposed andused to calculatethe optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. Ahierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore the results of this procedure may be incorporated into a classification scheme.
An algorithm for pavement crack detection based on multiscale space
NASA Astrophysics Data System (ADS)
Liu, Xiang-long; Li, Qing-quan
2006-10-01
Conventional human-visual and manual field pavement crack detection method and approaches are very costly, time-consuming, dangerous, labor-intensive and subjective. They possess various drawbacks such as having a high degree of variability of the measure results, being unable to provide meaningful quantitative information and almost always leading to inconsistencies in crack details over space and across evaluation, and with long-periodic measurement. With the development of the public transportation and the growth of the Material Flow System, the conventional method can far from meet the demands of it, thereby, the automatic pavement state data gathering and data analyzing system come to the focus of the vocation's attention, and developments in computer technology, digital image acquisition, image processing and multi-sensors technology made the system possible, but the complexity of the image processing always made the data processing and data analyzing come to the bottle-neck of the whole system. According to the above description, a robust and high-efficient parallel pavement crack detection algorithm based on Multi-Scale Space is proposed in this paper. The proposed method is based on the facts that: (1) the crack pixels in pavement images are darker than their surroundings and continuous; (2) the threshold values of gray-level pavement images are strongly related with the mean value and standard deviation of the pixel-grey intensities. The Multi-Scale Space method is used to improve the data processing speed and minimize the effectiveness caused by image noise. Experiment results demonstrate that the advantages are remarkable: (1) it can correctly discover tiny cracks, even from very noise pavement image; (2) the efficiency and accuracy of the proposed algorithm are superior; (3) its application-dependent nature can simplify the design of the entire system.
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2014-12-01
Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and the highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the develop approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibly of oil in water. To calculate LR, the images are decomposed using two-dimensional continuous wavelet transform (2D-CWT), and transformed into Holder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.
NASA Astrophysics Data System (ADS)
Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.
2016-10-01
X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.
Calibrations for a MCAO Imaging System
NASA Astrophysics Data System (ADS)
Hibon, Pascale; B. Neichel; V. Garrel; R. Carrasco
2017-09-01
"GeMS, the Gemini Multi conjugate adaptive optics System installed at the Gemini South telescope (Cerro Pachon, Chile) started to deliver science since the beginning of 2013. GeMS is using the Multi Conjugate AdaptiveOptics (MCAO) technique allowing to dramatically increase the corrected field of view (FOV) compared to classical Single Conjugated Adaptive Optics (SCAO) systems. It is the first sodium-based multi-Laser Guide Star (LGS) adaptive optics system. It has been designed to feed two science instruments: GSAOI, a 4k×4k NIR imager covering 85"×85" with 0.02" pixel scale, and Flamingos-2, a NIR multi-object spectrograph. We present here an overview of the calibrations necessary for reducing and analysing the science datasets obtained with GeMS+GSAOI."
The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation
NASA Astrophysics Data System (ADS)
Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.
2018-04-01
The study chooses the standard stripe and dual polarization SAR images of GF-3 as the basic data. Residential areas extraction processes and methods based upon GF-3 images texture segmentation are compared and analyzed. GF-3 images processes include radiometric calibration, complex data conversion, multi-look processing, images filtering, and then conducting suitability analysis for different images filtering methods, the filtering result show that the filtering method of Kuan is efficient for extracting residential areas, then, we calculated and analyzed the texture feature vectors using the GLCM (the Gary Level Co-occurrence Matrix), texture feature vectors include the moving window size, step size and angle, the result show that window size is 11*11, step is 1, and angle is 0°, which is effective and optimal for the residential areas extracting. And with the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images, and extracted the residential areas by threshold setting. The result of residential areas extraction verified and assessed by confusion matrix. Overall accuracy is 0.897, kappa is 0.881, and then we extracted the residential areas by SVM classification based on GF-3 images, the overall accuracy is less 0.09 than the accuracy of extraction method based on GF-3 Texture Image Segmentation. We reached the conclusion that residential areas extraction based on GF-3 SAR texture image multi-scale segmentation is simple and highly accurate. although, it is difficult to obtain multi-spectrum remote sensing image in southern China, in cloudy and rainy weather throughout the year, this paper has certain reference significance.
[Object-oriented aquatic vegetation extracting approach based on visible vegetation indices.
Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning
2016-05-01
Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal image segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices, and built up a decision tree rule. A membership function was used to automatically classify the study area and an aquatic vegetation map was generated. The results showed the overall accuracy of image classification using the supervised classification was 53.7%, and the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with pixel-based supervised classification method, the OBIA method improved significantly the image classification result and further increased the accuracy of extracting the aquatic vegetation. The Kappa value of supervised classification was 0.4, and the Kappa value based OBIA was 0.9. The experimental results demonstrated that using visible vegetation indices derived from the mini-UAV data and OBIA method extracting the aquatic vegetation developed in this study was feasible and could be applied in other physically similar areas.
A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate
NASA Astrophysics Data System (ADS)
Yang, Jun; Zhu, Shi-Jiao
There is a significant need to detect and extract the graffito advertisement embedded in the housing images automatically. However, it is a hard job to separate the advertisement region well since housing images generally have complex background. In this paper, a detecting algorithm which uses multi-scale Gabor filters to identify graffito regions is proposed. Firstly, multi-scale Gabor filters with different directions are applied to housing images, then the approach uses these frequency data to find likely graffito regions using the relationship of different channels, it exploits the ability of different filters technique to solve the detection problem with low computational efforts. Lastly, the method is tested on several real estate images which are embedded graffito advertisement to verify its robustness and efficiency. The experiments demonstrate graffito regions can be detected quite well.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
The accurate positioning of optical satellite image without control is the precondition for remote sensing application and small/medium scale mapping in large abroad areas or with large-scale images. In this paper, aiming at the geometric features of optical satellite image, based on a widely used optimization method of constraint problem which is called Alternating Direction Method of Multipliers (ADMM) and RFM least-squares block adjustment, we propose a GCP independent block adjustment method for the large-scale domestic high resolution optical satellite image - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, the virtual "average" control points are built to solve the rank defect problem and qualitative and quantitative analysis in block adjustment without control. The test results prove that the horizontal and vertical accuracy of multi-covered and multi-temporal satellite images are better than 10 m and 6 m. Meanwhile the mosaic problem of the adjacent areas in large area DOM production can be solved if the public geographic information data is introduced as horizontal and vertical constraints in the block adjustment process. Finally, through the experiments by using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of our developed procedure will be presented and studied in this paper.
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999), and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis s/w package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arc minutes. The large scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the x-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features including a large filament-like structure extending as far as about 5 arcminutes to North-West.
HCP: A Flexible CNN Framework for Multi-label Image Classification.
Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng
2015-10-26
Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on Pascal VOC 2007 and VOC 2012 multi-label image datasets well demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-arts. In particular, the mAP reaches 90.5% by HCP only and 93.2% after the fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images
NASA Astrophysics Data System (ADS)
Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.
2016-07-01
In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high definition, high quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at the pixel scale of 3.2", in cartesian galactic coordinates. Then we process this image piecewise, applying a custom multi-scale local stretching algorithm, enforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform an artifact-free details sharpening. Thanks to this tool, we have thus produced a stunning giga-pixel color image of the far-infrared Galactic Plane that we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis . The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis . Also
NASA Astrophysics Data System (ADS)
Vo, Kiet T.; Sowmya, Arcot
A directional multi-scale modeling scheme based on wavelet and contourlet transforms is employed to describe HRCT lung image textures for classifying four diffuse lung disease patterns: normal, emphysema, ground glass opacity (GGO) and honey-combing. Generalized Gaussian density parameters are used to represent the detail sub-band features obtained by wavelet and contourlet transforms. In addition, support vector machines (SVMs) with excellent performance in a variety of pattern classification problems are used as classifier. The method is tested on a collection of 89 slices from 38 patients, each slice of size 512x512, 16 bits/pixel in DICOM format. The dataset contains 70,000 ROIs of those slices marked by experienced radiologists. We employ this technique at different wavelet and contourlet transform scales for diffuse lung disease classification. The technique presented here has best overall sensitivity 93.40% and specificity 98.40%.
NASA Astrophysics Data System (ADS)
Chen, Q.; Rice, A. F.
2005-03-01
Scanning Probe Recognition Microscopy is a new scanning probe capability under development within our group to reliably return to and directly interact with a specific nanobiological feature of interest. In previous work, we have successfully recognized and classified tubular versus globular biological objects from experimental atomic force microscope images using a method based on normalized central moments [ref. 1]. In this paper we extend this work to include recognition schemes appropriate for cellular and sub-cellular structures. Globular cells containing tubular actin filaments are under investigation. Thus there are differences in external/internal shapes and scales. Continuous Wavelet Transform with a differential Gaussian mother wavelet is employed for multi- scale analysis. [ref. 1] Q. Chen, V. Ayres and L. Udpa, ``Biological Investigation Using Scanning Probe Recognition Microscopy,'' Proceedings 3rd IEEE Conference on Nanotechnology, vol. 2, p 863-865 (2003).
Towards Omni-Tomography—Grand Fusion of Multiple Modalities for Simultaneous Interior Tomography
Wang, Ge; Zhang, Jie; Gao, Hao; Weir, Victor; Yu, Hengyong; Cong, Wenxiang; Xu, Xiaochen; Shen, Haiou; Bennett, James; Furth, Mark; Wang, Yue; Vannier, Michael
2012-01-01
We recently elevated interior tomography from its origin in computed tomography (CT) to a general tomographic principle, and proved its validity for other tomographic modalities including SPECT, MRI, and others. Here we propose “omni-tomography”, a novel concept for the grand fusion of multiple tomographic modalities for simultaneous data acquisition in a region of interest (ROI). Omni-tomography can be instrumental when physiological processes under investigation are multi-dimensional, multi-scale, multi-temporal and multi-parametric. Both preclinical and clinical studies now depend on in vivo tomography, often requiring separate evaluations by different imaging modalities. Over the past decade, two approaches have been used for multimodality fusion: Software based image registration and hybrid scanners such as PET-CT, PET-MRI, and SPECT-CT among others. While there are intrinsic limitations with both approaches, the main obstacle to the seamless fusion of multiple imaging modalities has been the bulkiness of each individual imager and the conflict of their physical (especially spatial) requirements. To address this challenge, omni-tomography is now unveiled as an emerging direction for biomedical imaging and systems biomedicine. PMID:22768108
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
A multi-scale segmentation approach to filling gaps in Landsat ETM+ SLC-off images
Maxwell, S.K.; Schmidt, Gail L.; Storey, James C.
2007-01-01
On 31 May 2003, the Landsat Enhanced Thematic Plus (ETM+) Scan Line Corrector (SLC) failed, causing the scanning pattern to exhibit wedge-shaped scan-to-scan gaps. We developed a method that uses coincident spectral data to fill the image gaps. This method uses a multi-scale segment model, derived from a previous Landsat SLC-on image (image acquired prior to the SLC failure), to guide the spectral interpolation across the gaps in SLC-off images (images acquired after the SLC failure). This paper describes the process used to generate the segment model, provides details of the gap-fill algorithm used in deriving the segment-based gap-fill product, and presents the results of the gap-fill process applied to grassland, cropland, and forest landscapes. Our results indicate this product will be useful for a wide variety of applications, including regional-scale studies, general land cover mapping (e.g. forest, urban, and grass), crop-specific mapping and monitoring, and visual assessments. Applications that need to be cautious when using pixels in the gap areas include any applications that require per-pixel accuracy, such as urban characterization or impervious surface mapping, applications that use texture to characterize landscape features, and applications that require accurate measurements of small or narrow landscape features such as roads, farmsteads, and riparian areas.
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method - BackTrackBB (Poiata et al. 2016) - allowing to image energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field, recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non stationary signal, i.e. time series, by means of higher-order statistics or energy envelope characteristic functions. Such signal-processing is designed to detect in time signal transients - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFE, tremors), and to improve the performance and the robustness of the detection-and-location location step. The initial detection-location, based on a single phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one year long precursory phase of 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during the multiple episodes of tectonic tremor activity in southwestern Japan.
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of data parallel application. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it called many computing nodes to process image storage blocks and pyramids in the background to improve the efficiency of image reading and application and sovled the need for concurrent multi-user high-speed access to remotely sensed data. It verified the rationality, reliability and superiority of the system design by testing the storage efficiency of different image data and multi-users and analyzing the distributed storage architecture to improve the application efficiency of remote sensing images through building an actual Hadoop service system.
NASA Astrophysics Data System (ADS)
Rajchl, Martin; Abhari, Kamyar; Stirrat, John; Ukwatta, Eranga; Cantor, Diego; Li, Feng P.; Peters, Terry M.; White, James A.
2014-03-01
Multi-center trials provide the unique ability to investigate novel techniques across a range of geographical sites with sufficient statistical power, the inclusion of multiple operators determining feasibility under a wider array of clinical environments and work-flows. For this purpose, we introduce a new means of distributing pre-procedural cardiac models for image-guided interventions across a large scale multi-center trial. In this method, a single core facility is responsible for image processing, employing a novel web-based interface for model visualization and distribution. The requirements for such an interface, being WebGL-based, are minimal and well within the realms of accessibility for participating centers. We then demonstrate the accuracy of our approach using a single-center pacemaker lead implantation trial with generic planning models.
Seamline Determination Based on PKGC Segmentation for Remote Sensing Image Mosaicking
Dong, Qiang; Liu, Jinghong
2017-01-01
This paper presents a novel method of seamline determination for remote sensing image mosaicking. A two-level optimization strategy is applied to determine the seamline. Object-level optimization is executed firstly. Background regions (BRs) and obvious regions (ORs) are extracted based on the results of parametric kernel graph cuts (PKGC) segmentation. The global cost map which consists of color difference, a multi-scale morphological gradient (MSMG) constraint, and texture difference is weighted by BRs. Finally, the seamline is determined in the weighted cost from the start point to the end point. Dijkstra’s shortest path algorithm is adopted for pixel-level optimization to determine the positions of seamline. Meanwhile, a new seamline optimization strategy is proposed for image mosaicking with multi-image overlapping regions. The experimental results show the better performance than the conventional method based on mean-shift segmentation. Seamlines based on the proposed method bypass the obvious objects and take less time in execution. This new method is efficient and superior for seamline determination in remote sensing image mosaicking. PMID:28749446
NASA Astrophysics Data System (ADS)
Mabu, Shingo; Kido, Shoji; Hashimoto, Noriaki; Hirano, Yasushi; Kuremoto, Takashi
2018-02-01
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in Computed Tomography (CT) images. Because CT images are gray scale, DCNN usually uses one channel for inputting image data. On the other hand, this research uses multi-channel DCNN where each channel corresponds to the original raw image or the images transformed by some preprocessing techniques. In fact, the information obtained only from raw images is limited and some conventional research suggested that preprocessing of images contributes to improving the classification accuracy. Thus, the combination of the original and preprocessed images is expected to show higher accuracy. The proposed method realizes region of interest (ROI)-based opacity annotation. We used lung CT images taken in Yamaguchi University Hospital, Japan, and they are divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacities (classes). The DCNN structure is based on VGG network that secured the first and second places in ImageNet ILSVRC-2014. From the experimental results, the classification accuracy of the proposed method was better than the conventional method with single channel, and there was a significant difference between them.
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
Research on fusion algorithm of polarization image in tetrolet domain
NASA Astrophysics Data System (ADS)
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes which are shapes made by connecting four equal-sized squares. A fusion method for polarization images based on tetrolet transform is proposed. Firstly, the magnitude of polarization image and angle of polarization image can be decomposed into low-frequency coefficients and high-frequency coefficients with multi-scales and multi-directions using tetrolet transform. For the low-frequency coefficients, the average fusion method is used. According to edge distribution differences in high frequency sub-band images, for the directional high-frequency coefficients are used to select the better coefficients by region spectrum entropy algorithm for fusion. At last the fused image can be obtained by utilizing inverse transform for fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and the fused image has better subjective visual effect
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
NASA Astrophysics Data System (ADS)
Wang, Fang; Wang, Hu; Xiao, Nan; Shen, Yang; Xue, Yaoke
2018-03-01
With the development of related technology gradually mature in the field of optoelectronic information, it is a great demand to design an optical system with high resolution and wide field of view(FOV). However, as it is illustrated in conventional Applied Optics, there is a contradiction between these two characteristics. Namely, the FOV and imaging resolution are limited by each other. Here, based on the study of typical wide-FOV optical system design, we propose the monocentric multi-scale system design method to solve this problem. Consisting of a concentric spherical lens and a series of micro-lens array, this system has effective improvement on its imaging quality. As an example, we designed a typical imaging system, which has a focal length of 35mm and a instantaneous field angle of 14.7", as well as the FOV set to be 120°. By analyzing the imaging quality, we demonstrate that in different FOV, all the values of MTF at 200lp/mm are higher than 0.4 when the sampling frequency of the Nyquist is 200lp/mm, which shows a good accordance with our design.
NASA Astrophysics Data System (ADS)
Aghaei, A.
2017-12-01
Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally-constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity such as in carbonates and tight sandstones. Multi-scale imaging and constructions of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated based on images. A helical X-ray micro-CT scanner with a high cone-angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore network based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.
Web tools for large-scale 3D biological images and atlases
2012-01-01
Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data delivering compressed tiled images enabling users to browse through very-large volume data in the context of a standard web-browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well a multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
Sensing Urban Land-Use Patterns by Integrating Google Tensorflow and Scene-Classification Models
NASA Astrophysics Data System (ADS)
Yao, Y.; Liang, H.; Li, X.; Zhang, J.; He, J.
2017-09-01
With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolution neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.
Scale invariant texture descriptors for classifying celiac disease
Hegenbart, Sebastian; Uhl, Andreas; Vécsei, Andreas; Wimmer, Georg
2013-01-01
Scale invariant texture recognition methods are applied for the computer assisted diagnosis of celiac disease. In particular, emphasis is given to techniques enhancing the scale invariance of multi-scale and multi-orientation wavelet transforms and methods based on fractal analysis. After fine-tuning to specific properties of our celiac disease imagery database, which consists of endoscopic images of the duodenum, some scale invariant (and often even viewpoint invariant) methods provide classification results improving the current state of the art. However, not each of the investigated scale invariant methods is applicable successfully to our dataset. Therefore, the scale invariance of the employed approaches is explicitly assessed and it is found that many of the analyzed methods are not as scale invariant as they theoretically should be. Results imply that scale invariance is not a key-feature required for successful classification of our celiac disease dataset. PMID:23481171
Poisson denoising on the sphere
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.
2009-08-01
In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on sphere, called Multi-Scale Variance Stabilizing Transform on Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists in decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST on the coefficients in order to get quasi-Gaussian stabilized coefficients. In this present article, the used multi-scale transform is the Isotropic Undecimated Wavelet Transform. Then, hypothesis tests are made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HST). The method is tested on simulated Fermi data.
Label-free, multi-scale imaging of ex-vivo mouse brain using spatial light interference microscopy
NASA Astrophysics Data System (ADS)
Min, Eunjung; Kandel, Mikhail E.; Ko, Chemyong J.; Popescu, Gabriel; Jung, Woonggyu; Best-Popescu, Catherine
2016-12-01
Brain connectivity spans over broad spatial scales, from nanometers to centimeters. In order to understand the brain at multi-scale, the neural network in wide-field has been visualized in detail by taking advantage of light microscopy. However, the process of staining or addition of fluorescent tags is commonly required, and the image contrast is insufficient for delineation of cytoarchitecture. To overcome this barrier, we use spatial light interference microscopy to investigate brain structure with high-resolution, sub-nanometer pathlength sensitivity without the use of exogenous contrast agents. Combining wide-field imaging and a mosaic algorithm developed in-house, we show the detailed architecture of cells and myelin, within coronal olfactory bulb and cortical sections, and from sagittal sections of the hippocampus and cerebellum. Our technique is well suited to identify laminar characteristics of fiber tract orientation within white matter, e.g. the corpus callosum. To further improve the macro-scale contrast of anatomical structures, and to better differentiate axons and dendrites from cell bodies, we mapped the tissue in terms of its scattering property. Based on our results, we anticipate that spatial light interference microscopy can potentially provide multiscale and multicontrast perspectives of gross and microscopic brain anatomy.
Object-based class modelling for multi-scale riparian forest habitat mapping
NASA Astrophysics Data System (ADS)
Strasser, Thomas; Lang, Stefan
2015-05-01
Object-based class modelling allows for mapping complex, hierarchical habitat systems. The riparian zone, including forests, represents such a complex ecosystem. Forests within riparian zones are biologically high productive and characterized by a rich biodiversity; thus considered of high community interest with an imperative to be protected and regularly monitored. Satellite earth observation (EO) provides tools for capturing the current state of forest habitats such as forest composition including intermixture of non-native tree species. Here we present a semi-automated object based image analysis (OBIA) approach for the mapping of riparian forests by applying class modelling of habitats based on the European Nature Information System (EUNIS) habitat classifications and the European Habitats Directive (HabDir) Annex 1. A very high resolution (VHR) WorldView-2 satellite image provided the required spatial and spectral details for a multi-scale image segmentation and rule-base composition to generate a six-level hierarchical representation of riparian forest habitats. Thereby habitats were hierarchically represented within an image object hierarchy as forest stands, stands of homogenous tree species and single trees represented by sunlit tree crowns. 522 EUNIS level 3 (EUNIS-3) habitat patches with a mean patch size (MPS) of 12,349.64 m2 were modelled from 938 forest stand patches (MPS = 6868.20 m2) and 43,742 tree stand patches (MPS = 140.79 m2). The delineation quality of the modelled EUNIS-3 habitats (focal level) was quantitatively assessed to an expert-based visual interpretation showing a mean deviation of 11.71%.
NASA Astrophysics Data System (ADS)
Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric
2011-03-01
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying the image based on features like the complexity of the background, the visibility of the disease (lesions). Therefore, an automatic medical background classification tool for mammograms would help for such clinical studies. This classification tool is based on a multi-content analysis framework (MCA) which was firstly developed to recognize image content of computer screen shots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with a satisfying accuracy. BI-RADS (Breast Imaging Reporting Data System) density scale is used for grouping the mammograms, which standardizes the mammography reporting terminology and assessment and recommendation categories. Selected features are input into a decision tree classification scheme in MCA framework, which is the so called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The results of classification for one "strong classifier" show the good accuracy with the high true positive rates. For the four categories the results are: TP=90.38%, TN=67.88%, FP=32.12% and FN =9.62%.
Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe
2016-03-01
Riparian forests are critically endangered many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of this very-high-resolution and hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of the unmanned aerial system (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives.The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only because of their use as proxies for the riparian zone ecological integrity but also because of their use for river management.The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (bands ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes, 79.5 and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of the site 1 performed the best (90.6%).The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach competes successfully with those developed using more expensive imagery, such as multi-spectral and hyperspectral airborne imagery. The high overall accuracy results obtained by the classification of the diseased alders open the door to applications dedicated to monitoring of the health conditions of riparian forest. Our methodological framework will allow UAS users to manage large imagery metric datasets derived from those dense time series.
NASA Astrophysics Data System (ADS)
Khan, Faisal; Enzmann, Frieder; Kersten, Michael
2016-03-01
Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
A detail enhancement and dynamic range adjustment algorithm for high dynamic range images
NASA Astrophysics Data System (ADS)
Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua
2014-08-01
Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast. What's more, these images are difficult to be reproduced on low dynamic range displaying mediums. If much more information is to be acquired when these images are displayed on PCs, some specific transforms, such as compressing the dynamic range, enhancing the portions of little difference in original contrast and highlighting the texture details on the premise of keeping the parts of large contrast, are needed. To this ends, a multi-scale guided filter enhancement algorithm which derives from the single-scale guided filter based on the analysis of non-physical model is proposed in this paper. Firstly, this algorithm decomposes the original HDR images into base image and detail images of different scales, and then it adaptively selects a transform function which acts on the enhanced detail images and original images. By comparing the treatment effects of HDR images and low dynamic range (LDR) images of different scene features, it proves that this algorithm, on the basis of maintaining the hierarchy and texture details of images, not only improves the contrast and enhances the details of images, but also adjusts the dynamic range well. Thus, it is much suitable for human observation or analytical processing of machines.
Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs
NASA Astrophysics Data System (ADS)
Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung
2014-05-01
The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) stepped out to be a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object-based accuracy assessment is conducted by quantitatively comparing extracted landslide objects with landslide polygons that were visually interpreted by local experts. The applicability and transferability of the mapping system are evaluated by comparing initial accuracies with those achieved for the following two tests: first, usage of a SPOT image from the same year, but for a different area within the Baichi catchment; second, usage of SPOT images from multiple years for the same region. The integration of the common knowledge via digital landslide signatures is new in object-based landslide studies. In combination with strategies to optimize image segmentation this may lead to a more objective, transferable and stable knowledge-based system for the mapping of landslides from optical satellite data and DEMs.
Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform
NASA Astrophysics Data System (ADS)
Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili
2017-08-01
A key problem of effectiveness evaluation for fractured-vuggy carbonatite reservoir is how to accurately extract fracture and vug information from electrical imaging logging data. Drill bits quaked during drilling and resulted in rugged surfaces of borehole walls and thus conductivity fluctuations in electrical imaging logging data. The occurrence of the conductivity fluctuations (formation background noise) directly affects the fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on wavelet packet transform to eliminate the influence of rugged borehole walls. The noise is present as fluctuations in button-electrode conductivity curves and as pockmarked responses in electrical imaging logging static images. The noise has responses in various scales and frequency ranges and has low conductivity compared with fractures or vugs. Our de-noising method is to decompose the data into coefficients with wavelet packet transform on a quadratic spline basis, then shrink high-frequency wavelet packet coefficients in different resolutions with minimax threshold and hard-threshold function, and finally reconstruct the thresholded coefficients. We use electrical imaging logging data collected from fractured-vuggy Ordovician carbonatite reservoir in Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown as well to prove the effectiveness of the de-noising procedure.
Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.
Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun
2018-06-01
Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species, which was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and exposure time required for maintaining a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose an early work specially on deblurring sequential MSI images, which is distinguished from many of the current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. It is accomplished by incorporating several a priori constraints including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel and the similarity between temporally-neighboring images in MSI sequence. Specifically, we model the similarity between MSI images with mutual information considering the different wavelengths used for capturing different images in MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Dubra, Alfredo; Tam, Johnny
2016-03-01
Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.
A rapid and robust gradient measurement technique using dynamic single-point imaging.
Jang, Hyungseok; McMillan, Alan B
2017-09-01
We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present to use a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects, we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than that by the previous multilinear based approach (88.7%), even we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective to classify multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
Hayworth, Kenneth J.; Morgan, Josh L.; Schalek, Richard; Berger, Daniel R.; Hildebrand, David G. C.; Lichtman, Jeff W.
2014-01-01
The automated tape-collecting ultramicrotome (ATUM) makes it possible to collect large numbers of ultrathin sections quickly—the equivalent of a petabyte of high resolution images each day. However, even high throughput image acquisition strategies generate images far more slowly (at present ~1 terabyte per day). We therefore developed WaferMapper, a software package that takes a multi-resolution approach to mapping and imaging select regions within a library of ultrathin sections. This automated method selects and directs imaging of corresponding regions within each section of an ultrathin section library (UTSL) that may contain many thousands of sections. Using WaferMapper, it is possible to map thousands of tissue sections at low resolution and target multiple points of interest for high resolution imaging based on anatomical landmarks. The program can also be used to expand previously imaged regions, acquire data under different imaging conditions, or re-image after additional tissue treatments. PMID:25018701
The MIND PALACE: A Multi-Spectral Imaging and Spectroscopy Database for Planetary Science
NASA Astrophysics Data System (ADS)
Eshelman, E.; Doloboff, I.; Hara, E. K.; Uckert, K.; Sapers, H. M.; Abbey, W.; Beegle, L. W.; Bhartia, R.
2017-12-01
The Multi-Instrument Database (MIND) is the web-based home to a well-characterized set of analytical data collected by a suite of deep-UV fluorescence/Raman instruments built at the Jet Propulsion Laboratory (JPL). Samples derive from a growing body of planetary surface analogs, mineral and microbial standards, meteorites, spacecraft materials, and other astrobiologically relevant materials. In addition to deep-UV spectroscopy, datasets stored in MIND are obtained from a variety of analytical techniques obtained over multiple spatial and spectral scales including electron microscopy, optical microscopy, infrared spectroscopy, X-ray fluorescence, and direct fluorescence imaging. Multivariate statistical analysis techniques, primarily Principal Component Analysis (PCA), are used to guide interpretation of these large multi-analytical spectral datasets. Spatial co-referencing of integrated spectral/visual maps is performed using QGIS (geographic information system software). Georeferencing techniques transform individual instrument data maps into a layered co-registered data cube for analysis across spectral and spatial scales. The body of data in MIND is intended to serve as a permanent, reliable, and expanding database of deep-UV spectroscopy datasets generated by this unique suite of JPL-based instruments on samples of broad planetary science interest.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.
2017-01-01
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351
Turbulent Flow Structure Inside a Canopy with Complex Multi-Scale Elements
NASA Astrophysics Data System (ADS)
Bai, Kunlun; Katz, Joseph; Meneveau, Charles
2015-06-01
Particle image velocimetry laboratory measurements are carried out to study mean flow distributions and turbulent statistics inside a canopy with complex geometry and multiple scales consisting of fractal, tree-like objects. Matching the optical refractive indices of the tree elements with those of the working fluid provides unobstructed optical paths for both illuminations and image acquisition. As a result, the flow fields between tree branches can be resolved in great detail, without optical interference. Statistical distributions of mean velocity, turbulence stresses, and components of dispersive fluxes are documented and discussed. The results show that the trees leave their signatures in the flow by imprinting wake structures with shapes similar to the trees. The velocities in both wake and non-wake regions significantly deviate from the spatially-averaged values. These local deviations result in strong dispersive fluxes, which are important to account for in canopy-flow modelling. In fact, we find that the streamwise normal dispersive flux inside the canopy has a larger magnitude (by up to four times) than the corresponding Reynolds normal stress. Turbulent transport in horizontal planes is studied in the framework of the eddy viscosity model. Scatter plots comparing the Reynolds shear stress and mean velocity gradient are indicative of a linear trend, from which one can calculate the eddy viscosity and mixing length. Similar to earlier results from the wake of a single tree, here we find that inside the canopy the mean mixing length decreases with increasing elevation. This trend cannot be scaled based on a single length scale, but can be described well by a model, which considers the coexistence of multi-scale branches. This agreement indicates that the multi-scale information and the clustering properties of the fractal objects should be taken into consideration in flows inside multi-scale canopies.
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using novel approach of image enhancement and coefficient of variation of multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods show that our methodology outperforms them with an increase of 4-9% in segmentation accuracy with maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
Multi-scale Gaussian representation and outline-learning based cell image segmentation
2013-01-01
Background High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. Methods We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using novel approach of image enhancement and coefficient of variation of multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. Results and conclusions We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods show that our methodology outperforms them with an increase of 4-9% in segmentation accuracy with maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488
Morales-Navarrete, Hernán; Segovia-Miranda, Fabián; Klukowski, Piotr; Meyer, Kirstin; Nonaka, Hidenori; Marsico, Giovanni; Chernykh, Mikhail; Kalaidzidis, Alexander; Zerial, Marino; Kalaidzidis, Yannis
2015-01-01
A prerequisite for the systems biology analysis of tissues is an accurate digital three-dimensional reconstruction of tissue structure based on images of markers covering multiple scales. Here, we designed a flexible pipeline for the multi-scale reconstruction and quantitative morphological analysis of tissue architecture from microscopy images. Our pipeline includes newly developed algorithms that address specific challenges of thick dense tissue reconstruction. Our implementation allows for a flexible workflow, scalable to high-throughput analysis and applicable to various mammalian tissues. We applied it to the analysis of liver tissue and extracted quantitative parameters of sinusoids, bile canaliculi and cell shapes, recognizing different liver cell types with high accuracy. Using our platform, we uncovered an unexpected zonation pattern of hepatocytes with different size, nuclei and DNA content, thus revealing new features of liver tissue organization. The pipeline also proved effective to analyse lung and kidney tissue, demonstrating its generality and robustness. DOI: http://dx.doi.org/10.7554/eLife.11214.001 PMID:26673893
Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki
2015-03-10
This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki
2015-01-01
This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm was proposed for medical images fusion in this paper, which combined gradient minimization smoothing filter (GMSF) with non-sampled directional filter bank (NSDFB). In order to preserve more detail information, a multi scale edge preserving decomposition framework (MEDF) was used to decompose an image into a base image and a series of detail images. For the fusion of base images, the local Gaussian membership function is applied to construct the fusion weighted factor. For the fusion of detail images, NSDFB was applied to decompose each detail image into multiple directional sub-images that are fused by pulse coupled neural network (PCNN) respectively. The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
A rapid extraction of landslide disaster information research based on GF-1 image
NASA Astrophysics Data System (ADS)
Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na
2015-08-01
In recent years, the landslide disasters occurred frequently because of the seismic activity. It brings great harm to people's life. It has caused high attention of the state and the extensive concern of society. In the field of geological disaster, landslide information extraction based on remote sensing has been controversial, but high resolution remote sensing image can improve the accuracy of information extraction effectively with its rich texture and geometry information. Therefore, it is feasible to extract the information of earthquake- triggered landslides with serious surface damage and large scale. Taking the Wenchuan county as the study area, this paper uses multi-scale segmentation method to extract the landslide image object through domestic GF-1 images and DEM data, which uses the estimation of scale parameter tool to determine the optimal segmentation scale; After analyzing the characteristics of landslide high-resolution image comprehensively and selecting spectrum feature, texture feature, geometric features and landform characteristics of the image, we can establish the extracting rules to extract landslide disaster information. The extraction results show that there are 20 landslide whose total area is 521279.31 .Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates its efficient and feasible to extract earthquake landslide disaster information based on high resolution remote sensing and it provides important technical support for post-disaster emergency investigation and disaster assessment.
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of image fusion of optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complimentary information, and their joint visualization can be useful and more convenient for human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that utilizes some peculiarities of human color perception and based on the grey-scale structural visualization. Benefits of presented algorithm are exemplified by satellite imagery.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows for the approximation inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction without explicit knowledge of the noise variance using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We will demonstrate by experiments that this new technique is more flexible to work with either random or restricted sampling scenarios better than its competitors. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme, requires no new hardware components and allows the aperture to be compressed. Also, it presents many new applications and advantages which include strong resistance to countermesasures and interception, imaging much wider swaths and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples which minimize the representation error with a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination robust face recognition and automatic target recognition are presented.
Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng
2016-11-08
Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. The statistical method is an important component of effective vessel segmentation; however, several limitations discourage the segmentation effect, i.e., dependence of the image modality, uneven contrast media, bias field, and overlapping intensity distribution of the object and background. In addition, the mixture models of the statistical methods are constructed relaying on the characteristics of the image histograms. Thus, it is a challenging issue for the traditional methods to be available in vessel segmentation from multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model has been proposed for various angiography modalities. Our method mainly consists of three parts. Firstly, multi-scale filtering algorithm was used on the original images to enhance vessels and suppress noises. As a result, the filtered data achieved a new statistical characteristic. Secondly, a mixture model formed by three probabilistic distributions (two Exponential distributions and one Gaussian distribution) was built to fit the histogram curve of the filtered data, where the expectation maximization (EM) algorithm was used for parameters estimation. Finally, three-dimensional (3D) Markov random field (MRF) were employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noises have been devised. Meanwhile, four clinical angiographic data sets from different human organs have been used to qualitatively validate the method. To further test the performance, comparison tests between the proposed method and the traditional ones have been conducted on two different brain magnetic resonance angiography (MRA) data sets. The results of the phantoms were satisfying, e.g., the noise was greatly suppressed, the percentages of the misclassified voxels, i.e., the segmentation error ratios, were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to the opinions of clinical vascular specialists, the vessels in various data sets were extracted with high accuracy since complete vessel trees were extracted while lesser non-vessels and background were falsely classified as vessel. In the comparison experiments, the proposed method showed its superiority in accuracy and robustness for extracting vascular structures from multi-modality angiographic images with complicated background noises. The experimental results demonstrated that our proposed method was available for various angiographic data. The main reason was that the constructed mixture probability model could unitarily classify vessel object from the multi-scale filtered data of various angiography images. The advantages of the proposed method lie in the following aspects: firstly, it can extract the vessels with poor angiography quality, since the multi-scale filtering algorithm can improve the vessel intensity in the circumstance such as uneven contrast media and bias field; secondly, it performed well for extracting the vessels in multi-modality angiographic images despite various signal-noises; and thirdly, it was implemented with better accuracy, and robustness than the traditional methods. Generally, these traits declare that the proposed method would have significant clinical application.
A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes
2011-01-01
Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes.For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than measures from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to such image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-Gaussian space, the method retrieves the image details from the differentiation of the original image and convolution image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of image details in each layer in the space. Next, the bias field-corrected MR image is retrieved after the Υ correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Xingye; Hu, Bin; Wei, Changdong
Lanthanum zirconate (La2Zr2O7) is a promising candidate material for thermal barrier coating (TBC) applications due to its low thermal conductivity and high-temperature phase stability. In this work, a novel image-based multi-scale simulation framework combining molecular dynamics (MD) and finite element (FE) calculations is proposed to study the thermal conductivity of La2Zr2O7 coatings. Since there is no experimental data of single crystal La2Zr2O7 thermal conductivity, a reverse non-equilibrium molecular dynamics (reverse NEMD) approach is first employed to compute the temperature-dependent thermal conductivity of single crystal La2Zr2O7. The single crystal data is then passed to a FE model which takes into accountmore » of realistic thermal barrier coating microstructures. The predicted thermal conductivities from the FE model are in good agreement with experimental validations using both flash laser technique and pulsed thermal imaging-multilayer analysis. The framework proposed in this work provides a powerful tool for future design of advanced coating systems. (C) 2016 Elsevier Ltd. All rights reserved.« less
A practical salient region feature based 3D multi-modality registration method for medical images
NASA Astrophysics Data System (ADS)
Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang
2006-03-01
We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences and their sub-pixel accurate refinement with outliers elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.
Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J; Bongiorno, Daniel
2013-01-01
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.
Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J.; Bongiorno, Daniel
2013-01-01
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales. PMID:24069206
Illumination Invariant Change Detection (iicd): from Earth to Mars
NASA Astrophysics Data System (ADS)
Wan, X.; Liu, J.; Qin, M.; Li, S. Y.
2018-04-01
Multi-temporal Earth Observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to the illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to the variation of solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching and ii) change quantization which combines change type identification, motion estimation and precise appearance change identification. Experiment using multi-temporal Landsat 7 ETM+ satellite images, Rapid eye satellite images and Mars HiRiSE images demonstrate that our frequency based image matching method can reach sub-pixel accuracy and thus the proposed IICD method can effectively detect and precisely segment large scale change such as landslide as well as small object change such as Mars rover, under daily and seasonal sunlight changes.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.
2006-01-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25 degrees x 0.25 degrees and 3-hourly). It is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at the present. The data set covers the latitude band 50 degrees N-S for the period 1998 to the delayed present. Early validation results are as follows: The TMPA provides reasonable performance at monthly scales, although it is shown to have precipitation rate dependent low bias due to lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.
Horror Image Recognition Based on Context-Aware Multi-Instance Learning.
Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng
2015-12-01
Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed by using proposed multi-scale joint decomposition framework (MJDF) and shearing filter (SF). The MJDF constructed with gradient minimization smoothing filter (GMSF) and Gaussian low-pass filter (GLF) is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer in each scale are weighted combined into a detail-enhanced layer. As directional filter is effective in capturing salient information, so SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. Visual saliency map-based fusion rule is designed for fusing low-pass layers, and the sum of standard deviation is used as activity level measurement for directional coefficients fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method with shift-invariance, directional selectivity, and detail-enhanced property is efficient in preserving and enhancing detail information of multimodality medical images. Graphical abstract The detailed implementation of the proposed medical image fusion algorithm.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps) the MSF process is versatile and straightforward to be implemented. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to only a single level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
Aouadi, Souha; Vasic, Ana; Paloor, Satheesh; Torfeh, Tarraf; McGarry, Maeve; Petric, Primoz; Riyas, Mohamed; Hammoud, Rabih; Al-Hammadi, Noora
2017-10-01
To create a synthetic CT (sCT) from conventional brain MRI using a patch-based method for MRI-only radiotherapy planning and verification. Conventional T1 and T2-weighted MRI and CT datasets from 13 patients who underwent brain radiotherapy were included in a retrospective study whereas 6 patients were tested prospectively. A new contribution to the Non-local Means Patch-Based Method (NMPBM) framework was done with the use of novel multi-scale and dual-contrast patches. Furthermore, the training dataset was improved by pre-selecting the closest database patients to the target patient for computation time/accuracy balance. sCT and derived DRRs were assessed visually and quantitatively. VMAT planning was performed on CT and sCT for hypothetical PTVs in homogeneous and heterogeneous regions. Dosimetric analysis was done by comparing Dose Volume Histogram (DVH) parameters of PTVs and organs at risk (OARs). Positional accuracy of MRI-only image-guided radiation therapy based on CBCT or kV images was evaluated. The retrospective (respectively prospective) evaluation of the proposed Multi-scale and Dual-contrast Patch-Based Method (MDPBM) gave a mean absolute error MAE=99.69±11.07HU (98.95±8.35HU), and a Dice in bones DI bone =83±0.03 (0.82±0.03). Good agreement with conventional planning techniques was obtained; the highest percentage of DVH metric deviations was 0.43% (0.53%) for PTVs and 0.59% (0.75%) for OARs. The accuracy of sCT/CBCT or DRR sCT /kV images registration parameters was <2mm and <2°. Improvements with MDPBM, compared to NMPBM, were significant. We presented a novel method for sCT generation from T1 and T2-weighted MRI potentially suitable for MRI-only external beam radiotherapy in brain sites. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Power, J F
2009-06-01
Light profile microscopy (LPM) is a direct method for the spectral depth imaging of thin film cross-sections on the micrometer scale. LPM uses a perpendicular viewing configuration that directly images a source beam propagated through a thin film. Images are formed in dark field contrast, which is highly sensitive to subtle interfacial structures that are invisible to reference methods. The independent focusing of illumination and imaging systems allows multiple registered optical sources to be hosted on a single platform. These features make LPM a powerful multi-contrast (MC) imaging technique, demonstrated in this work with six modes of imaging in a single instrument, based on (1) broad-band elastic scatter; (2) laser excited wideband luminescence; (3) coherent elastic scatter; (4) Raman scatter (three channels with RGB illumination); (5) wavelength resolved luminescence; and (6) spectral broadband scatter, resolved in immediate succession. MC-LPM integrates Raman images with a wider optical and morphological picture of the sample than prior art microprobes. Currently, MC-LPM resolves images at an effective spectral resolution better than 9 cm(-1), at a spatial resolution approaching 1 microm, with optics that operate in air at half the maximum numerical aperture of the prior art microprobes.
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. Segmentation technique with different scales is employed to exploit edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Different from common RGB image, hyperspectral image has multi bands to adjust the cluster center with more high precision. After segmentation, each super pixel is classified by recently-developed sparse representation-based classification (SRC), which assigns label for testing samples in one local patch by means of sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because single scale is not suitable for complicate distribution of medical hyperspectral imagery. Finally, classification results for different sizes of super pixel are fused by some fusion strategy, offering at least two benefits: (1) the final result is obviously superior to that of segmentation with single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, Y. L.
2017-02-01
The production of Gannan oranges is the largest in China, which occupied an important part in the world. The extraction of citrus orchard quickly and effectively has important significance for fruit pathogen defense, fruit production and industrial planning. The traditional spectra extraction method of citrus orchard based on pixel has a lower classification accuracy, difficult to avoid the “pepper phenomenon”. In the influence of noise, the phenomenon that different spectrums of objects have the same spectrum is graveness. Taking Xunwu County citrus fruit planting area of Ganzhou as the research object, aiming at the disadvantage of the lower accuracy of the traditional method based on image element classification method, a decision tree classification method based on object-oriented rule set is proposed. Firstly, multi-scale segmentation is performed on the GF-1 remote sensing image data of the study area. Subsequently the sample objects are selected for statistical analysis of spectral features and geometric features. Finally, combined with the concept of decision tree classification, a variety of empirical values of single band threshold, NDVI, band combination and object geometry characteristics are used hierarchically to execute the information extraction of the research area, and multi-scale segmentation and hierarchical decision tree classification is implemented. The classification results are verified with the confusion matrix, and the overall Kappa index is 87.91%.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis . The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting edge dataanalysis/ mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
NASA Astrophysics Data System (ADS)
Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.
2015-12-01
Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches for use of multi-scale and multi-source remote sensing data as well as ground based measurements for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in non-homogeneous savannah regions of Tanzania, this is an important concern and the traditional pixel based spectral signature approach often fails. Ortho-rectified, calibrated, pan sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor was used for this purpose. Multi-scale hierarchical segmentation was performed using multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super and sub objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches for classification were tested. The Kappa index agreement for both algorithms surpassed the 85%. The results demonstrate that using hierarchical OBIA can effectively and accurately discriminate classes at even LCCS-3 legend.
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach for multi-objects recognition for homeland security and defense based intelligent sensor networks. Unlike the conventional way of information analysis, data mining in such networks is typically characterized with high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constrains. Furthermore, since a typical military based network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve the goals. This paper aims to address these issues with a focus on multi-objects recognition. Instead of classifying a single object as in the traditional image classification problems, the proposed method can automatically learn multiple objectives simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects will come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge will be adaptively accumulated for objects recognition in the testing stage. We test the effectiveness of proposed method in different simulated military environments.
NASA Astrophysics Data System (ADS)
Seers, T. D.; Hodgetts, D.
2013-12-01
Seers, T. D. & Hodgetts, D. School of Earth, Atmospheric and Environmental Sciences, University of Manchester, UK. M13 9PL. The detection of topological change at the Earth's surface is of considerable scholarly interest, allowing the quantification of the rates of geomorphic processes whilst providing lucid insights into the underlying mechanisms driving landscape evolution. In this regard, the past decade has witnessed the ever increasing proliferation of studies employing multi-temporal topographic data in within the geosciences, bolstered by continuing technical advancements in the acquisition and processing of prerequisite datasets. Provided by workers within the field of Computer Vision, multiview stereo (MVS) dense surface reconstructions, primed by structure-from-motion (SfM) based camera pose estimation represents one such development. Providing a cost effective, operationally efficient data capture medium, the modest requirement of a consumer grade camera for data collection coupled with the minimal user intervention required during post-processing makes SfM-MVS an attractive alternative to terrestrial laser scanners for collecting multi-temporal topographic datasets. However, in similitude to terrestrial scanner derived data, the co-registration of spatially coincident or partially overlapping scans produced by SfM-MVS presents a major technical challenge, particularly in the case of semi non-rigid scenes produced during topographic change detection studies. Moreover, the arbitrary scaling resulting from SfM ambiguity requires that a scale matrix must be estimated during the transformation, introducing further complexity into its formulation. Here, we present a novel, fully unsupervised algorithm which utilises non-linearly weighted image features for the solving the similarity transform (scale, translation rotation) between partially overlapping scans produced by SfM-MVS image processing. With the only initialization condition being partial intersection between input image sets, our method has major advantages over conventional iterative least squares minimization based methods (e.g. Iterative Closest Point variants), acting only on rigid areas of target scenes, being capable of reliably estimating the scaling factor and requiring no incipient estimation of the transformation to initialize (i.e. manual rough alignment). Moreover, because the solution is closed form, convergence is considerably more expedient that most iterative methods. It is hoped that the availability of improved co-registration routines, such as the one presented here, will facilitate the routine collection of multi-temporal topographic datasets by a wider range of geoscience practitioners.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determinated by skin model, which is based on the color attributes hue and saturation in HSV color space, as well color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of human face. So it is possible for finding human faces in different scale with scanning image only once, and it leads to reduction the computation time of face detection greatly. In level 3, a human face mosaic image model, which is consistent with physical structure features of human face well, is applied to judge whether there are face detects in human face candidate regions. This model includes edge and gray rules. Experiment results show that the approach has high robustness and fast speed. It has wide application perspective at human-computer interactions and visual telephone etc.
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of the graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body parts by combining different levels of part context. Thirdly, a LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
A Novel Defect Inspection Method for Semiconductor Wafer Based on Magneto-Optic Imaging
NASA Astrophysics Data System (ADS)
Pan, Z.; Chen, L.; Li, W.; Zhang, G.; Wu, P.
2013-03-01
The defects of semiconductor wafer may be generated from the manufacturing processes. A novel defect inspection method of semiconductor wafer is presented in this paper. The method is based on magneto-optic imaging, which involves inducing eddy current into the wafer under test, and detecting the magnetic flux associated with eddy current distribution in the wafer by exploiting the Faraday rotation effect. The magneto-optic image being generated may contain some noises that degrade the overall image quality, therefore, in this paper, in order to remove the unwanted noise present in the magneto-optic image, the image enhancement approach using multi-scale wavelet is presented, and the image segmentation approach based on the integration of watershed algorithm and clustering strategy is given. The experimental results show that many types of defects in wafer such as hole and scratch etc. can be detected by the method proposed in this paper.
Multi-scale functional mapping of tidal marsh vegetation for restoration monitoring
NASA Astrophysics Data System (ADS)
Tuxen Bettman, Karin
2007-12-01
Nearly half of the world's natural wetlands have been destroyed or degraded, and in recent years, there have been significant endeavors to restore wetland habitat throughout the world. Detailed mapping of restoring wetlands can offer valuable information about changes in vegetation and geomorphology, which can inform the restoration process and ultimately help to improve chances of restoration success. I studied six tidal marshes in the San Francisco Estuary, CA, US, between 2003 and 2004 in order to develop techniques for mapping tidal marshes at multiple scales by incorporating specific restoration objectives for improved longer term monitoring. I explored a "pixel-based" remote sensing image analysis method for mapping vegetation in restored and natural tidal marshes, describing the benefits and limitations of this type of approach (Chapter 2). I also performed a multi-scale analysis of vegetation pattern metrics for a recently restored tidal marsh in order to target the metrics that are consistent across scales and will be robust measures of marsh vegetation change (Chapter 3). Finally, I performed an "object-based" image analysis using the same remotely sensed imagery, which maps vegetation type and specific wetland functions at multiple scales (Chapter 4). The combined results of my work highlight important trends and management implications for monitoring wetland restoration using remote sensing, and will better enable restoration ecologists to use remote sensing for tidal marsh monitoring. Several findings important for tidal marsh restoration monitoring were made. Overall results showed that pixel-based methods are effective at quantifying landscape changes in composition and diversity in recently restored marshes, but are limited in their use for quantifying smaller, more fine-scale changes. While pattern metrics can highlight small but important changes in vegetation composition and configuration across years, scientists should exercise caution when using metrics in their studies or to validate restoration management decisions, and multi-scale analyses should be performed before metrics are used in restoration science for important management decisions. Lastly, restoration objectives, ecosystem function, and scale can each be integrated into monitoring techniques using remote sensing for improved restoration monitoring.
Fast Detection of Airports on Remote Sensing Images with Single Shot MultiBox Detector
NASA Astrophysics Data System (ADS)
Xia, Fei; Li, HuiZhou
2018-01-01
This paper introduces a method for fast airport detection on remote sensing images (RSIs) using Single Shot MultiBox Detector (SSD). To our knowledge, this could be the first study which introduces an end-to-end detection model into airport detection on RSIs. Based on the common low-level features between natural images and RSIs, a convolution neural network trained on large amounts of natural images was transferred to tackle the airport detection problem with limited annotated data. To deal with the specific characteristics of RSIs, some related parameters in the SSD, such as the scales and layers, were modified for more accurate and rapider detection. The experiments show that the proposed method could achieve 83.5% Average Recall at 8 FPS on RSIs with the size of 1024*1024. In contrast to Faster R-CNN, an improvement on AP and speed could be obtained.
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns and which is still a difficult problem in automated inspection and sorting of fruits. In this research, we proposed the multi-scale energy distribution (MSED) for object shape description, the relationship between objects shape and its boundary energy distribution at multi-scale was explored for shape extraction. MSED offers not only the mainly energy which represent primary shape information at the lower scales, but also subordinate energy which represent local shape information at higher differential scales. Thus, it provides a natural tool for multi resolution representation and can be used as a feature for shape classification. We addressed the three main processing steps in the MSED-based shape classification. They are namely, 1) image preprocessing and citrus shape extraction, 2) shape resample and shape feature normalization, 3) energy decomposition by wavelet and classification by BP neural network. Hereinto, shape resample is resample 256 boundary pixel from a curve which is approximated original boundary by using cubic spline in order to get uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcame the inconvenience of traditional methods in order to have a property of rotation invariants. The experiment result is relatively well normal citrus and serious abnormality, with a classification rate superior to 91.2%. The global correct classification rate is 89.77%, and our method is more effective than traditional method. The global result can meet the request of fruit grading.
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perceptual algorithm based on color consistency. It has a good performance in color enhancement. But in some cases, the traditional Retinex algorithms, both Single-Scale Retinex(SSR) and Multi-Scale Retinex(MSR) in RGB color space, do not work well and will cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Compared to other Retinex algorithms, we implement Retinex algorithms in HSI(Hue, Saturation, Intensity) color space, and use a parameter αto improve quality of the image. Moreover, the algorithms presented in this paper has a good performance in image defogging. Contrasted with traditional Retinex algorithms, we use intensity channel to obtain reflection information of an image. The intensity channel is processed using a Gaussian center-surround image filter to get light information, which should be removed from intensity channel. After that, we subtract the light information from intensity channel to obtain the reflection image, which only includes the attribute of the objects in image. Using the reflection image and a parameter α, which is an arbitrary scale factor set manually, we improve the intensity channel, and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides a better performance in color deviation problem and image defogging, a visible improvement in the image quality for human contrast perception is also observed.
Vehicle license plate recognition based on geometry restraints and multi-feature decision
NASA Astrophysics Data System (ADS)
Wu, Jianwei; Wang, Zongyue
2005-10-01
Vehicle license plate (VLP) recognition is of great importance to many traffic applications. Though researchers have paid much attention to VLP recognition there has not been a fully operational VLP recognition system yet for many reasons. This paper discusses a valid and practical method for vehicle license plate recognition based on geometry restraints and multi-feature decision including statistical and structural features. In general, the VLP recognition includes the following steps: the location of VLP, character segmentation, and character recognition. This paper discusses the three steps in detail. The characters of VLP are always declining caused by many factors, which makes it more difficult to recognize the characters of VLP, therefore geometry restraints such as the general ratio of length and width, the adjacent edges being perpendicular are used for incline correction. Image Moment has been proved to be invariant to translation, rotation and scaling therefore image moment is used as one feature for character recognition. Stroke is the basic element for writing and hence taking it as a feature is helpful to character recognition. Finally we take the image moment, the strokes and the numbers of each stroke for each character image and some other structural features and statistical features as the multi-feature to match each character image with sample character images so that each character image can be recognized by BP neural net. The proposed method combines statistical and structural features for VLP recognition, and the result shows its validity and efficiency.
NASA Astrophysics Data System (ADS)
Dafflon, B.; Leger, E.; Peterson, J.; Falco, N.; Wainwright, H. M.; Wu, Y.; Tran, A. P.; Brodie, E.; Williams, K. H.; Versteeg, R.; Hubbard, S. S.
2017-12-01
Improving understanding and modelling of terrestrial systems requires advances in measuring and quantifying interactions among subsurface, land surface and vegetation processes over relevant spatiotemporal scales. Such advances are important to quantify natural and managed ecosystem behaviors, as well as to predict how watershed systems respond to increasingly frequent hydrological perturbations, such as droughts, floods and early snowmelt. Our study focuses on the joint use of UAV-based multi-spectral aerial imaging, ground-based geophysical tomographic monitoring (incl., electrical and electromagnetic imaging) and point-scale sensing (soil moisture sensors and soil sampling) to quantify interactions between above and below ground compartments of the East River Watershed in the Upper Colorado River Basin. We evaluate linkages between physical properties (incl. soil composition, soil electrical conductivity, soil water content), metrics extracted from digital surface and terrain elevation models (incl., slope, wetness index) and vegetation properties (incl., greenness, plant type) in a 500 x 500 m hillslope-floodplain subsystem of the watershed. Data integration and analysis is supported by numerical approaches that simulate the control of soil and geomorphic characteristic on hydrological processes. Results provide an unprecedented window into critical zone interactions, revealing significant below- and above-ground co-dynamics. Baseline geophysical datasets provide lithological structure along the hillslope, which includes a surface soil horizon, underlain by a saprolite layer and the fractured Mancos shale. Time-lapse geophysical data show very different moisture dynamics in various compartments and locations during the winter and growing season. Integration with aerial imaging reveals a significant linkage between plant growth and the subsurface wetness, soil characteristics and the topographic gradient. The obtained information about the organization and connectivity of the landscape is being transferred to larger regions using aerial imaging and will be used to constrain multi-scale, multi-physics hydro-biogeochemical simulations of the East River watershed response to hydrological perturbations.
NASA Astrophysics Data System (ADS)
Cook, Jason R.; Dumani, Diego S.; Kubelick, Kelsey P.; Luci, Jeffrey; Emelianov, Stanislav Y.
2017-03-01
Imaging modalities utilize contrast agents to improve morphological visualization and to assess functional and molecular/cellular information. Here we present a new type of nanometer scale multi-functional particle that can be used for multi-modal imaging and therapeutic applications. Specifically, we synthesized monodisperse 20 nm Prussian Blue Nanocubes (PBNCs) with desired optical absorption in the near-infrared region and superparamagnetic properties. PBNCs showed excellent contrast in photoacoustic (700 nm wavelength) and MR (3T) imaging. Furthermore, photostability was assessed by exposing the PBNCs to nearly 1,000 laser pulses (5 ns pulse width) with up to 30 mJ/cm2 laser fluences. The PBNCs exhibited insignificant changes in photoacoustic signal, demonstrating enhanced robustness compared to the commonly used gold nanorods (substantial photodegradation with fluences greater than 5 mJ/cm2). Furthermore, the PBNCs exhibited superparamagnetism with a magnetic saturation of 105 emu/g, a 5x improvement over superparamagnetic iron-oxide (SPIO) nanoparticles. PBNCs exhibited enhanced T2 contrast measured using 3T clinical MRI. Because of the excellent optical absorption and magnetism, PBNCs have potential uses in other imaging modalities including optical tomography, microscopy, magneto-motive OCT/ultrasound, etc. In addition to multi-modal imaging, the PBNCs are multi-functional and, for example, can be used to enhance magnetic delivery and as therapeutic agents. Our initial studies show that stem cells can be labeled with PBNCs to perform image-guided magnetic delivery. Overall, PBNCs can act as imaging/therapeutic agents in diverse applications including cancer, cardiovascular disease, ophthalmology, and tissue engineering. Furthermore, PBNCs are based on FDA approved Prussian Blue thus potentially easing clinical translation of PBNCs.
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography
NASA Astrophysics Data System (ADS)
Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.
2017-12-01
Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data is used to develop a framework to upscale and benchmark pore-scale models.
Micro-computed tomography pore-scale study of flow in porous media: Effect of voxel resolution
NASA Astrophysics Data System (ADS)
Shah, S. M.; Gray, F.; Crawshaw, J. P.; Boek, E. S.
2016-09-01
A fundamental understanding of flow in porous media at the pore-scale is necessary to be able to upscale average displacement processes from core to reservoir scale. The study of fluid flow in porous media at the pore-scale consists of two key procedures: Imaging - reconstruction of three-dimensional (3D) pore space images; and modelling such as with single and two-phase flow simulations with Lattice-Boltzmann (LB) or Pore-Network (PN) Modelling. Here we analyse pore-scale results to predict petrophysical properties such as porosity, single-phase permeability and multi-phase properties at different length scales. The fundamental issue is to understand the image resolution dependency of transport properties, in order to up-scale the flow physics from pore to core scale. In this work, we use a high resolution micro-computed tomography (micro-CT) scanner to image and reconstruct three dimensional pore-scale images of five sandstones (Bentheimer, Berea, Clashach, Doddington and Stainton) and five complex carbonates (Ketton, Estaillades, Middle Eastern sample 3, Middle Eastern sample 5 and Indiana Limestone 1) at four different voxel resolutions (4.4 μm, 6.2 μm, 8.3 μm and 10.2 μm), scanning the same physical field of view. Implementing three phase segmentation (macro-pore phase, intermediate phase and grain phase) on pore-scale images helps to understand the importance of connected macro-porosity in the fluid flow for the samples studied. We then compute the petrophysical properties for all the samples using PN and LB simulations in order to study the influence of voxel resolution on petrophysical properties. We then introduce a numerical coarsening scheme which is used to coarsen a high voxel resolution image (4.4 μm) to lower resolutions (6.2 μm, 8.3 μm and 10.2 μm) and study the impact of coarsening data on macroscopic and multi-phase properties. Numerical coarsening of high resolution data is found to be superior to using a lower resolution scan because it avoids the problem of partial volume effects and reduces the scaling effect by preserving the pore-space properties influencing the transport properties. This is evidently compared in this study by predicting several pore network properties such as number of pores and throats, average pore and throat radius and coordination number for both scan based analysis and numerical coarsened data.
Spatial vision processes: From the optical image to the symbolic structures of contour information
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1988-01-01
The significance of machine and natural vision is discussed together with the need for a general approach to image acquisition and processing aimed at recognition. An exploratory scheme is proposed which encompasses the definition of spatial primitives, intrinsic image properties and sampling, 2-D edge detection at the smallest scale, the construction of spatial primitives from edges, and the isolation of contour information from textural information. Concepts drawn from or suggested by natural vision at both perceptual and physiological levels are relied upon heavily to guide the development of the overall scheme. The scheme is intended to provide a larger context in which to place the emerging technology of detector array focal-plane processors. The approach differs from many recent efforts in edge detection and image coding by emphasizing smallest scale edge detection as a foundation for multi-scale symbolic processing while diminishing somewhat the importance of image convolutions with multi-scale edge operators. Cursory treatments of information theory illustrate that the direct application of this theory to structural information in images could not be realized.
A multi-layer MRI description of Parkinson's disease
NASA Astrophysics Data System (ADS)
La Rocca, M.; Amoroso, N.; Lella, E.; Bellotti, R.; Tangaro, S.
2017-09-01
Magnetic resonance imaging (MRI) along with complex network is currently one of the most widely adopted techniques for detection of structural changes in neurological diseases, such as Parkinson's Disease (PD). In this paper, we present a digital image processing study, within the multi-layer network framework, combining more classifiers to evaluate the informative power of the MRI features, for the discrimination of normal controls (NC) and PD subjects. We define a network for each MRI scan; the nodes are the sub-volumes (patches) the images are divided into and the links are defined using the Pearson's pairwise correlation between patches. We obtain a multi-layer network whose important network features, obtained with different feature selection methods, are used to feed a supervised multi-level random forest classifier which exploits this base of knowledge for accurate classification. Method evaluation has been carried out using T1 MRI scans of 354 individuals, including 177 PD subjects and 177 NC from the Parkinson's Progression Markers Initiative (PPMI) database. The experimental results demonstrate that the features obtained from multiplex networks are able to accurately describe PD patterns. Besides, also if a privileged scale for studying PD disease exists, exploring the informative content of more scales leads to a significant improvement of the performances in the discrimination between disease and healthy subjects. In particular, this method gives a comprehensive overview of brain regions statistically affected by the disease, an additional value to the presented study.
NASA Astrophysics Data System (ADS)
Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.
2012-03-01
Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structuring flaws in metallic and composite components. Those flaws are modeled as a smooth contour centered at peaks of stored thermal energy, termed Regions of Interest (ROI). Dedicated methodologies must detect the presence of those ROIs. In this paper, we present a methodology for ROI extraction in INDT tasks. The methodology deals with the difficulties due to the non-uniform heating. The non-uniform heating affects low spatial/frequencies and hinders the detection of relevant points in the image. In this paper, a methodology for ROI extraction in INDT using multi-resolution analysis is proposed, which is robust to ROI low contrast and non-uniform heating. The former methodology includes local correlation, Gaussian scale analysis and local edge detection. In this methodology local correlation between image and Gaussian window provides interest points related to ROIs. We use a Gaussian window because thermal behavior is well modeled by Gaussian smooth contours. Also, the Gaussian scale is used to analyze details in the image using multi-resolution analysis avoiding low contrast, non-uniform heating and selection of the Gaussian window size. Finally, local edge detection is used to provide a good estimation of the boundaries in the ROI. Thus, we provide a methodology for ROI extraction based on multi-resolution analysis that is better or equal compared with the other dedicate algorithms proposed in the state of art.
XIAO, Xiangming; DONG, Jinwei; QIN, Yuanwei; WANG, Zongming
2016-01-01
Information of paddy rice distribution is essential for food production and methane emission calculation. Phenology-based algorithms have been utilized in the mapping of paddy rice fields by identifying the unique flooding and seedling transplanting phases using multi-temporal moderate resolution (500 m to 1 km) images. In this study, we developed simple algorithms to identify paddy rice at a fine resolution at the regional scale using multi-temporal Landsat imagery. Sixteen Landsat images from 2010–2012 were used to generate the 30 m paddy rice map in the Sanjiang Plain, northeast China—one of the major paddy rice cultivation regions in China. Three vegetation indices, Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Land Surface Water Index (LSWI), were used to identify rice fields during the flooding/transplanting and ripening phases. The user and producer accuracies of paddy rice on the resultant Landsat-based paddy rice map were 90% and 94%, respectively. The Landsat-based paddy rice map was an improvement over the paddy rice layer on the National Land Cover Dataset, which was generated through visual interpretation and digitalization on the fine-resolution images. The agricultural census data substantially underreported paddy rice area, raising serious concern about its use for studies on food security. PMID:27695637
A graph-based approach for the retrieval of multi-modality medical images.
Kumar, Ashnil; Kim, Jinman; Wen, Lingfeng; Fulham, Michael; Feng, Dagan
2014-02-01
In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale- invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects. Copyright © 2013 Elsevier B.V. All rights reserved.
Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection
Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei
2012-01-01
Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution. PMID:22213647
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Consider that most multi-focus image fusion algorithms do not particularly aim at large numbers of source images and traditional DWT-based fusion approach has serious problems in dealing with lots of multi-focus images, causing color distortion and ringing effect. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depth of fields. We compare SWT-based approach with DWT-based approach on various occasions. And the results demonstrate that the proposed method performs much better both visually and quantitatively.
Multi-exposure high dynamic range image synthesis with camera shake correction
NASA Astrophysics Data System (ADS)
Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Machine vision plays an important part in industrial online inspection. Owing to the nonuniform illuminance conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image such as crack inspection, the algorithm complexity and computing time increase. Multiexposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake will result in ghost effect, which blurs the synthesis image to some extent. However, existed exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit the application. At present, widely used registration based on Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high quality HDR image without ghost effect, we come up with an efficient Low Dynamic Range (LDR) images capturing approach and propose a registration method based on ORiented Brief (ORB) and histogram equalization which can eliminate the illumination differences between the LDR images. The fusion is performed after alignment. The experiment results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Comparing with other exposure fusion methods, our method is more efficient and can produce HDR images without ghost effect by registering and fusing four multi-exposure images.
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing objective quality assessment of color-fused image is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false color fusion image. The perceived edge metric (PEM) is defined based on visual perception model and color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established associating multi-scale contrast and varying contrast sensitivity filter (CSF) with color components. The linear combination of the standard deviation and mean value over the fused image construct the image colorfulness metric (ICM). The color comfort metric (CCM) is designed by the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics have a good agreement with subjective perception.
Pollen Image Recognition Based on DGDB-LBP Descriptor
NASA Astrophysics Data System (ADS)
Han, L. P.; Xie, Y. H.
2018-01-01
In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on the pixel blocks in the dominant gradient direction. Differing from traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than the single pixel value or the average of pixel blocks, in doing so, it reduces the influence of noise on pollen images and eliminates redundant and non-informative features. In order to fully describe the texture features of pollen images and analyze it under multi-scales, we propose a new sampling strategy, which uses three types of operators to extract the radial, angular and multiple texture features under different scales. Considering that the pollen images have some degree of rotation under the microscope, we propose the adaptive encoding direction, which is determined by the texture distribution of local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to other pollen recognition methods in recent years.
A new method of Quickbird own image fusion
NASA Astrophysics Data System (ADS)
Han, Ying; Jiang, Hong; Zhang, Xiuying
2009-10-01
With the rapid development of remote sensing technology, the means of accessing to remote sensing data become increasingly abundant, thus the same area can form a large number of multi-temporal, different resolution image sequence. At present, the fusion methods are mainly: HPF, IHS transform method, PCA method, Brovey, Mallat algorithm and wavelet transform and so on. There exists a serious distortion of the spectrums in the IHS transform, Mallat algorithm omits low-frequency information of the high spatial resolution images, the integration results of which has obvious blocking effects. Wavelet multi-scale decomposition for different sizes, the directions, details and the edges can have achieved very good results, but different fusion rules and algorithms can achieve different effects. This article takes the Quickbird own image fusion as an example, basing on wavelet transform and HVS, wavelet transform and IHS integration. The result shows that the former better. This paper introduces the correlation coefficient, the relative average spectral error index and usual index to evaluate the quality of image.
Factor structure of the Body Appreciation Scale among Malaysian women.
Swami, Viren; Chamorro-Premuzic, Tomas
2008-12-01
The present study examined the factor structure of a Malay version of the Body Appreciation Scale (BAS), a recently developed scale for the assessment of positive body image that has been shown to have a unidimensional structure in Western settings. Results of exploratory and confirmatory factor analyses based on data from community sample of 591 women in Kuala Lumpur, Malaysia, failed to support a unidimensional structure for the Malay BAS. Results of a confirmatory factor analysis suggested two stable factors, which were labelled 'General Body Appreciation' and 'Body Image Investment'. Multi-group analysis showed that the two-factor structure was invariant for both Malaysian Malay and Chinese women, and that there were no significant ethnic differences on either factor. Results also showed that General Body Appreciation was significant negatively correlated with participants' body mass index. These results are discussed in relation to possible cross-cultural differences in positive body image.
Renosh, P R; Schmitt, Francois G; Loisel, Hubert
2015-01-01
Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided from visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method to understand the multi-scaling properties of satellite products such as the Chlorophyll-a (Chl-a), and the Sea Surface Temperature (SST), rarely studied. The specific objectives of this study are to show how the small scale heterogeneities of satellite images can be characterised using tools borrowed from the fields of turbulence. For that purpose, we show how the structure function, which is classically used in the frame of scaling time series analysis, can be used also in 2D. The main advantage of this method is that it can be applied to process images which have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. We show, using a fractional Brownian simulation in 2D, that the structure function (SF) can be used with randomly sampled couple of points, and verify that 1 million of couple of points provides enough statistics.
Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo
2018-05-03
Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with sparsity constraint. However, images are compromised at high acceleration factor if images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and Graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as a ℓ 2, 1 norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results in synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single image reconstruction with joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiment demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high quality image reconstruction method, it is possible to achieve the high acceleration factors by exploring the complementary information provided by multi-contrast MRI.
Image Description with Local Patterns: An Application to Face Recognition
NASA Astrophysics Data System (ADS)
Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro
In this paper, we propose a novel approach for presenting the local features of digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider the extentions and simplifications of the proposed approach into facial images analysis. The proposed approach consists of three steps. At the first step, the gray values of pixels in image are represented as a vector giving the local neighborhood intensity distrubutions of the pixels. Then, multi-scans are applied to capture different spatial information on the image with advantage of less computation than other traditional ways, such as Local Binary Patterns (LBP). The second step is encoding the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations besides preserving the appearance of images embedded in the original gray scale. At the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easy to extend. Next, we further formulate boosted algorithm to extract the most discriminant local features. The evaluated results demonstrate that the proposed approach outperforms the conventional approaches in terms of accuracy in applications of face recognition, gender estimation and facial expression.
The Transition Region Explorer: Observing the Multi-Scale Dynamics of Geospace
NASA Astrophysics Data System (ADS)
Donovan, E.
2015-12-01
Meso- and global-scale IT remote sensing is accomplished via satellite imagers and ground-based instruments. On the ground, the approach is arrays providing extensive as possible coverage (the "net") and powerful observatories that drill deep to provide detailed information about small-scale processes (the "drill"). Always, there is a trade between cost, spatial resolution, coverage (extent), number of parameters, and more, such that in general the larger the network the sparser the coverage. Where are we now? There are important gaps. With THEMIS-ASI, we see processes that quickly evolve beyond the field of view of one observatory, but involve space/time scales not captured by existing meso- and large-scale arrays. Many forefront questions require observations at heretofore unexplored space and time scales, and comprehensive inter-hemispheric conjugate observations than are presently available. To address this, a new ground-based observing initiative is being developed in Canada. Called TREx, for Transition Region Explorer, this new facility will incorporate dedicated blueline, redline, and Near-Infrared All-Sky Imagers, together with an unprecedented network of ten imaging riometers, with a combined field of view spanning more than three hours of magnetic local time and from equatorward to poleward of typical auroral latitudes (spanning the ionospheric footprint of the "nightside transition region" that separates the highly stretched tail and the inner magnetosphere). The TREx field-of-view is covered by HF radars, and contains a dense network of magnetometers and VLF receivers, as well as other geospace and upper atmospheric remote sensors. Taken together, TREx and these co-located instruments represent a quantum leap forward in terms of imaging, in multiple parameters (precipitation, ionization, convection, and currents), ionospheric dynamics in the above-mentioned scale gap. This represents an exciting new opportunity for studying geospace at the system level, especially for using the aurora to remote sense magnetospheric plasma physics and dynamics, and comes with a set of Big Data challenges that are going to be exciting. One such challenge is the development of a fundamentally new type of data product, namely time series of multi-parameter, geospatially referenced 'data cubes'.
James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun
2014-07-01
The structural and molecular heterogeneities of biological tissues demand the interrogation of the samples with multiple energy sources and provide visualization capabilities at varying spatial resolution and depth scales for obtaining complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound and fluorescence imaging at multiple resolution scales from the tissue surface and depth is proposed in this paper. The system comprises of two distinct forms of hardware level integration so as to have an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high resolution fluorescence signatures from the surface, optical absorption and acoustic heterogeneities along the depth (>2cm) of the tissue at multi-scale resolution (<1µm to <0.5mm).
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.
Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen
2018-01-19
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either purposive or random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden on a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
NASA Technical Reports Server (NTRS)
Cornell, Stephen R.; Leser, William P.; Hochhalter, Jacob D.; Newman, John A.; Hartl, Darren J.
2014-01-01
A method for detecting fatigue cracks has been explored at NASA Langley Research Center. Microscopic NiTi shape memory alloy (sensory) particles were embedded in a 7050 aluminum alloy matrix to detect the presence of fatigue cracks. Cracks exhibit an elevated stress field near their tip inducing a martensitic phase transformation in nearby sensory particles. Detectable levels of acoustic energy are emitted upon particle phase transformation such that the existence and location of fatigue cracks can be detected. To test this concept, a fatigue crack was grown in a mode-I single-edge notch fatigue crack growth specimen containing sensory particles. As the crack approached the sensory particles, measurements of particle strain, matrix-particle debonding, and phase transformation behavior of the sensory particles were performed. Full-field deformation measurements were performed using a novel multi-scale optical 3D digital image correlation (DIC) system. This information will be used in a finite element-based study to determine optimal sensory material behavior and density.
Analysis of Decadal Vegetation Dynamics Using Multi-Scale Satellite Images
NASA Astrophysics Data System (ADS)
Chiang, Y.; Chen, K.
2013-12-01
This study aims at quantifying vegetation fractional cover (VFC) by incorporating multi-resolution satellite images, including Formosat-2(RSI), SPOT(HRV/HRG), Landsat (MSS/TM) and Terra/Aqua(MODIS), to investigate long-term and seasonal vegetation dynamics in Taiwan. We used 40-year NDVI records for derivation of VFC, with field campaigns routinely conducted to calibrate the critical NDVI threshold. Given different sensor capabilities in terms of their spatial and spectral properties, translation and infusion of NDVIs was used to assure NDVI coherence and to determine the fraction of vegetation cover at different spatio-temporal scales. Based on the proposed method, a bimodal sequence of intra-annual VFC which corresponds to the dual-cropping agriculture pattern was observed. Compared to seasonal VFC variation (78~90%), decadal VFC reveals moderate oscillations (81~86%), which were strongly linked with landuse changes and several major disturbances. This time-series mapping of VFC can be used to examine vegetation dynamics and its response associated with short-term and long-term anthropogenic/natural events.
OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale
NASA Astrophysics Data System (ADS)
Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason
2015-03-01
The Open Microscopy Environment (OME) has built and released Bio-Formats, a Java-based proprietary file format conversion tool and OMERO, an enterprise data management platform under open source licenses. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio- Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful for use in imaging applications like digital pathology, high content screening and light sheet microscopy that create routinely large datasets that must be managed and analyzed.
NASA Astrophysics Data System (ADS)
LIU, Yiping; XU, Qing; ZhANG, Heng; LV, Liang; LU, Wanjie; WANG, Dandi
2016-11-01
The purpose of this paper is to solve the problems of the traditional single system for interpretation and draughting such as inconsistent standards, single function, dependence on plug-ins, closed system and low integration level. On the basis of the comprehensive analysis of the target elements composition, map representation and similar system features, a 3D interpretation and draughting integrated service platform for multi-source, multi-scale and multi-resolution geospatial objects is established based on HTML5 and WebGL, which not only integrates object recognition, access, retrieval, three-dimensional display and test evaluation but also achieves collection, transfer, storage, refreshing and maintenance of data about Geospatial Objects and shows value in certain prospects and potential for growth.
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
A detail-preserved and luminance-consistent multi-exposure image fusion algorithm
NASA Astrophysics Data System (ADS)
Wang, Guanquan; Zhou, Yue
2018-04-01
When irradiance across a scene varies greatly, we can hardly get an image of the scene without over- or underexposure area, because of the constraints of cameras. Multi-exposure image fusion (MEF) is an effective method to deal with this problem by fusing multi-exposure images of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by contribution adjustment using the luminance information between blocks; detail-preserved smoothing filter can stitch blocks smoothly without losing details. Experiment results show that the proposed method performs well in preserving luminance consistency and details.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
Singh, U R; Enayat, M; White, S C; Wahl, P
2013-01-01
We report on the set-up and performance of a dilution-refrigerator based spectroscopic imaging scanning tunneling microscope. It operates at temperatures below 10 mK and in magnetic fields up to 14T. The system allows for sample transfer and in situ cleavage. We present first-results demonstrating atomic resolution and the multi-gap structure of the superconducting gap of NbSe(2) at base temperature. To determine the energy resolution of our system we have measured a normal metal/vacuum/superconductor tunneling junction consisting of an aluminum tip on a gold sample. Our system allows for continuous measurements at base temperature on time scales of up to ≈170 h.
Inferring multi-scale neural mechanisms with brain network modelling
Schirner, Michael; McIntosh, Anthony Randal; Jirsa, Viktor; Deco, Gustavo
2018-01-01
The neurophysiological processes underlying non-invasive brain activity measurements are incompletely understood. Here, we developed a connectome-based brain network model that integrates individual structural and functional data with neural population dynamics to support multi-scale neurophysiological inference. Simulated populations were linked by structural connectivity and, as a novelty, driven by electroencephalography (EEG) source activity. Simulations not only predicted subjects' individual resting-state functional magnetic resonance imaging (fMRI) time series and spatial network topologies over 20 minutes of activity, but more importantly, they also revealed precise neurophysiological mechanisms that underlie and link six empirical observations from different scales and modalities: (1) resting-state fMRI oscillations, (2) functional connectivity networks, (3) excitation-inhibition balance, (4, 5) inverse relationships between α-rhythms, spike-firing and fMRI on short and long time scales, and (6) fMRI power-law scaling. These findings underscore the potential of this new modelling framework for general inference and integration of neurophysiological knowledge to complement empirical studies. PMID:29308767
Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine
NASA Astrophysics Data System (ADS)
White, John
Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension has numerous different applications, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, this method shows promise in reducing error with the use of simulation studies. A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-Rays. Applications to real astronomical images are given, and these consist of X-ray images from the Chandra X-ray observatory satellite. Diagnostic medicine uses many types of imaging such as magnetic resonance imaging and computed tomography that can also benefit from smoothing techniques such as the one developed here. Reducing the amount of radiation a patient takes will make images more noisy, but this can be mitigated through the use of image smoothing techniques. Both types of images represent the potential real world use for these methods.
Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie
2017-10-01
By synergistically using the object-based image analysis (OBIA) and the classification and regression tree (CART) methods, the distribution information, the indexes (including diameter at breast, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating the multi-scale ima ge segmentation in OBIA technique and CART, which connected the image objects at various scales, with a pretty good producer's accuracy of 89.1%. The investigation of indexes estimated by regression tree model that was constructed based on the features extracted from the image objects reached normal or better accuracy, in which the crown closure model archived the best estimating accuracy of 67.9%. The estimating accuracy of diameter at breast and tree height was relatively low, which was consistent with conclusion that estimating diameter at breast and tree height using optical remote sensing could not achieve satisfactory results. Estimation of AGC reached relatively high accuracy, and accuracy of the region of high value achieved above 80%.
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.
Zhong, Jiandan; Lei, Tao; Yao, Guangle
2017-11-24
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied in aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantage of the deep and shallow convolutional layer, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks
Zhong, Jiandan; Lei, Tao; Yao, Guangle
2017-01-01
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied in aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantage of the deep and shallow convolutional layer, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed. PMID:29186756
Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility
NASA Astrophysics Data System (ADS)
Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.
2017-12-01
The application of image processing and photogrammetric techniques to dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purpose: students are helped understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit in dynamic and interactive mode a completed experiment at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the experiment running, DIC analysis output quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras, one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS could recover the real-scale-depth-information from the captured image sequences without using auxiliary objects whose geometry or motion should be precisely known. Secondly, the removability of the system could guarantee appropriate baselines to supply more robust point correspondences. Additionally, using one camera could avoid the drawback which exists in multi-camera networks, that the variability of a cameras’ parameters and performance could significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular for more than one decade in Earth Sciences, because these methods allow to generate random fields reproducing highly complex spatial features given in a conceptual model, the training image, while classical geostatistics techniques based on bi-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, the direct sampling consists in borrowing patterns from the training image to populate a simulation grid. This latter is sequentially filled by visiting each of these nodes in a random order, and then the patterns, whose the number of nodes is fixed, become narrower during the simulation process, as the simulation grid is more densely informed. Hence, large scale structures are caught in the beginning of the simulation and small scale ones in the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image, and then loose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image in a set of images at multiple resolutions. Applying a Gaussian kernel onto the training image (convolution) results in a lower resolution image, and iterating this process, a pyramid of images depicting fewer details at each level is built, as it can be done in image processing for example to lighten the space storage of a photography. The direct sampling is then employed to simulate the lowest resolution level, and then to simulate each level, up to the finest resolution, conditioned to the level one rank coarser. This scheme helps reproduce the spatial structures at any scale of the training image and then generate more realistic models. We illustrate the method with aerial photographies (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well-suited for MPS simulation techniques.
Super-resolution imaging of multiple cells by optimized flat-field epi-illumination
NASA Astrophysics Data System (ADS)
Douglass, Kyle M.; Sieben, Christian; Archetti, Anna; Lambert, Ambroise; Manley, Suliana
2016-11-01
Biological processes are inherently multi-scale, and supramolecular complexes at the nanoscale determine changes at the cellular scale and beyond. Single-molecule localization microscopy (SMLM) techniques have been established as important tools for studying cellular features with resolutions of the order of around 10 nm. However, in their current form these modalities are limited by a highly constrained field of view (FOV) and field-dependent image resolution. Here, we develop a low-cost microlens array (MLA)-based epi-illumination system—flat illumination for field-independent imaging (FIFI)—that can efficiently and homogeneously perform simultaneous imaging of multiple cells with nanoscale resolution. The optical principle of FIFI, which is an extension of the Köhler integrator, is further elucidated and modelled with a new, free simulation package. We demonstrate FIFI's capabilities by imaging multiple COS-7 and bacteria cells in 100 × 100 μm2 SMLM images—more than quadrupling the size of a typical FOV and producing near-gigapixel-sized images of uniformly high quality.
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model based medical image segmentation. It provides pose estimation of objects and information about "where" roughly the objects are in the image and distinguishing them from other object-like entities. In,1 we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model as even small differences in orientation parameters can lead to inappropriate recognition. Motivated from the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate orientation of the anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than other Euclidean and non-Euclidean metrics.
NASA Astrophysics Data System (ADS)
Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao
2018-04-01
Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fracture, meso-scale fracture, macro-scale fracture and cave. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuth anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant algorithms are used to predict the distribution of macro-scale fractures. We also constrain the results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach can well explain the spatial distribution of carbonate reservoirs. It can solve the problem of non-uniqueness and improve fracture prediction accuracy.
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool, for automated registration, bias correction, and tissue classification, has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving the an iterative optimization framework between bias-correction, registration, and tissue classification inspired from previous work. The primary contributions are robustness improvements from incorporation of following four elements: (1) utilize multi-modal and repeated scans, (2) incorporate high-deformable registration, (3) use extended set of tissue definitions, and (4) use of multi-modal aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both simulated brain data set (BrainWeb) and by applying to highly-heterogeneous data from a 32 site imaging study with quality assessments through the expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation with a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and the tissue classification, that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multi-sites. The tool was evaluated by using both simulated and simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
Multi-image encryption based on synchronization of chaotic lasers and iris authentication
NASA Astrophysics Data System (ADS)
Banerjee, Santo; Mukhopadhyay, Sumona; Rondoni, Lamberto
2012-07-01
A new technique of transmitting encrypted combinations of gray scaled and chromatic images using chaotic lasers derived from Maxwell-Bloch's equations has been proposed. This novel scheme utilizes the general method of solution of a set of linear equations to transmit similar sized heterogeneous images which are a combination of monochrome and chromatic images. The chaos encrypted gray scaled images are concatenated along the three color planes resulting in color images. These are then transmitted over a secure channel along with a cover image which is an iris scan. The entire cryptology is augmented with an iris-based authentication scheme. The secret messages are retrieved once the authentication is successful. The objective of our work is briefly outlined as (a) the biometric information is the iris which is encrypted before transmission, (b) the iris is used for personal identification and verifying for message integrity, (c) the information is transmitted securely which are colored images resulting from a combination of gray images, (d) each of the images transmitted are encrypted through chaos based cryptography, (e) these encrypted multiple images are then coupled with the iris through linear combination of images before being communicated over the network. The several layers of encryption together with the ergodicity and randomness of chaos render enough confusion and diffusion properties which guarantee a fool-proof approach in achieving secure communication as demonstrated by exhaustive statistical methods. The result is vital from the perspective of opening a fundamental new dimension in multiplexing and simultaneous transmission of several monochromatic and chromatic images along with biometry based authentication and cryptography.
A Feature-based Approach to Big Data Analysis of Medical Images
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
A Feature-Based Approach to Big Data Analysis of Medical Images.
Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M
2015-01-01
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches-in O (log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods.. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
NASA Astrophysics Data System (ADS)
Rizzo, R. E.; Healy, D.; Farrell, N. J.
2017-12-01
Numerous laboratory brittle deformation experiments have shown that a rapid transition exists in the behaviour of porous materials under stress: at a certain point, early formed tensile cracks interact and coalesce into a `single' narrow zone, the shear plane, rather than remaining distributed throughout the material. In this work, we present and apply a novel image processing tool which is able to quantify this transition between distributed (`stable') damage accumulation and localised (`unstable') deformation, in terms of size, density, and orientation of cracks at the point of failure. Our technique, based on a two-dimensional (2D) continuous Morlet wavelet analysis, can recognise, extract and visually separate the multi-scale changes occurring in the fracture network during the deformation process. We have analysed high-resolution SEM-BSE images of thin sections of Hopeman Sandstone (Scotland, UK) taken from core plugs deformed under triaxial conditions, with increasing confining pressure. Through this analysis, we can determine the relationship between the initial orientation of tensile microcracks and the final geometry of the through-going shear fault, exploiting the total areal coverage of the analysed image. In addition, by comparing patterns of fractures in thin sections derived from triaxial (σ1>σ2=σ3=Pc) laboratory experiments conducted at different confining pressures (Pc), we can quantitatively explore the relationship between the observed geometry and the inferred mechanical processes. The methodology presented here can have important implications for larger-scale mechanical problems related to major fault propagation. Just as a core plug scale fault localises through extension and coalescence of microcracks, larger faults also grow by extension and coalescence of segments in a multi-scale process by which microscopic cracks can ultimately lead to macroscopic faulting. Consequently, wavelet analysis represents a useful tool for fracture pattern recognition, applicable to the detection of the transitions occurring at the time of catastrophic rupture.
Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.
Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun
2017-07-01
In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
Fast Fourier transform-based Retinex and alpha-rooting color image enhancement
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.; Gonzales, Analysa M.
2015-05-01
Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing the fast Fourier transforms (FFT) within the illumination estimation step of the algorithm to improve the speed at which Gaussian blurring filters were applied to the original input image. In addition, alpha-rooting can be used as a separate technique to achieve a sharper image in order to fuse its results with those of the retinex algorithm for the sake of achieving the best image possible as shown by the values of the considered color image enhancement measure (EMEC).
Super-resolution mapping using multi-viewing CHRIS/PROBA data
NASA Astrophysics Data System (ADS)
Dwivedi, Manish; Kumar, Vinay
2016-04-01
High-spatial resolution Remote Sensing (RS) data provides detailed information which ensures high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. In order to improve the spatial resolution of coarser resolution RS data, the Super Resolution Reconstruction (SRR) technique has become widely acknowledged which focused on multi-angular image sequences. In this study multi-angle CHRIS/PROBA data of Kutch area is used for SR image reconstruction to enhance the spatial resolution from 18 m to 6m in the hope to obtain a better land cover classification. Various SR approaches like Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC) chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures including peak signal to noise ratio and structural similarity are used for the evaluation of the image quality. It was observed that SANC SR technique using Vandewalle algorithm for the low resolution image registration outperformed the other techniques. After that SVM based classifier is used for the classification of SRR and data resampled to 6m spatial resolution using bi-cubic interpolation. A comparative analysis is carried out between classified data of bicubic interpolated and SR derived images of CHRIS/PROBA and SR derived classified data have shown a significant improvement of 10-12% in the overall accuracy. The results demonstrated that SR methods is able to improve spatial detail of multi-angle images as well as the classification accuracy.
NASA Astrophysics Data System (ADS)
Vercauteren, Tom; Doussoux, François; Cazaux, Matthieu; Schmid, Guillaume; Linard, Nicolas; Durin, Marie-Amélie; Gharbi, Hédi; Lacombe, François
2013-03-01
Since its inception in the field of in vivo imaging, endomicroscopy through optical fiber bundles, or probe-based Confocal Laser Endomicroscopy (pCLE), has extensively proven the benefit of in situ and real-time examination of living tissues at the microscopic scale. By continuously increasing image quality, reducing invasiveness and improving system ergonomics, Mauna Kea Technologies has turned pCLE not only into an irreplaceable research instrument for small animal imaging, but also into an accurate clinical decision making tool with applications as diverse as gastrointestinal endoscopy, pulmonology and urology. The current implementation of pCLE relies on a single fluorescence spectral band making different sources of in vivo information challenging to distinguish. Extending the pCLE approach to multi-color endomicroscopy therefore appears as a natural plan. Coupling simultaneous multi-laser excitation with minimally invasive, microscopic resolution, thin and flexible optics, allows the fusion of complementary and valuable biological information, thus paving the way to a combination of morphological and functional imaging. This paper will detail the architecture of a new system, Cellvizio Dual Band, capable of video rate in vivo and in situ multi-spectral fluorescence imaging with a microscopic resolution. In its standard configuration, the system simultaneously operates at 488 and 660 nm, where it automatically performs the necessary spectral, photometric and geometric calibrations to provide unambiguously co-registered images in real-time. The main hardware and software features, including calibration procedures and sub-micron registration algorithms, will be presented as well as a panorama of its current applications, illustrated with recent results in the field of pre-clinical imaging.
SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.
Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei
2018-05-03
Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L 1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: The critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim to minimize the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013 SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieves better precision and sensitivity for Gd-enhance tumor core segmentation; on BRATS 2015 SegAN achieves better performance than the state-of-the-art in both dice score and precision.
Edge enhancement and noise suppression for infrared image based on feature analysis
NASA Astrophysics Data System (ADS)
Jiang, Meng
2018-06-01
Infrared images are often suffering from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To realize it in this paper, we propose a novel algorithm based on feature analysis in shearlet domain. Firstly, as one of multi-scale geometric analysis (MGA), we introduce the theory and superiority of shearlet transform. Secondly, after analyzing the defects of traditional thresholding technique to suppress noise, we propose a novel feature extraction distinguishing image structures from noise well and use it to improve the traditional thresholding technique. Thirdly, with computing the correlations between neighboring shearlet coefficients, the feature attribute maps identifying the weak detail and strong edges are completed to improve the generalized unsharped masking (GUM). At last, experiment results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.
Application of process tomography in gas-solid fluidised beds in different scales and structures
NASA Astrophysics Data System (ADS)
Wang, H. G.; Che, H. Q.; Ye, J. M.; Tu, Q. Y.; Wu, Z. P.; Yang, W. Q.; Ocone, R.
2018-04-01
Gas-solid fluidised beds are commonly used in particle-related processes, e.g. for coal combustion and gasification in the power industry, and the coating and granulation process in the pharmaceutical industry. Because the operation efficiency depends on the gas-solid flow characteristics, it is necessary to investigate the flow behaviour. This paper is about the application of process tomography, including electrical capacitance tomography (ECT) and microwave tomography (MWT), in multi-scale gas-solid fluidisation processes in the pharmaceutical and power industries. This is the first time that both ECT and MWT have been applied for this purpose in multi-scale and complex structure. To evaluate the sensor design and image reconstruction and to investigate the effects of sensor structure and dimension on the image quality, a normalised sensitivity coefficient is introduced. In the meantime, computational fluid dynamic (CFD) analysis based on a computational particle fluid dynamic (CPFD) model and a two-phase fluid model (TFM) is used. Part of the CPFD-TFM simulation results are compared and validated by experimental results from ECT and/or MWT. By both simulation and experiment, the complex flow hydrodynamic behaviour in different scales is analysed. Time-series capacitance data are analysed both in time and frequency domains to reveal the flow characteristics.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
Visible light and infrared images fusion has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework of visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in non-subsampled shearlet transform (NSST) domain is present in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Furthermore, an improved novel sum modified-Laplacian (INSML) of low-pass sub-image and an improved average gradient (IAVG) of high-pass sub-images are input to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), which comes from local area singular value decomposition in each source image, is regarded as the adaptive linking strength that enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes are used for fusion experiments and the fusion results are evaluated subjectively and objectively. The results of the subjective and objective evaluation show that our algorithm exhibits superior fusion performance and is more effective than the existing typical fusion techniques.
Multi-scale seismic tomography of the Merapi-Merbabu volcanic complex, Indonesia
NASA Astrophysics Data System (ADS)
Mujid Abdullah, Nur; Valette, Bernard; Potin, Bertrand; Ramdhan, Mohamad
2017-04-01
Merapi-Merbabu volcanic complex is the most active volcano located on Java Island, Indonesia, where the Indian plate subducts beneath Eurasian plate. We present a preliminary study of a multi-scale seismic tomography of the substructures of the volcanic complex. The main objective of our study is to image the feeding paths of the volcanic complex at an intermediate scale by using the data from the dense network (about 5 km spacing) constituted by 53 stations of the French-Indonesian DOMERAPI experiment complemented by the data of the German-Indonesian MERAMEX project (134 stations) and of the Indonesia Tsunami Early Warning System (InaTEWS) located in the vicinity of the complex. The inversion was performed using the INSIGHT algorithm, which follows a non-linear least squares approach based on a stochastic description of data and model. In total, 1883 events and 41846 phases (26647 P and 15199 S) have been processed, and a two-scale approach was adopted. The model obtained at regional scale is consistent with the previous studies. We selected the most reliable regional model as a prior model for the local tomography performed with a variant of the INSIGHT code. The algorithm of this code is based on the fact that inverting differences of data when transporting the errors in probability is equivalent to inverting initial data while introducing specific correlation terms in the data covariance matrix. The local tomography provides images of the substructure of the volcanic complex with a sufficiently good resolution to allow identification of a probable magma chamber at about 20 km.
Poly-Pattern Compressive Segmentation of ASTER Data for GIS
NASA Technical Reports Server (NTRS)
Myers, Wayne; Warner, Eric; Tutwiler, Richard
2007-01-01
Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser and finer level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIs renderings of this A-level approximation are called pattern pictures which have the appearance of color enhanced images. The two-byte version consisting of thousands of B-level segments provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.
NASA Astrophysics Data System (ADS)
Yang, Xue; Sun, Hao; Fu, Kun; Yang, Jirui; Sun, Xian; Yan, Menglong; Guo, Zhi
2018-01-01
Ship detection has been playing a significant role in the field of remote sensing for a long time but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection and the redundancy of detection region. In order to solve such problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ship in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving the problem resulted from the narrow width of the ship. Compared with previous multi-scale detectors such as Feature Pyramid Network (FPN), DFPN builds the high-level semantic feature-maps for all scales by means of dense connections, through which enhances the feature propagation and encourages the feature reuse. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multi-scale ROI Align for the purpose of maintaining the completeness of semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on R-DFPN representation has a state-of-the-art performance.
Multi-scales region segmentation for ROI separation in digital mammograms
NASA Astrophysics Data System (ADS)
Zhang, Dapeng; Zhang, Di; Li, Yue; Wang, Wei
2017-02-01
Mammography is currently the most effective imaging modality used by radiologists for the screening of breast cancer. Segmentation is one of the key steps in the process of developing anatomical models for calculation of safe medical dose of radiation. This paper explores the potential of the statistical region merging segmentation technique for Breast segmentation in digital mammograms. First, the mammograms are pre-processing for regions enhancement, then the enhanced images are segmented using SRM with multi scales, finally these segmentations are combined for region of interest (ROI) separation and edge detection. The proposed algorithm uses multi-scales region segmentation in order to: separate breast region from background region, region edge detection and ROIs separation. The experiments are performed using a data set of mammograms from different patients, demonstrating the validity of the proposed criterion. Results show that, the statistical region merging segmentation algorithm actually can work on the segmentation of medical image and more accurate than another methods. And the outcome shows that the technique has a great potential to become a method of choice for segmentation of mammograms.
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolutions texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for characterization and classification of different emotions in a speech signal. The motivation is that we have to consider emotions have different intensity values in different frequency bands. In terms of human visual perceptual, the texture property on multi-resolution of emotional speech spectrogram should be a good feature set for emotion classification in speech. Furthermore, the multi-resolution analysis on texture can give a clearer discrimination between each emotion than uniform-resolution analysis on texture. In order to provide high accuracy of emotional discrimination especially in real-life, an acoustic activity detection (AAD) algorithm must be applied into the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, in this paper make use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and the state-of-the-art features, the MRTII features also can improve the correct classification rates of proposed systems among different language databases. Experimental results show that the proposed MRTII-based feature information inspired by human visual perception of the spectrogram image can provide significant classification for real-life emotional recognition in speech.
Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul
2017-12-01
There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Due to the lack of source segregation practice, a need for automated segregation of recyclables from MSW exists in the developing countries. This paper reports a thermal imaging based system for classifying useful recyclables from simulated MSW sample. Experimental results have demonstrated the possibility to use thermal imaging technique for classification and a robotic system for sorting of recyclables in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with the existing single-material recyclable classification techniques. We believe that the reported thermal imaging based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fuzzy entropy thresholding and multi-scale morphological approach for microscopic image enhancement
NASA Astrophysics Data System (ADS)
Zhou, Jiancan; Li, Yuexiang; Shen, Linlin
2017-07-01
Microscopic images provide lots of useful information for modern diagnosis and biological research. However, due to the unstable lighting condition during image capturing, two main problems, i.e., high-level noises and low image contrast, occurred in the generated cell images. In this paper, a simple but efficient enhancement framework is proposed to address the problems. The framework removes image noises using a hybrid method based on wavelet transform and fuzzy-entropy, and enhances the image contrast with an adaptive morphological approach. Experiments on real cell dataset were made to assess the performance of proposed framework. The experimental results demonstrate that our proposed enhancement framework increases the cell tracking accuracy to an average of 74.49%, which outperforms the benchmark algorithm, i.e., 46.18%.
Man-made objects cuing in satellite imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skurikhin, Alexei N
2009-01-01
We present a multi-scale framework for man-made structures cuing in satellite image regions. The approach is based on a hierarchical image segmentation followed by structural analysis. A hierarchical segmentation produces an image pyramid that contains a stack of irregular image partitions, represented as polygonized pixel patches, of successively reduced levels of detail (LOOs). We are jumping off from the over-segmented image represented by polygons attributed with spectral and texture information. The image is represented as a proximity graph with vertices corresponding to the polygons and edges reflecting polygon relations. This is followed by the iterative graph contraction based on Boruvka'smore » Minimum Spanning Tree (MST) construction algorithm. The graph contractions merge the patches based on their pairwise spectral and texture differences. Concurrently with the construction of the irregular image pyramid, structural analysis is done on the agglomerated patches. Man-made object cuing is based on the analysis of shape properties of the constructed patches and their spatial relations. The presented framework can be used as pre-scanning tool for wide area monitoring to quickly guide the further analysis to regions of interest.« less
NASA Astrophysics Data System (ADS)
Ban, Sungbea; Cho, Nam Hyun; Ryu, Yongjae; Jung, Sunwoo; Vavilin, Andrey; Min, Eunjung; Jung, Woonggyu
2016-04-01
Optical projection tomography is a new optical imaging method for visualizing small biological specimens in three dimension. The most important advantage of OPT is to fill the gap between MRI and confocal microscope for the specimen having the range of 1-10 mm. Thus, it has been mainly used for whole-mount small animals and developmental study since this imaging modality was developed. The ability of OPT delivering anatomical and functional information of relatively large tissue in 3D has made it a promising platform in biomedical research. Recently, the potential of OPT spans its coverage to cellular scale. Even though there are increasing demand to obtain better understanding of cellular dynamics, only few studies to visualize cellular structure, shape, size and functional morphology over tissue has been investigated in existing OPT system due to its limited field of view. In this study, we develop a novel optical imaging system for 3D cellular imaging with OPT integrated with dynamic focusing technique. Our tomographic setup has great potential to be used for identifying cell characteristic in tissue because it can provide selective contrast on dynamic focal plane allowing for fluorescence as well as absorption. While the dominant contrast of optical imaging technique is to use the fluorescence for detecting certain target only, the newly developed OPT system will offer considerable advantages over currently available method when imaging cellar molecular dynamics by permitting contrast variation. By achieving multi-contrast, it is expected for this new imaging system to play an important role in delivering better cytological information to pathologist.
NASA Astrophysics Data System (ADS)
Shaar, R.; Farchi, E.; Farfurnik, D.; Ebert, Y.; Haim, G.; Bar-Gill, N.
2017-12-01
Magnetization in rock samples is crucial for paleomagnetometry research, as it harbors valuable geological information on long term processes, such as tectonic movements and the formation of oceans and continents. Nevertheless, current techniques are limited in their ability to measure high spatial resolution and high-sensitivity quantitative vectorial magnetic signatures from individual minerals and micrometer scale samples. As a result, our understanding of bulk rock magnetization is limited, specifically for the case of multi-domain minerals. In this work we use a newly developed nitrogen-vacancy magnetic microscope, capable of quantitative vectorial magnetic imaging with optical resolution. We demonstrate direct imaging of the vectorial magnetic field of a single, multi-domain dendritic magnetite, as well as the measurement and calculation of the weak magnetic moments of an individual grain on the micron scale. Our results were measured in a standoff distance of 3-10 μm, with 350 nm spatial resolution, magnetic sensitivity of 6 μT/√(Hz) and a field of view of 35 μm. The results presented here show the capabilities and the future potential of NV microscopy in measuring the magnetic signals of individual micrometer scale grains. These outcomes pave the way for future applications in paleomagnetometry, and for the fundamental understanding of magnetization in multi-domain samples.
NASA Astrophysics Data System (ADS)
Rusu, Mirabela; Wang, Haibo; Golden, Thea; Gow, Andrew; Madabhushi, Anant
2013-03-01
Mouse lung models facilitate the investigation of conditions such as chronic inflammation which are associated with common lung diseases. The multi-scale manifestation of lung inflammation prompted us to use multi-scale imaging - both in vivo, ex vivo MRI along with ex vivo histology, for its study in a new quantitative way. Some imaging modalities, such as MRI, are non-invasive and capture macroscopic features of the pathology, while others, e.g. ex vivo histology, depict detailed structures. Registering such multi-modal data to the same spatial coordinates will allow the construction of a comprehensive 3D model to enable the multi-scale study of diseases. Moreover, it may facilitate the identification and definition of quantitative of in vivo imaging signatures for diseases and pathologic processes. We introduce a quantitative, image analytic framework to integrate in vivo MR images of the entire mouse with ex vivo histology of the lung alone, using lung ex vivo MRI as conduit to facilitate their co-registration. In our framework, we first align the MR images by registering the in vivo and ex vivo MRI of the lung using an interactive rigid registration approach. Then we reconstruct the 3D volume of the ex vivo histological specimen by efficient group wise registration of the 2D slices. The resulting 3D histologic volume is subsequently registered to the MRI volumes by interactive rigid registration, directly to the ex vivo MRI, and implicitly to in vivo MRI. Qualitative evaluation of the registration framework was performed by comparing airway tree structures in ex vivo MRI and ex vivo histology where airways are visible and may be annotated. We present a use case for evaluation of our co-registration framework in the context of studying chronic inammation in a diseased mouse.
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-05-06
In target detection of optical remote sensing images, two main obstacles for aircraft target detection are how to extract the candidates in complex gray-scale-multi background and how to confirm the targets in case the target shapes are deformed, irregular or asymmetric, such as that caused by natural conditions (low signal-to-noise ratio, illumination condition or swaying photographing) and occlusion by surrounding objects (boarding bridge, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal variance between-cluster algorithm (Otsu's algorithm) and region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of targets extraction in complex and gray-scale-multi backgrounds. Secondly, based on inherent shapes and prominent corners, aircrafts are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structure features, which describe the proportion of targets part in the fragment to the whole fragment and the proportion of fragment to the whole hull, are identified to judge whether the targets are true or not. Experimental results show that TRSF algorithm could improve extraction accuracy in complex background, and that it is faster than some traditional active contours algorithms. The CCHS is effective to suppress the detection difficulties caused by the irregular shape.
NASA Astrophysics Data System (ADS)
Li, J.; Wen, G.; Li, D.
2018-04-01
Trough mastering background information of Yunnan province grassland resources utilization and ecological conditions to improves grassland elaborating management capacity, it carried out grassland resource investigation work by Yunnan province agriculture department in 2017. The traditional grassland resource investigation method is ground based investigation, which is time-consuming and inefficient, especially not suitable for large scale and hard-to-reach areas. While remote sensing is low cost, wide range and efficient, which can reflect grassland resources present situation objectively. It has become indispensable grassland monitoring technology and data sources and it has got more and more recognition and application in grassland resources monitoring research. This paper researches application of multi-source remote sensing image in Yunnan province grassland resources investigation. First of all, it extracts grassland resources thematic information and conducts field investigation through BJ-2 high space resolution image segmentation. Secondly, it classifies grassland types and evaluates grassland degradation degree through high resolution characteristics of Landsat 8 image. Thirdly, it obtained grass yield model and quality classification through high resolution and wide scanning width characteristics of MODIS images and sample investigate data. Finally, it performs grassland field qualitative analysis through UAV remote sensing image. According to project area implementation, it proves that multi-source remote sensing data can be applied to the grassland resources investigation in Yunnan province and it is indispensable method.
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-01-01
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on an UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles.
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-12-25
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on an UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
Remote focusing for programmable multi-layer differential multiphoton microscopy
Hoover, Erich E.; Young, Michael D.; Chandler, Eric V.; Luo, Anding; Field, Jeffrey J.; Sheetz, Kraig E.; Sylvester, Anne W.; Squier, Jeff A.
2010-01-01
We present the application of remote focusing to multiphoton laser scanning microscopy and utilize this technology to demonstrate simultaneous, programmable multi-layer imaging. Remote focusing is used to independently control the axial location of multiple focal planes that can be simultaneously imaged with single element detection. This facilitates volumetric multiphoton imaging in scattering specimens and can be practically scaled to a large number of focal planes. Further, it is demonstrated that the remote focusing control can be synchronized with the lateral scan directions, enabling imaging in orthogonal scan planes. PMID:21326641
Brunstein, Maia; Wicker, Kai; Hérault, Karine; Heintzmann, Rainer; Oheim, Martin
2013-11-04
Most structured illumination microscopes use a physical or synthetic grating that is projected into the sample plane to generate a periodic illumination pattern. Albeit simple and cost-effective, this arrangement hampers fast or multi-color acquisition, which is a critical requirement for time-lapse imaging of cellular and sub-cellular dynamics. In this study, we designed and implemented an interferometric approach allowing large-field, fast, dual-color imaging at an isotropic 100-nm resolution based on a sub-diffraction fringe pattern generated by the interference of two colliding evanescent waves. Our all-mirror-based system generates illumination pat-terns of arbitrary orientation and period, limited only by the illumination aperture (NA = 1.45), the response time of a fast, piezo-driven tip-tilt mirror (10 ms) and the available fluorescence signal. At low µW laser powers suitable for long-period observation of life cells and with a camera exposure time of 20 ms, our system permits the acquisition of super-resolved 50 µm by 50 µm images at 3.3 Hz. The possibility it offers for rapidly adjusting the pattern between images is particularly advantageous for experiments that require multi-scale and multi-color information. We demonstrate the performance of our instrument by imaging mitochondrial dynamics in cultured cortical astrocytes. As an illustration of dual-color excitation dual-color detection, we also resolve interaction sites between near-membrane mitochondria and the endoplasmic reticulum. Our TIRF-SIM microscope provides a versatile, compact and cost-effective arrangement for super-resolution imaging, allowing the investigation of co-localization and dynamic interactions between organelles--important questions in both cell biology and neurophysiology.
Threshold multi-secret sharing scheme based on phase-shifting interferometry
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Shi, Zhengang
2017-03-01
A threshold multi-secret sharing scheme is proposed based on phase-shifting interferometry. The K secret images to be shared are firstly encoded by using Fourier transformation, respectively. Then, these encoded images are shared into many shadow images based on recording principle of the phase-shifting interferometry. In the recovering stage, the secret images can be restored by combining any 2 K + 1 or more shadow images, while any 2 K or fewer shadow images cannot obtain any information about the secret images. As a result, a (2 K + 1 , N) threshold multi-secret sharing scheme can be implemented. Simulation results are presented to demonstrate the feasibility of the proposed method.
Multi-Scale Computational Models for Electrical Brain Stimulation
Seo, Hyeon; Jun, Sung C.
2017-01-01
Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focused on approaches that coupled a simplified or high-resolution volume conductor head model and multi-compartmental models of cortical neurons, and constructed realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476
Multiscale skeletal representation of images via Voronoi diagrams
NASA Astrophysics Data System (ADS)
Marston, R. E.; Shih, Jian C.
1995-08-01
Polygonal approximations to skeletal or stroke-based representations of 2D objects may consume less storage and be sufficient to describe their shape for many applications. Multi- scale descriptions of object outlines are well established but corresponding methods for skeletal descriptions have been slower to develop. In this paper we offer a method of generating scale-based skeletal representation via the Voronoi diagram. The method has the advantages of less time complexity, a closer relationship between the skeletons at each scale and better control over simplification of the skeleton at lower scales. This is because the algorithm starts by generating the skeleton at the coarsest scale first, then it produces each finer scale, in an iterative manner, directly from the level below. The skeletal approximations produced by the algorithm also benefit from a strong relationship with the object outline, due to the structure of the Voronoi diagram.
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao
2017-05-11
Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.
Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao
2017-01-01
Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures. PMID:28772879
Computational Modeling System for Deformation and Failure in Polycrystalline Metals
2009-03-29
FIB/EHSD 3.3 The Voronoi Cell FEM for Micromechanical Modeling 3.4 VCFEM for Microstructural Damage Modeling 3.5 Adaptive Multiscale Simulations...accurate and efficient image-based micromechanical finite element model, for crystal plasticity and damage , incorporating real morphological and...topology with evolving strain localization and damage . (v) Development of multi-scaling algorithms in the time domain for compression and localization in
Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.
2015-01-01
Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614
Kim, Dahan; Curthoys, Nikki M; Parent, Matthew T; Hess, Samuel T
2013-09-01
Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined.
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, remote sensing image segmentation is a preliminary step for later analysis of remote sensing image processing and semi-auto human interpretation, fully-automatic machine recognition and learning. Since 2000, a technique of object-oriented remote sensing image processing method and its basic thought prevails. The core of the approach is Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. The paper is intent on the research and improvement of the algorithm, which analyzes present segmentation algorithms and selects optimum watershed algorithm as an initialization. Meanwhile, the algorithm is modified by modifying an area parameter, and then combining area parameter with a heterogeneous parameter further. After that, several experiments is carried on to prove the modified FNEA algorithm, compared with traditional pixel-based method (FCM algorithm based on neighborhood information) and combination of FNEA and watershed, has a better segmentation result.
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration takes a crucial place in several important application domains. With the increasing of computation requirement as the algorithms become much more complexity, there has been a significant rise in the need for accelerating implementation. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution method by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single precision floating-point processing units with small-scale cascade and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, Comparing experiments are done. The system could process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than the traditional multi-DSPs systems.
3D Gabor wavelet based vessel filtering of photoacoustic images.
Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi
2016-08-01
Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for the early diagnosis and therapy in numerous medical applications. This paper investigates the use of Gabor wavelet to enhance the effect of vasculature while eliminating the noise due to size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and Hessian based method is analyzed for extracting vessels of different sizes since the blood vessels usually vary with in a range of radii. The proposed algorithm first enhances the vasculature in the image and then tubular structures are classified by eigenvalue decomposition of the local Hessian matrix at each voxel in the image. The algorithm is tested on non-invasive experiments, which shows appreciable results to enhance vasculature in photo-acoustic images.
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For the island coastline segmentation, a fast segmentation algorithm for C-V model method based on exponential image sequence generation is proposed in this paper. The exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of the "holes" and "gaps" are solved when extraction coastline through the small scale shrinkage, low-pass filtering and area sorting of region. 2) the initial value of SDF (Signal Distance Function) and the level set are given by Otsu segmentation based on the difference of reflection SAR on land and sea, which are finely close to the coastline. 3) the computational complexity of continuous transition are successfully reduced between the different scales by the SDF and of level set inheritance. Experiment results show that the method accelerates the acquisition of initial level set formation, shortens the time of the extraction of coastline, at the same time, removes the non-coastline body part and improves the identification precision of the main body coastline, which automates the process of coastline segmentation.
Multi-focus image fusion based on window empirical mode decomposition
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao
2017-09-01
In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. This WEMD is an improved form of bidimensional empirical mode decomposition (BEMD), due to its decomposition process using the adding window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion, and formulated different fusion rules for bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the Sum-modified-Laplacian was used and a scheme based on the visual feature contrast adopted; when choosing the residue coefficients, a pixel value based on the local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with other three fusion methods. The experimental results show that the proposed fusion approach is effective and performs better at fusing multi-focus images than some traditional methods.
FracPaQ: a MATLAB™ toolbox for the quantification of fracture patterns
NASA Astrophysics Data System (ADS)
Healy, David; Rizzo, Roberto; Farrell, Natalie; Watkins, Hannah; Cornwell, David; Gomez-Rivas, Enrique; Timms, Nick
2017-04-01
The patterns of fractures in deformed rocks are rarely uniform or random. Fracture orientations, sizes, shapes and spatial distributions often exhibit some kind of order. In detail, there may be relationships among the different fracture attributes e.g. small fractures dominated by one orientation, larger fractures by another. These relationships are important because the mechanical (e.g. strength, anisotropy) and transport (e.g. fluids, heat) properties of rock depend on these fracture patterns and fracture attributes. This presentation describes an open source toolbox to quantify fracture patterns, including distributions in fracture attributes and their spatial variation. Software has been developed to quantify fracture patterns from 2-D digital images, such as thin section micrographs, geological maps, outcrop or aerial photographs or satellite images. The toolbox comprises a suite of MATLAB™ scripts based on published quantitative methods for the analysis of fracture attributes: orientations, lengths, intensity, density and connectivity. An estimate of permeability in 2-D is made using a parallel plate model. The software provides an objective and consistent methodology for quantifying fracture patterns and their variations in 2-D across a wide range of length scales. Our current focus for the application of the software is on quantifying crack and fracture patterns in and around fault zones. There is a large body of published work on the quantification of relatively simple joint patterns, but fault zones present a bigger, and arguably more important, challenge. The methods presented are inherently scale independent, and a key task will be to analyse and integrate quantitative fracture pattern data from micro- to macro-scales. New features in this release include multi-scale analyses based on a wavelet method to look for scale transitions, support for multi-colour traces in the input file processed as separate fracture sets, and combining fracture traces from multiple 2-D images to derive the statistically equivalent 3-D fracture pattern expressed as a 2nd rank crack tensor.
Sub-pattern based multi-manifold discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen
2018-04-01
In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike existing Multi-manifold Discriminant Analysis (MMDA) approach which is based on holistic information of face image for recognition, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local feature from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improve the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano- Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
NASA Astrophysics Data System (ADS)
Heath, J. E.; Dewers, T. A.; Yoon, H.; Mozley, P.
2016-12-01
Heterogeneity from the nanometer to core and larger length scales is a major challenge to understanding coupled processes in shale. To develop methods to address this challenge, we present application of high throughput multi-beam scanning electron microscopy (mSEM) and nano-to-micro-scale mechanics to the Mancos Shale. We use a 61-beam mSEM to collect 6 nm resolution SEM images at the scale of several square millimeters. These images are analyzed for pore size and shape characteristics including spatial correlation and structure. Nano-indentation, micropillar compression, and axisymmetric testing at multiple length scales allows for examining the influence of sampling size on mechanical response. The combined data set is used to: investigate representative elementary volumes (and areas for the 2D images) for the Mancos Shale; determine if scale separation occurs; and determine if transport and mechanical properties at a given length scale can be statistically defined. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important to solve the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional QR and QCR estimation algorithms suffer from the cross-term and poor anti-noise ability. This paper proposes a novel estimation algorithm called a two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signals parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and modified nonuniform fast Fourier transform-Fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with high order ambiguity function-integrated cubic phase function and modified Lv's distribution, the simulation results verify that the 2D-PMPCRD acquires higher anti-noise performance and obtains better cross-terms suppression performance for multi-QFM signals with reasonable computation cost.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion technique can effectively solve this problem. Aiming at the block-based multi-focus image fusion methods, the problem that blocking-artifacts often occurs. An Adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, image quality metrics LUE-SSIM is firstly proposed, which utilizes the characteristics of human visual system (HVS) and structural similarity (SSIM) to make the metrics consistent with the human visual perception. Particle swarm optimization(PSO) algorithm which selects LUE-SSIM as the object function is used for optimizing the block size to construct the fused image. Experimental results on LIVE image database shows that LUE-SSIM outperform SSIM on Gaussian defocus blur images quality assessment. Besides, multi-focus image fusion experiment is carried out to verify our proposed image fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking-artifact of the fused image. And our method can effectively preserve the undistorted-edge details in focus region of the source images.
NASA Astrophysics Data System (ADS)
Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.
2010-03-01
The shape of the optic nerve head (ONH) is reconstructed automatically using stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss for patients with glaucoma. Compared to natural scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi scale pixel feature vectors which are robust to noise are formulated using a combination of both pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data was collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates for the shape of the ONH that are close to the OCT based shape, and it shows great potential to help computer-aided diagnosis of glaucoma and other related retinal diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urs, Necdet Onur; Mozooni, Babak; Kustov, Mikhail
2016-05-15
Recent developments in the observation of magnetic domains and domain walls by wide-field optical microscopy based on the magneto-optical Kerr, Faraday, Voigt, and Gradient effect are reviewed. Emphasis is given to the existence of higher order magneto-optical effects for advanced magnetic imaging. Fundamental concepts and advances in methodology are discussed that allow for imaging of magnetic domains on various length and time scales. Time-resolved imaging of electric field induced domain wall rotation is shown. Visualization of magnetization dynamics down to picosecond temporal resolution for the imaging of spin-waves and magneto-optical multi-effect domain imaging techniques for obtaining vectorial information are demonstrated.more » Beyond conventional domain imaging, the use of a magneto-optical indicator technique for local temperature sensing is shown.« less
Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N
2015-06-01
Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
NASA Astrophysics Data System (ADS)
De Montigny, Étienne; Goulamhoussen, Nadir; Madore, Wendy-Julie; Strupler, Mathias; Maniakas, Anastasios; Ayad, Tareck; Boudoux, Caroline
2016-02-01
While thyroidectomy is considered a safe surgery, dedicated tools facilitating tissue identification during surgery could improve its outcome. The most common complication following surgery is hypocalcaemia, which results from iatrogenic removal or damage to parathyroid glands. This research project aims at developing and validating an instrument based on optical microscopy modalities to identify tissues in real time during surgery. Our approach is based on a combination of reflectance confocal microscopy (RCM) and optical coherence tomography (OCT) to obtain multi-scale morphological contrast images. The orthogonal field of views provide information to navigate through the sample. To allow simultaneous, synchronized video-rate imaging in both modalities, we designed and built a dual-band wavelength-swept laser which scans a 30 nm band centered at 780 nm and a 90 nm band centered at 1310 nm. We built an imaging setup integrating a custom-made objective lens and a double-clad fibre coupler optimized for confocal microscopy. It features high resolutions in RCM (2µm lateral and 20 µm axial) in a 500 µm x 500 µm field-of-view and a larger field-of-view of 2 mm (lateral) x 5 mm (axial) with 20 µm lateral and axial resolutions in OCT. Imaging of ex vivo animal samples is demonstrated on a bench-top system. Tissues that are visually difficult to distinguish from each other intra-operatively such as parathyroid gland, lymph nodes and adipose tissue are imaged to show the potential of this approach in differentiating neck tissues. We will also provide an update on our ongoing clinical pilot study on patients undergoing thyroidectomy.
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images in multi-scale. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, modeling interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 106 pixels can be completed only in 5 sec, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in renal biopsies to quantitatively visualize for the first time scale variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing application. In general, the images have large overlapped region, which provide a lot of redundant geometry and radiation information for matching. This paper presents a POS supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constraint cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used in order to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: feature point extraction and matching procedure, grid point matching procedure and relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on different aerial scale images with different landcover types. The accuracy evaluation is based on the comparison between the automatic extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.
Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos
2015-01-01
Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.
High-Resolution Remote Sensing Image Building Extraction Based on Markov Model
NASA Astrophysics Data System (ADS)
Zhao, W.; Yan, L.; Chang, Y.; Gong, L.
2018-04-01
With the increase of resolution, remote sensing images have the characteristics of increased information load, increased noise, more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high resolution remote sensing image building extraction method based on Markov model. This method introduces Contourlet domain map clustering and Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs the spectral feature index that can characterize "pseudo-buildings" in the building area. Through the multi-scale segmentation and extraction of image features, the fine extraction from the building area to the building is realized. Experiments show that this method can restrain the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove the shadow, vegetation and other pseudo-building information, compared with the traditional pixel-level image information extraction, better performance in building extraction precision, accuracy and completeness.
Multiscale deformation behavior for multilayered steel by in-situ FE-SEM
NASA Astrophysics Data System (ADS)
Tanaka, Y.; Kishimoto, S.; Yin, F.; Kobayashi, M.; Tomimatsu, T.; Kagawa, K.
2010-03-01
The multi-scale deformation behavior of multi-layered steel during tensile loading was investigated by in-situ FE-SEM observation coupled with multi-scale pattern. The material used was multi-layered steel sheet consisting of martensitic and austenitic stainless steel layers. Prior to in-situ tensile testing, the multi-scale pattern combined with a grid and random dots were fabricated by electron beam lithography on the polished surface in the area of 1 mm2 to facilitate direct observation of multi-scale deformation. Both of the grids with pitches of 10 μm and a random speckle pattern ranging from 200 nm to a few μm sizes were drawn onto the specimen surface at same location. The electron moiré method was applied to measure the strain distribution in the deformed specimens at a millimeter scale and digital images correlation method was applied to measure the in-plane deformation and strain distribution at a micron meter scale acquired before and after at various increments of straining. The results showed that the plastic deformation in the austenitic stainless steel layer was larger than the martensitic steel layer at millimeter scale. However, heterogeneous intrinsic grain-scale plastic deformation was clearly observed and it increased with increasing the plastic deformation.
A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media
NASA Astrophysics Data System (ADS)
Zhou, T.; Hu, W.; Ning, J.
2017-12-01
Attenuating anisotropic geological body is difficult to image with conventional migration methods. In such kind of scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination and incorrect migration depth in imaging results. To efficiently obtain high quality images, we propose a novel TTI QRTM algorithm based on Generalized Standard Linear Solid model combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests (shown in the figure) demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below the low Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot get a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method best performs in the frequency range 10-70 Hz, and could be used in a wider frequency range. Furthermore, as this method can also handle frequency dependent Q, it has potential to be applied in imaging deep structures where low Q exists, such as subduction zones, volcanic zones or fault zones with passive source observations.
Detecting early stage pressure ulcer on dark skin using multispectral imager
NASA Astrophysics Data System (ADS)
Kong, Linghua; Sprigle, Stephen; Yi, Dingrong; Wang, Chao; Wang, Fengtao; Liu, Fuhan; Wang, Jiwu; Zhao, Futing
2009-10-01
This paper introduces a novel idea, innovative technology in building multi spectral imaging based device. The benefit from them is people can have low cost, handheld and standing alone device which makes acquire multi spectral images real time with just a snapshot. The paper for the first time publishes some images got from such prototyped miniaturized multi spectral imager.
NASA Astrophysics Data System (ADS)
Xie, Yijing; Thom, Maria; Miserocchi, Anna; McEvoy, Andrew W.; Desjardins, Adrien; Ourselin, Sebastien; Vercauteren, Tom
2017-02-01
In glioma resection surgery, the detection of tumour is often guided by using intraoperative fluorescence imaging notably with 5-ALA-PpIX, providing fluorescent contrast between normal brain tissue and the gliomas tissue to achieve improved tumour delineation and prolonged patient survival compared with the conventional white-light guided resection. However, the commercially available fluorescence imaging system relies on surgeon's eyes to visualise and distinguish the fluorescence signals, which unfortunately makes the resection subjective. In this study, we developed a novel multi-scale spectrally-resolved fluorescence imaging system and a computational model for quantification of PpIX concentration. The system consisted of a wide-field spectrally-resolved quantitative imaging device and a fluorescence endomicroscopic imaging system enabling optical biopsy. Ex vivo animal tissue experiments as well as human tumour sample studies demonstrated that the system was capable of specifically detecting the PpIX fluorescent signal and estimate the true concentration of PpIX in brain specimen.
The iMars web-GIS - spatio-temporal data queries and single image web map services
NASA Astrophysics Data System (ADS)
Walter, S. H. G.; Steikert, R.; Schreiner, B.; Sidiropoulos, P.; Tao, Y.; Muller, J.-P.; Putry, A. R. D.; van Gasselt, S.
2017-09-01
We introduce a new approach for a system dedicated to planetary surface change detection by simultaneous visualisation of single-image time series in a multi-temporal context. In the context of the EU FP-7 iMars project we process and ingest vast amounts of automatically co-registered (ACRO) images. The base of the co-registration are the high precision HRSC multi-orbit quadrangle image mosaics, which are based on bundle-block-adjusted multi-orbit HRSC DTMs.
Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai
2015-12-01
In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laura J. Pyrak-Nolte; Nicholas J. Giordano; David D. Nolte
2004-03-01
The principal challenge of upscaling techniques for multi-phase fluid dynamics in porous media is to determine which properties on the micro-scale can be used to predict macroscopic flow and spatial distribution of phases at core- and field-scales. The most notable outcome of recent theories is the identification of interfacial areas per volume for multiple phases as a fundamental parameter that determines much of the multi-phase properties of the porous medium. A formal program of experimental research was begun to directly test upscaling theories in fluid flow through porous media by comparing measurements of relative permeability and capillary-saturation with measurements ofmore » interfacial area per volume. This project on the experimental investigation of relative permeability upscaling has produced a unique combination of three quite different technical approaches to the upscaling problem of obtaining pore-related microscopic properties and using them to predict macroscopic behavior. Several important ''firsts'' have been achieved during the course of the project. (1) Optical coherence imaging, a laser-based ranging and imaging technique, has produced the first images of grain and pore structure up to 1 mm beneath the surface of the sandstone and in a laboratory borehole. (2) Woods metal injection has connected for the first time microscopic pore-scale geometric measurements with macroscopic saturation in real sandstone cores. (3) The micro-model technique has produced the first invertible relationship between saturation and capillary pressure--showing that interfacial area per volume (IAV) provides the linking parameter. IAV is a key element in upscaling theories, so this experimental finding may represent the most important result of this project, with wide ramifications for predictions of fluid behavior in porous media.« less
Regional shape-based feature space for segmenting biomedical images using neural networks
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.
1993-07-01
In biomedical images, structure of interest, particularly the soft tissue structures, such as the heart, airways, bronchial and arterial trees often have grey-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only gray- scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss, what we believe to be, a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criteria. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
Unresolved fine-scale structure in solar coronal loop-tops
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scullion, E.; Van der Voort, L. Rouppe; Wedemeyer, S.
2014-12-10
New and advanced space-based observing facilities continue to lower the resolution limit and detect solar coronal loops in greater detail. We continue to discover even finer substructures within coronal loop cross-sections, in order to understand the nature of the solar corona. Here, we push this lower limit further to search for the finest coronal loop substructures, through taking advantage of the resolving power of the Swedish 1 m Solar Telescope/CRisp Imaging Spectro-Polarimeter (CRISP), together with co-observations from the Solar Dynamics Observatory/Atmospheric Image Assembly (AIA). High-resolution imaging of the chromospheric Hα 656.28 nm spectral line core and wings can, under certainmore » circumstances, allow one to deduce the topology of the local magnetic environment of the solar atmosphere where its observed. Here, we study post-flare coronal loops, which become filled with evaporated chromosphere that rapidly condenses into chromospheric clumps of plasma (detectable in Hα) known as a coronal rain, to investigate their fine-scale structure. We identify, through analysis of three data sets, large-scale catastrophic cooling in coronal loop-tops and the existence of multi-thermal, multi-stranded substructures. Many cool strands even extend fully intact from loop-top to footpoint. We discover that coronal loop fine-scale strands can appear bunched with as many as eight parallel strands within an AIA coronal loop cross-section. The strand number density versus cross-sectional width distribution, as detected by CRISP within AIA-defined coronal loops, most likely peaks at well below 100 km, and currently, 69% of the substructure strands are statistically unresolved in AIA coronal loops.« less
Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R
2016-11-16
Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO 2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO 2 electrode microstructure and location dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.
NASA Astrophysics Data System (ADS)
Spanò, A.; Chiabrando, F.; Sammartano, G.; Teppati Losè, L.
2018-05-01
The paper focuses on the exploration of the suitability and the discretization of applicability issues about advanced surveying integrated techniques, mainly based on image-based approaches compared and integrated to range-based ones that have been developed with the use of the cutting-edge solutions tested on field. The investigated techniques integrate both technological devices for 3D data acquisition and thus editing and management systems to handle metric models and multi-dimensional data in a geospatial perspective, in order to innovate and speed up the extraction of information during the archaeological excavation activities. These factors, have been experienced in the outstanding site of the Hierapolis of Phrygia ancient city (Turkey), downstream the 2017 surveying missions, in order to produce high-scale metric deliverables in terms of high-detailed Digital Surface Models (DSM), 3D continuous surface models and high-resolution orthoimages products. In particular, the potentialities in the use of UAV platforms for low altitude acquisitions in aerial photogrammetric approach, together with terrestrial panoramic acquisitions (Trimble V10 imaging rover), have been investigated with a comparison toward consolidated Terrestrial Laser Scanning (TLS) measurements. One of the main purposes of the paper is to evaluate the results offered by the technologies used independently and using integrated approaches. A section of the study in fact, is specifically dedicated to experimenting the union of different sensor dense clouds: both dense clouds derived from UAV have been integrated with terrestrial Lidar clouds, to evaluate their fusion. Different test cases have been considered, representing typical situations that can be encountered in archaeological sites.
Multi-scale imaging and elastic simulation of carbonates
NASA Astrophysics Data System (ADS)
Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed
2016-05-01
Digital Rock Physics (DRP) is an emerging technology that can be used to generate high quality, fast and cost effective special core analysis (SCAL) properties compared to conventional experimental techniques and modeling techniques. The primary workflow of DRP conssits of three elements: 1) image the rock sample using high resolution 3D scanning techniques (e.g. micro CT, FIB/SEM), 2) process and digitize the images by segmenting the pore and matrix phases 3) simulate the desired physical properties of the rocks such as elastic moduli and velocities of wave propagation. A Finite Element Method based algorithm, that discretizes the basic Hooke's Law equation of linear elasticity and solves it numerically using a fast conjugate gradient solver, developed by Garboczi and Day [1] is used for mechanical and elastic property simulations. This elastic algorithm works directly on the digital images by treating each pixel as an element. The images are assumed to have periodic constant-strain boundary condition. The bulk and shear moduli of the different phases are required inputs. For standard 1.5" diameter cores however the Micro-CT scanning reoslution (around 40 μm) does not reveal smaller micro- and nano- pores beyond the resolution. This results in an unresolved "microporous" phase, the moduli of which is uncertain. Knackstedt et al. [2] assigned effective elastic moduli to the microporous phase based on self-consistent theory (which gives good estimation of velocities for well cemented granular media). Jouini et al. [3] segmented the core plug CT scan image into three phases and assumed that micro porous phase is represented by a sub-extracted micro plug (which too was scanned using Micro-CT). Currently the elastic numerical simulations based on CT-images alone largely overpredict the bulk, shear and Young's modulus when compared to laboratory acoustic tests of the same rocks. For greater accuracy of numerical simulation prediction, better estimates of moduli inputs for this current unresolved phase is important. In this work we take a multi-scale imaging approach by first extracting a smaller 0.5" core and scanning at approx 13 µm, then further extracting a 5mm diameter core scanned at 5 μm. From this last scale, region of interests (containing unresolved areas) are identified for scanning at higher resolutions using Focalised Ion Beam (FIB/SEM) scanning technique reaching 50 nm resolution. Numerical simulation is run on such a small unresolved section to obtain a better estimate of the effective moduli which is then used as input for simulations performed using CT-images. Results are compared with expeirmental acoustic test moduli obtained also at two scales: 1.5" and 0.5" diameter cores.
Multi-scale X-ray Microtomography Imaging of Immiscible Fluids After Imbibition
NASA Astrophysics Data System (ADS)
Garing, C.; de Chalendar, J.; Voltolini, M.; Ajo Franklin, J. B.; Benson, S. M.
2015-12-01
A major issue for CO2 storage security is the efficiency and long-term reliability of the trapping mechanisms occurring in the reservoir where CO2 is injected. Residual trapping is one of the key processes for storage security beyond the primary stratigraphic seal. Although classical conceptual models of residual fluid trapping assume that disconnected ganglia are permanently immobilized, multiple mechanisms exist which could allow the remobilization of residually trapped CO2. The aim of this study is to quantify fluid phases saturation, connectivity and morphology after imbibition using x-ray microtomography in order to evaluate potential changes in droplets organization due to differences in capillary pressure between disconnected ganglia. Particular emphasis is placed on the effect of image resolution. Synchrotron-based x-ray microtomographic datasets of air-water spontaneous imbibition were acquired in sintered glass beads and sandstone samples with voxel sizes varying from 0.64 to 4.44 μm. The results show that for both sandstones the residual air phase is homogeneously distributed within the entire pore space and consists of disconnected clusters of multiple sizes and morphologies. The multi-scale analysis of subsamples of few pores and throats imaged at the same location of the sample reveals significant variations in the estimation of connectivity, size and shape of the fluid phases. This is particularly noticeable when comparing the results from the images with voxel sizes above 1 μm with the results from the images acquired with voxel sizes below 1 μm.
Mashup Scheme Design of Map Tiles Using Lightweight Open Source Webgis Platform
NASA Astrophysics Data System (ADS)
Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.
2018-04-01
To address the difficulty involved when using existing commercial Geographic Information System platforms to integrate multi-source image data fusion, this research proposes the loading of multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google maps, Map World, and Bing maps. Two types of tile data loading schemes have been designed for the mashup of tiles, the single data source loading scheme and the multi-data source loading scheme. The multi-sources of digital map tiles used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single data source loading scheme and the multi-data source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data source loading scheme was prone to lead to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low cost and highly flexible solution for small and medium-scale GIS programs and has a certain potential for practical application values. The problem of deformation during the transition of different spatial references is an important topic for further research.
Fast Detection of Material Deformation through Structural Dissimilarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth
2015-10-29
Designing materials that are resistant to extreme temperatures and brittleness relies on assessing structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of themore » problem, key developments, and decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.« less
Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)
NASA Astrophysics Data System (ADS)
Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.
2013-12-01
We combine multi-scale imaging and computer simulation of multi-phase flow and reactive transport in rock samples to enhance our fundamental understanding of long term CO2 storage in rock formations. The imaging techniques include Confocal Laser Scanning Microscopy (CLSM), micro-CT and medical CT scanning, with spatial resolutions ranging from sub-micron to mm respectively. First, we report a new sample preparation technique to study micro-porosity in carbonates using CLSM in 3 dimensions. Second, we use micro-CT scanning to generate high resolution 3D pore space images of carbonate and cap rock samples. In addition, we employ micro-CT to image the processes of evaporation in fractures and cap rock degradation due to exposure to CO2 flow. Third, we use medical CT scanning to image spontaneous imbibition in carbonate rock samples. Our imaging studies are complemented by computer simulations of multi-phase flow and transport, using the 3D pore space images obtained from the scanning experiments. We have developed a massively parallel lattice-Boltzmann (LB) code to calculate the single phase flow field in these pore space images. The resulting flow fields are then used to calculate hydrodynamic dispersion using a novel scheme to predict probability distributions for molecular displacements using the LB method and a streamline algorithm, modified for optimal solid boundary conditions. We calculate solute transport on pore-space images of rock cores with increasing degree of heterogeneity: a bead pack, Bentheimer sandstone and Portland carbonate. We observe that for homogeneous rock samples, such as bead packs, the displacement distribution remains Gaussian with time increasing. In the more heterogeneous rocks, on the other hand, the displacement distribution develops a stagnant part. We observe that the fraction of trapped solute increases from the beadpack (0 %) to Bentheimer sandstone (1.5 %) to Portland carbonate (8.1 %), in excellent agreement with PFG-NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. Also we calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. Pore space images, left and velocity distributions, right (Yang and Boek, 2013)
Processing Digital Imagery to Enhance Perceptions of Realism
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2003-01-01
Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land s retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA s remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
Wang, Kun-Ching
2015-01-01
The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolutions texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for characterization and classification of different emotions in a speech signal. The motivation is that we have to consider emotions have different intensity values in different frequency bands. In terms of human visual perceptual, the texture property on multi-resolution of emotional speech spectrogram should be a good feature set for emotion classification in speech. Furthermore, the multi-resolution analysis on texture can give a clearer discrimination between each emotion than uniform-resolution analysis on texture. In order to provide high accuracy of emotional discrimination especially in real-life, an acoustic activity detection (AAD) algorithm must be applied into the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, in this paper make use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and the state-of-the-art features, the MRTII features also can improve the correct classification rates of proposed systems among different language databases. Experimental results show that the proposed MRTII-based feature information inspired by human visual perception of the spectrogram image can provide significant classification for real-life emotional recognition in speech. PMID:25594590
Prasoon, Adhish; Petersen, Kersten; Igel, Christian; Lauze, François; Dam, Erik; Nielsen, Mads
2013-01-01
Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study.
A combined method for correlative 3D imaging of biological samples from macro to nano scale
NASA Astrophysics Data System (ADS)
Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Antonopoulos, Georgios C.; Knudsen, Lars; Wrede, Christoph; Izykowski, Nicole; Grothausmann, Roman; Jonigk, Danny; Ochs, Matthias; Ripken, Tammo; Kühnel, Mark P.; Meyer, Heiko
2016-10-01
Correlative analysis requires examination of a specimen from macro to nano scale as well as applicability of analytical methods ranging from morphological to molecular. Accomplishing this with one and the same sample is laborious at best, due to deformation and biodegradation during measurements or intermediary preparation steps. Furthermore, data alignment using differing imaging techniques turns out to be a complex task, which considerably complicates the interconnection of results. We present correlative imaging of the accessory rat lung lobe by combining a modified Scanning Laser Optical Tomography (SLOT) setup with a specially developed sample preparation method (CRISTAL). CRISTAL is a resin-based embedding method that optically clears the specimen while allowing sectioning and preventing degradation. We applied and correlated SLOT with Multi Photon Microscopy, histological and immunofluorescence analysis as well as Transmission Electron Microscopy, all in the same sample. Thus, combining CRISTAL with SLOT enables the correlative utilization of a vast variety of imaging techniques.
Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery
NASA Astrophysics Data System (ADS)
Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.
2018-05-01
In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaption of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate if the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field image poses significant difficulties in exploiting its advantage. To mitigate the dependency of accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, which also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light field applications such as depth estimation.
Morphological rational multi-scale algorithm for color contrast enhancement
NASA Astrophysics Data System (ADS)
Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.
2010-01-01
Contrast enhancement main goal consists on improving the image visual appearance but also it is used for providing a transformed image in order to segment it. In mathematical morphology several works have been derived from the framework theory for contrast enhancement proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, as for example when strong highlights and deep shadows appear in the same image, the proposed morphological methods do not allow the enhancement. In this work, a rational multi-scale method, which uses a class of morphological connected filters called filters by reconstruction, is proposed. Granulometry is used by finding the more accurate scales for filters and with the aim of avoiding the use of other little significant scales. The CIE-u'v'Y' space was used to introduce our results since it takes into account the Weber's Law and by avoiding the creation of new colors it permits to modify the luminance values without affecting the hue. The luminance component ('Y) is enhanced separately using the proposed method, next it is used for enhancing the chromatic components (u', v') by means of the center of gravity law of color mixing.
NASA Astrophysics Data System (ADS)
Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.
2015-12-01
The use of UAV in the application of photogrammetry to obtain cover images and achieve the main objectives of the photogrammetric mapping has been a boom in the region. The images taken from REGGIOLO region in the province of, Italy Reggio -Emilia by UAV with non-metric camera Canon Ixus and with an average height of 139.42 meters were used to classify urban feature. Using the software provided SURE and cover images of the study area, to produce dense point cloud, DSM and Artvqvtv spatial resolution of 10 cm was prepared. DTM area using Adaptive TIN filtering algorithm was developed. NDSM area was prepared with using the difference between DSM and DTM and a separate features in the image stack. In order to extract features, using simultaneous occurrence matrix features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation for each of the RGB band image was used Orthophoto area. Classes used to classify urban problems, including buildings, trees and tall vegetation, grass and vegetation short, paved road and is impervious surfaces. Class consists of impervious surfaces such as pavement conditions, the cement, the car, the roof is stored. In order to pixel-based classification and selection of optimal features of classification was GASVM pixel basis. In order to achieve the classification results with higher accuracy and spectral composition informations, texture, and shape conceptual image featureOrthophoto area was fencing. The segmentation of multi-scale segmentation method was used.it belonged class. Search results using the proposed classification of urban feature, suggests the suitability of this method of classification complications UAV is a city using images. The overall accuracy and kappa coefficient method proposed in this study, respectively, 47/93% and 84/91% was.
Multi-platform observations on melt pond in Arctic summer 2010
NASA Astrophysics Data System (ADS)
Wang, Y.; Huang, W.; Lu, P.; Li, Z.
2011-12-01
Melt ponds play an important role in sea ice surface albedo and further affect the heat budget between ice-air interface. The overall reductions of Arctic sea ice extend and thickness especially in recent years is considered to be enhanced partly by the melt ponds, and understanding of melt ponds on how they change the heat and mass balance of sea ice through the ice surface albedo decrease is urgently required. Although satellite remote sensing is a general tool to observe sea ice surface features on a large scale, the small scale information with higher spatial and temporal resolution is more helpful to understand the physical mechanism in the evolution of melt ponds. Arctic summer in 2010 is special because of an obvious trans-polar melting, during which the multi-year ice in the central Arctic was seriously melted, and formed a trans-polar zone with ice concentration less than 80% stretching from the Chukchi Sea to the Greenland Sea. It provided a fantastic opportunity to observe melt ponds especially at the high latitude. The Fourth Chinese National Arctic Research Expedition in 2010 (CHINARE-2010) was carried out from July 1 to September 20, 2010. As R/V Xuelong sailing in the ice-infested seas, a multi-platform observation was conducted to investigate the evolution of melt ponds on Arctic sea ice. Among which, aerial photography provided a downward-looking snapshot of the ice surface by using the camera installed on a helicopter, and melt pond information on a 100-meter scale can be obtained. Shipboard photography gave an inclined inspection on the ice conditions beside the ship using the camera installed on the vessel, and melt pond information on a 10-meter scale can be determined. Ground-based photography was similar to the shipboard photography, but the camera with tilt angle was installed on the top of a vertical lifting device fixed on the ice, and melt pond information on a 1-meter scale can be observed. Over 10,000 sea ice images from different platforms were collected during the cruise, and the survey area covered the regions 140°W-180°W, 70°N-88°N. An image processing technique based on difference in colors of the surface features was used to divide each image into three components: snow-covered ice floes, melt ponds and leads. And then geometric features of melt ponds, such as area, perimeter, and roundness, could be extracted from the aerial images. These data can enrich our knowledge on the distribution of melt pond on different spatial scale, especially those in the high latitude regions where summer melting was never so serious in previous years.
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features including sensor's biased data, each tessera in the high-density point cloud from the 3D captured complex mosaics of Germigny-des-prés (France) is segmented via a colour multi-scale abstraction-based featuring extracting connectivity. A 2D surface and outline polygon of each tessera is generated by a RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tesserae based on their size, surface, shape, material properties and their neighbour's class. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM).
Gao, Hao; Yu, Hengyong; Osher, Stanley; Wang, Ge
2011-11-01
We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.
Multi-scale radiomic analysis of sub-cortical regions in MRI related to autism, gender and age
NASA Astrophysics Data System (ADS)
Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew
2017-03-01
We propose using multi-scale image textures to investigate links between neuroanatomical regions and clinical variables in MRI. Texture features are derived at multiple scales of resolution based on the Laplacian-of-Gaussian (LoG) filter. Three quantifier functions (Average, Standard Deviation and Entropy) are used to summarize texture statistics within standard, automatically segmented neuroanatomical regions. Significance tests are performed to identify regional texture differences between ASD vs. TDC and male vs. female groups, as well as correlations with age (corrected p < 0.05). The open-access brain imaging data exchange (ABIDE) brain MRI dataset is used to evaluate texture features derived from 31 brain regions from 1112 subjects including 573 typically developing control (TDC, 99 females, 474 males) and 539 Autism spectrum disorder (ASD, 65 female and 474 male) subjects. Statistically significant texture differences between ASD vs. TDC groups are identified asymmetrically in the right hippocampus, left choroid-plexus and corpus callosum (CC), and symmetrically in the cerebellar white matter. Sex-related texture differences in TDC subjects are found in primarily in the left amygdala, left cerebellar white matter, and brain stem. Correlations between age and texture in TDC subjects are found in the thalamus-proper, caudate and pallidum, most exhibiting bilateral symmetry.
NASA Astrophysics Data System (ADS)
Senarathna, Janaka; Hadjiabadi, Darian; Gil, Stacy; Thakor, Nitish V.; Pathak, Arvind P.
2017-02-01
Different brain regions exhibit complex information processing even at rest. Therefore, assessing temporal correlations between regions permits task-free visualization of their `resting state connectivity'. Although functional MRI (fMRI) is widely used for mapping resting state connectivity in the human brain, it is not well suited for `microvascular scale' imaging in rodents because of its limited spatial resolution. Moreover, co-registered cerebral blood flow (CBF) and total hemoglobin (HbT) data are often unavailable in conventional fMRI experiments. Therefore, we built a customized system that combines laser speckle contrast imaging (LSCI), intrinsic optical signal (IOS) imaging and fluorescence imaging (FI) to generate multi-contrast functional connectivity maps at a spatial resolution of 10 μm. This system comprised of three illumination sources: a 632 nm HeNe laser (for LSCI), a 570 nm ± 5 nm filtered white light source (for IOS), and a 473 nm blue laser (for FI), as well as a sensitive CCD camera operating at 10 frames per second for image acquisition. The acquired data enabled visualization of changes in resting state neurophysiology at microvascular spatial scales. Moreover, concurrent mapping of CBF and HbT-based temporal correlations enabled in vivo mapping of how resting brain regions were linked in terms of their hemodynamics. Additionally, we complemented this approach by exploiting the transit times of a fluorescent tracer (Dextran-FITC) to distinguish arterial from venous perfusion. Overall, we demonstrated the feasibility of wide area mapping of resting state connectivity at microvascular resolution and created a new toolbox for interrogating neurovascular function.
Simultaneous single-shot readout of multi-qubit circuits using a traveling-wave parametric amplifier
NASA Astrophysics Data System (ADS)
O'Brien, Kevin
Observing and controlling the state of ever larger quantum systems is critical for advancing quantum computation. Utilizing a Josephson traveling wave parametric amplifier (JTWPA), we demonstrate simultaneous multiplexed single shot readout of 10 transmon qubits in a planar architecture. We employ digital image sideband rejection to eliminate noise at the image frequencies. We quantify crosstalk and infidelity due to simultaneous readout and control of multiple qubits. Based on current amplifier technology, this approach can scale to simultaneous readout of at least 20 qubits. This work was supported by the Army Research Office.
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin
2012-02-01
Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was done based on six prostate landmarks in both T2w and DW images identified by a radiologist. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results by using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.
NASA Astrophysics Data System (ADS)
Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan
2017-03-01
Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.
Enhanced Imaging of Specific Cell-Surface Glycosylation Based on Multi-FRET.
Yuan, Baoyin; Chen, Yuanyuan; Sun, Yuqiong; Guo, Qiuping; Huang, Jin; Liu, Jianbo; Meng, Xiangxian; Yang, Xiaohai; Wen, Xiaohong; Li, Zenghui; Li, Lie; Wang, Kemin
2018-05-15
Cell-surface glycosylation contains abundant biological information that reflects cell physiological state, and it is of great value to image cell-surface glycosylation to elucidate its functions. Here we present a hybridization chain reaction (HCR)-based multifluorescence resonance energy transfer (multi-FRET) method for specific imaging of cell-surface glycosylation. By installing donors through metabolic glycan labeling and acceptors through aptamer-tethered nanoassemblies on the same glycoconjugate, intramolecular multi-FRET occurs due to near donor-acceptor distance. Benefiting from amplified effect and spatial flexibility of the HCR nanoassemblies, enhanced multi-FRET imaging of specific cell-surface glycosylation can be obtained. With this HCR-based multi-FRET method, we achieved obvious contrast in imaging of protein-specific GalNAcylation on 7211 cell surfaces. In addition, we demonstrated the general applicability of this method by visualizing the protein-specific sialylation on CEM cell surfaces. Furthermore, the expression changes of CEM cell-surface protein-specific sialylation under drug treatment was accurately monitored. This developed imaging method may provide a powerful tool in researching glycosylation functions, discovering biomarkers, and screening drugs.
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina
2010-07-02
This study was designed to demonstrate robust performance of the novel dependent component analysis (DCA)-based approach to demarcation of the basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. Used filtering-based DCA approach represents an extension of the independent component analysis (ICA) and is necessary in order to account for statistical dependence that is induced by spectral similarity between the BCC and surrounding tissue. This generates weak edges what represents a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios where intensity of the fluorescent image has been varied almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
NASA Astrophysics Data System (ADS)
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
Large scale track analysis for wide area motion imagery surveillance
NASA Astrophysics Data System (ADS)
van Leeuwen, C. J.; van Huis, J. R.; Baan, J.
2016-10-01
Wide Area Motion Imagery (WAMI) enables image based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources, becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high quality track information of more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows to quickly obtain only a part, or a sub-sampling of the original high resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location based queries allow a user to select tracks around a particular region of interest such as landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle, is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are numerous variations in viewpoint and scale of skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connection. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain final high-resolution output. Our proposed method shows the superiority over the state-of-the-art approaches based on the public available 2016 and 2017 skin lesion challenge dataset and achieves the accuracy of 96.0% and 93.9%, which obtained a 6.0% and 1.2% increase over the traditional method, respectively. By utilizing Dense Deconvolution Net, the average time for processing one testing images with our proposed framework was 0.253 s.
Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien
2015-08-01
Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significantly associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple images modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significantly improvement when compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-01-01
In target detection of optical remote sensing images, two main obstacles for aircraft target detection are how to extract the candidates in complex gray-scale-multi background and how to confirm the targets in case the target shapes are deformed, irregular or asymmetric, such as that caused by natural conditions (low signal-to-noise ratio, illumination condition or swaying photographing) and occlusion by surrounding objects (boarding bridge, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal variance between-cluster algorithm (Otsu’s algorithm) and region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of targets extraction in complex and gray-scale-multi backgrounds. Secondly, based on inherent shapes and prominent corners, aircrafts are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structure features, which describe the proportion of targets part in the fragment to the whole fragment and the proportion of fragment to the whole hull, are identified to judge whether the targets are true or not. Experimental results show that TRSF algorithm could improve extraction accuracy in complex background, and that it is faster than some traditional active contours algorithms. The CCHS is effective to suppress the detection difficulties caused by the irregular shape. PMID:28481260
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
Automatic co-registration of 3D multi-sensor point clouds
NASA Astrophysics Data System (ADS)
Persad, Ravi Ancil; Armenakis, Costas
2017-08-01
We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L
2017-02-01
Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multi-atlas-based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.
Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs
NASA Astrophysics Data System (ADS)
Chen, H. R.; Tseng, Y. H.
2016-06-01
Historical aerial photographs directly provide good evidences of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most of historical aerial images have not been registered since there are no GPS or IMU data for orientation assisting in the past. In our research, we developed an automatic process of matching historical aerial images by SIFT (Scale Invariant Feature Transform) for handling the great quantity of images by computer vision. SIFT is one of the most popular method of image feature extracting and matching. This algorithm extracts extreme values in scale space into invariant image features, which are robust to changing in rotation scale, noise, and illumination. We also use RANSAC (Random sample consensus) to remove outliers, and obtain good conjugated points between photographs. Finally, we manually add control points for registration through least square adjustment based on collinear equation. In the future, we can use image feature points of more photographs to build control image database. Every new image will be treated as query image. If feature points of query image match the features in database, it means that the query image probably is overlapped with control images.With the updating of database, more and more query image can be matched and aligned automatically. Other research about multi-time period environmental changes can be investigated with those geo-referenced temporal spatial data.
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT) image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure of an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least square support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high dimensional training data set.
Research on segmentation based on multi-atlas in brain MR image
NASA Astrophysics Data System (ADS)
Qian, Yuejing
2018-03-01
Accurate segmentation of specific tissues in brain MR image can be effectively achieved with the multi-atlas-based segmentation method, and the accuracy mainly depends on the image registration accuracy and fusion scheme. This paper proposes an automatic segmentation method based on the multi-atlas for brain MR image. Firstly, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for the refinement. Then In the label fusion, we proposed a new algorithm to detect the abnormal sparse patch and simultaneously abandon the corresponding abnormal sparse coefficients, this method is made based on the remaining sparse coefficients combined with the multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM) and majority voting method (MV). Based on our experimental results, the proposed method is efficient in the brain MR images segmentation compared with MV, Nonlocal-PBM, and Sparse-PBM methods.
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument, In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P
2016-03-24
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
A fast color image enhancement algorithm based on Max Intensity Channel
Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui
2014-01-01
In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs well for images with high illumination variations than other methods. Further comparisons of images from National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details. PMID:25110395
A fast color image enhancement algorithm based on Max Intensity Channel.
Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui
2014-03-30
In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs well for images with high illumination variations than other methods. Further comparisons of images from National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details.
A fast color image enhancement algorithm based on Max Intensity Channel
NASA Astrophysics Data System (ADS)
Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui
2014-03-01
In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs well for images with high illumination variations than other methods. Further comparisons of images from National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details.
Retinex Image Processing: Improved Fidelity To Direct Visual Observation
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.
1996-01-01
Recorded color images differ from direct human viewing by the lack of dynamic range compression and color constancy. Research is summarized which develops the center/surround retinex concept originated by Edwin Land through a single scale design to a multi-scale design with color restoration (MSRCR). The MSRCR synthesizes dynamic range compression, color constancy, and color rendition and, thereby, approaches fidelity to direct observation.
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent deterioration severities of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. Fuzzy C-Means (FCM) algorithm is employed to perform state division for Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real time data, a dynamic prediction method is introduced into Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment is designed based on a Bently-RK4 rotor testbed to validate the dynamic multi-scale Markov model, experimental results illustrate the effectiveness of the methodology.
On a Fundamental Evaluation of a Uav Equipped with a Multichannel Laser Scanner
NASA Astrophysics Data System (ADS)
Nakano, K.; Suzuki, H.; Omori, K.; Hayakawa, K.; Kurodai, M.
2018-05-01
Unmanned aerial vehicles (UAVs), which have been widely used in various fields such as archaeology, agriculture, mining, and construction, can acquire high-resolution images at the millimetre scale. It is possible to obtain realistic 3D models using high-overlap images and 3D reconstruction software based on computer vision technologies such as Structure from Motion and Multi-view Stereo. However, it remains difficult to obtain key points from surfaces with limited texture such as new asphalt or concrete, or from areas like forests that may be concealed by vegetation. A promising method for conducting aerial surveys is through the use of UAVs equipped with laser scanners. We conducted a fundamental performance evaluation of the Velodyne VLP-16 multi-channel laser scanner equipped to a DJI Matrice 600 Pro UAV at a construction site. Here, we present our findings with respect to both the geometric and radiometric aspects of the acquired data.
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that are closely interfacing to each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra- subject variation among brain structures. The segmentation is obtained by fitting the model into a subject image, followed by label fusion process. Our result shows that the proposed method resulted in greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
Renjith, Arokia; Manjula, P; Mohan Kumar, P
2015-01-01
Brain tumour is one of the main causes for an increase in transience among children and adults. This paper proposes an improved method based on Magnetic Resonance Imaging (MRI) brain image classification and image segmentation approach. Automated classification is encouraged by the need of high accuracy when dealing with a human life. The detection of the brain tumour is a challenging problem, due to high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for detection of brain tumours, as they are used in soft tissue determinations. First of all, image pre-processing is used to enhance the image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse texture of an image. Feature extraction extracts features from an image using gray-level co-occurrence matrix (GLCM). Then, the Neuro-Fuzzy technique is used to classify the stages of brain tumour as benign, malignant or normal based on texture features. Finally, tumour location is detected using Otsu thresholding. The classifier performance is evaluated based on classification accuracies. The simulated results show that the proposed classifier provides better accuracy than previous method.
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
Dark and Bright Terrains of Pluto
2015-07-10
These circular maps shows the distribution of Pluto's dark and bright terrains as revealed by NASA's New Horizons mission prior to July 4, 2015. Each map is an azimuthal equidistant projection centered on the north pole, with latitude and longitude indicated. Both a gray-scale and color version are shown. The gray-scale version is based on 7 days of panchromatic imaging from the Long Range Reconnaissance Imager (LORRI), whereas the color version uses the gray-scale base and incorporates lower-resolution color information from the Multi-spectral Visible Imaging Camera (MVIC), part of the Ralph instrument. The color version is also shown in a simple cylindrical projection in PIA19700. In these maps, the polar bright terrain is surrounded by a somewhat darker polar fringe, one whose latitudinal position varies strongly with longitude. Especially striking are the much darker regions along the equator. A broad dark swath ("the whale") stretches along the equator from approximately 20 to 160 degrees of longitude. Several dark patches appear in a regular sequence centered near 345 degrees of longitude. A spectacular bright region occupies Pluto's mid-latitudes near 180 degrees of longitude, and stretches southward over the equator. New Horizons' closest approach to Pluto will occur near this longitude, which will permit high-resolution visible imaging and compositional mapping of these various regions. http://photojournal.jpl.nasa.gov/catalog/PIA19706
NASA Astrophysics Data System (ADS)
Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger
2015-04-01
The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory of comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles <85°, emission angles <45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set-up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least squares adjustment. Based on the statistical analysis of adjustments we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time. Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain because standard monitors are inherently Low- Dynamic Range (LDR) devices with maximally two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing way of the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and background in the longwave (LW) Infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
Multi-scale hippocampal parcellation improves atlas-based segmentation accuracy
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; McHugo, Maureen; Heckers, Stephan; Landman, Bennett A.
2017-02-01
Known for its distinct role in memory, the hippocampus is one of the most studied regions of the brain. Recent advances in magnetic resonance imaging have allowed for high-contrast, reproducible imaging of the hippocampus. Typically, a trained rater takes 45 minutes to manually trace the hippocampus and delineate the anterior from the posterior segment at millimeter resolution. As a result, there has been a significant desire for automated and robust segmentation of the hippocampus. In this work we use a population of 195 atlases based on T1-weighted MR images with the left and right hippocampus delineated into the head and body. We initialize the multi-atlas segmentation to a region directly around each lateralized hippocampus to both speed up and improve the accuracy of registration. This initialization allows for incorporation of nearly 200 atlases, an accomplishment which would typically involve hundreds of hours of computation per target image. The proposed segmentation results in a Dice similiarity coefficient over 0.9 for the full hippocampus. This result outperforms a multi-atlas segmentation using the BrainCOLOR atlases (Dice 0.85) and FreeSurfer (Dice 0.75). Furthermore, the head and body delineation resulted in a Dice coefficient over 0.87 for both structures. The head and body volume measurements also show high reproducibility on the Kirby 21 reproducibility population (R2 greater than 0.95, p < 0.05 for all structures). This work signifies the first result in an ongoing work to develop a robust tool for measurement of the hippocampus and other temporal lobe structures.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H
2012-12-01
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
Future Directions in Medical Physics: Models, Technology, and Translation to Medicine
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey
The application of physics in medicine has been integral to major advances in diagnostic and therapeutic medicine. Two primary areas represent the mainstay of medical physics research in the last century: in radiation therapy, physicists have propelled advances in conformal radiation treatment and high-precision image guidance; and in diagnostic imaging, physicists have advanced an arsenal of multi-modality imaging that includes CT, MRI, ultrasound, and PET as indispensible tools for noninvasive screening, diagnosis, and assessment of treatment response. In addition to their role in building such technologically rich fields of medicine, physicists have also become integral to daily clinical practice in these areas. The future suggests new opportunities for multi-disciplinary research bridging physics, biology, engineering, and computer science, and collaboration in medical physics carries a strong capacity for identification of significant clinical needs, access to clinical data, and translation of technologies to clinical studies. In radiation therapy, for example, the extraction of knowledge from large datasets on treatment delivery, image-based phenotypes, genomic profile, and treatment outcome will require innovation in computational modeling and connection with medical physics for the curation of large datasets. Similarly in imaging physics, the demand for new imaging technology capable of measuring physical and biological processes over orders of magnitude in scale (from molecules to whole organ systems) and exploiting new contrast mechanisms for greater sensitivity to molecular agents and subtle functional / morphological change will benefit from multi-disciplinary collaboration in physics, biology, and engineering. Also in surgery and interventional radiology, where needs for increased precision and patient safety meet constraints in cost and workflow, development of new technologies for imaging, image registration, and robotic assistance can leverage collaboration in physics, biomedical engineering, and computer science. In each area, there is major opportunity for multi-disciplinary collaboration with medical physics to accelerate the translation of such technologies to clinical use. Research supported by the National Institutes of Health, Siemens Healthcare, and Carestream Health.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is firstly employed to decompose each of the source images into a series of high frequency subbands and one low frequency subband. To improve the fusion performance we designed two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are developed based on the fact that the human visual system (HVS) percept the image quality mainly according to its some low-level features. Then, the selection principles of different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT on these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over the state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
Hansen, Matt; Stehman, Steve; Loveland, Tom; Vogelmann, Jim; Cochrane, Mark
2009-01-01
Quantifying rates of forest-cover change is important for improved carbon accounting and climate change modeling, management of forestry and agricultural resources, and biodiversity monitoring. A practical solution to examining trends in forest cover change at global scale is to employ remotely sensed data. Satellite-based monitoring of forest cover can be implemented consistently across large regions at annual and inter-annual intervals. This research extends previous research on global forest-cover dynamics and land-cover change estimation to establish a robust, operational forest monitoring and assessment system. The approach integrates both MODIS and Landsat data to provide timely biome-scale forest change estimation. This is achieved by using annual MODIS change indicator maps to stratify biomes into low, medium and high change categories. Landsat image pairs can then be sampled within these strata and analyzed for estimating area of forest cleared.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of multi-spectral fluorescent image for tumor demarcation is formulated exploiting tensorial structure of the image. First contribution of the paper is identification of the matrix of spectral responses and 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of 3D image tensor. Second contribution of the paper is clustering based estimation of the number of the materials present in the image as well as matrix of their spectral profiles. 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure that is lost, due to vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of the skin tumor (basal cell carcinoma).
LSST Resources for the Community
NASA Astrophysics Data System (ADS)
Jones, R. Lynne
2011-01-01
LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees of area sampled every few days, throughout a total of ten years of time -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels on petabyte-scale could also be conducted at the DAC, using compute resources allocated in a similar manner to a TAC. DAC resources will be available to all individuals in member countries or institutes and LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.
Al-Kadi, Omar S; Chung, Daniel Y F; Carlisle, Robert C; Coussios, Constantin C; Noble, J Alison
2015-04-01
Intensity variations in image texture can provide powerful quantitative information about physical properties of biological tissue. However, tissue patterns can vary according to the utilized imaging system and are intrinsically correlated to the scale of analysis. In the case of ultrasound, the Nakagami distribution is a general model of the ultrasonic backscattering envelope under various scattering conditions and densities where it can be employed for characterizing image texture, but the subtle intra-heterogeneities within a given mass are difficult to capture via this model as it works at a single spatial scale. This paper proposes a locally adaptive 3D multi-resolution Nakagami-based fractal feature descriptor that extends Nakagami-based texture analysis to accommodate subtle speckle spatial frequency tissue intensity variability in volumetric scans. Local textural fractal descriptors - which are invariant to affine intensity changes - are extracted from volumetric patches at different spatial resolutions from voxel lattice-based generated shape and scale Nakagami parameters. Using ultrasound radio-frequency datasets we found that after applying an adaptive fractal decomposition label transfer approach on top of the generated Nakagami voxels, tissue characterization results were superior to the state of art. Experimental results on real 3D ultrasonic pre-clinical and clinical datasets suggest that describing tumor intra-heterogeneity via this descriptor may facilitate improved prediction of therapy response and disease characterization. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology
Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted
2014-01-01
The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhances object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we used the refined object identification produced in the first variant to perform the standard MIA with exact dilation radius as multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure during tissue development to near-infrared light (NIR, 670 nm), and the lack of adverse effects due to exposure to NIR light. PMID:25071966
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
NASA Astrophysics Data System (ADS)
Ghanta, Sindhu; Shahini Shamsabadi, Salar; Dy, Jennifer; Wang, Ming; Birken, Ralf
2015-04-01
Around 3,000,000 million vehicle miles are annually traveled utilizing the US transportation systems alone. In addition to the road traffic safety, maintaining the road infrastructure in a sound condition promotes a more productive and competitive economy. Due to the significant amounts of financial and human resources required to detect surface cracks by visual inspection, detection of these surface defects are often delayed resulting in deferred maintenance operations. This paper introduces an automatic system for acquisition, detection, classification, and evaluation of pavement surface cracks by unsupervised analysis of images collected from a camera mounted on the rear of a moving vehicle. A Hessian-based multi-scale filter has been utilized to detect ridges in these images at various scales. Post-processing on the extracted features has been implemented to produce statistics of length, width, and area covered by cracks, which are crucial for roadway agencies to assess pavement quality. This process has been realized on three sets of roads with different pavement conditions in the city of Brockton, MA. A ground truth dataset labeled manually is made available to evaluate this algorithm and results rendered more than 90% segmentation accuracy demonstrating the feasibility of employing this approach at a larger scale.
Liu, Jin-xinp; Lu, Heng; Zeng, Yan; Yue, Jian-wei; Meng, Fan-yun; Zhang, Yi-guang
2012-09-01
Resources survey of traditional Chinese medicine and reserves estimation are found to be the most important issues for the protection and utilization of traditional Chinese medicine resources, this paper used multi-spatial resolution remote sensing images (RS) , geographic information systems (GIS) and global positioning system (GPS) , to establish Scutellaria resources survey of 3S data platform. Combined with the traditional field survey methods, small-scale habitat types were established based on different skullcap reserve estimation model, which can estimate reserves of the wild Scutellaria in Beijing-Tianjin-Hebei region and improve the estimation accuracy. It can provide an important parameter for the fourth national survey of traditional Chinese medicine resources and traditional Chinese medicine reserves estimates based on 3S technology by multiple spatial scales model.
Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization
NASA Astrophysics Data System (ADS)
Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li
2018-04-01
Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithms, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve the problem, this paper designs a novel fuzzy C-mean clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization. The parameter λ can adjust the weights of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering cent. Two different experimental results show that the novel fuzzy C-means approach has an efficient performance and computational time while segmenting images by different type of noises.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dengwang; Wang, Qinfen; Li, H
Purpose: The purpose of this research is studying tumor heterogeneity of the primary and lymphoma by using multi-scale texture analysis with PET-CT images, where the tumor heterogeneity is expressed by texture features. Methods: Datasets were collected from 12 lung cancer patients, and both of primary and lymphoma tumors were detected with all these patients. All patients underwent whole-body 18F-FDG PET/CT scan before treatment.The regions of interest (ROI) of primary and lymphoma tumor were contoured by experienced clinical doctors. Then the ROI of primary and lymphoma tumor is extracted automatically by using Matlab software. According to the geometry size of contourmore » structure, the images of tumor are decomposed by multi-scale method.Wavelet transform was performed on ROI structures within images by L layers sampling, and then wavelet sub-bands which have the same size of the original image are obtained. The number of sub-bands is 3L+1.The gray level co-occurrence matrix (GLCM) is calculated within different sub-bands, thenenergy, inertia, correlation and gray in-homogeneity were extracted from GLCM.Finally, heterogeneity statistical analysis was studied for primary and lymphoma tumor using the texture features. Results: Energy, inertia, correlation and gray in-homogeneity are calculated with our experiments for heterogeneity statistical analysis.Energy for primary and lymphomatumor is equal with the same patient, while gray in-homogeneity and inertia of primaryare 2.59595±0.00855, 0.6439±0.0007 respectively. Gray in-homogeneity and inertia of lymphoma are 2.60115±0.00635, 0.64435±0.00055 respectively. The experiments showed that the volume of lymphoma is smaller than primary tumor, but thegray in-homogeneity and inertia were higher than primary tumor with the same patient, and the correlation with lymphoma tumors is zero, while the correlation with primary tumor isslightly strong. Conclusion: This studying showed that there were effective heterogeneity differences between primary and lymphoma tumor by multi-scale image texture analysis. This work is supported by National Natural Science Foundation of China (No. 61201441), Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), Jinan youth science and technology star (No.20120109)« less
Imaging complex nutrient dynamics in mycelial networks.
Fricker, M D; Lee, J A; Bebber, D P; Tlalka, M; Hynes, J; Darrah, P R; Watkinson, S C; Boddy, L
2008-08-01
Transport networks are vital components of multi-cellular organisms, distributing nutrients and removing waste products. Animal cardiovascular and respiratory systems, and plant vasculature, are branching trees whose architecture is thought to determine universal scaling laws in these organisms. In contrast, the transport systems of many multi-cellular fungi do not fit into this conceptual framework, as they have evolved to explore a patchy environment in search of new resources, rather than ramify through a three-dimensional organism. These fungi grow as a foraging mycelium, formed by the branching and fusion of threadlike hyphae, that gives rise to a complex network. To function efficiently, the mycelial network must both transport nutrients between spatially separated source and sink regions and also maintain its integrity in the face of continuous attack by mycophagous insects or random damage. Here we review the development of novel imaging approaches and software tools that we have used to characterise nutrient transport and network formation in foraging mycelia over a range of spatial scales. On a millimetre scale, we have used a combination of time-lapse confocal imaging and fluorescence recovery after photobleaching to quantify the rate of diffusive transport through the unique vacuole system in individual hyphae. These data then form the basis of a simulation model to predict the impact of such diffusion-based movement on a scale of several millimetres. On a centimetre scale, we have used novel photon-counting scintillation imaging techniques to visualize radiolabel movement in small microcosms. This approach has revealed novel N-transport phenomena, including rapid, preferential N-resource allocation to C-rich sinks, induction of simultaneous bi-directional transport, abrupt switching between different pre-existing transport routes, and a strong pulsatile component to transport in some species. Analysis of the pulsatile transport component using Fourier techniques shows that as the colony forms, it self-organizes into well demarcated domains that are identifiable by differences in the phase relationship of the pulses. On the centimetre to metre scale, we have begun to use techniques borrowed from graph theory to characterize the development and dynamics of the network, and used these abstracted network models to predict the transport characteristics, resilience, and cost of the network.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle system (UAVs) images will be presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost, time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by UAV. Image topology map can significantly reduce the running time of feature matching by limiting the combination of images. A high-resolution digital surface model of the study area is produced base on UAV point clouds by constructing the triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information at large scales mapping, especially suitable for rapid response and precise modelling in disaster emergency.
NASA Astrophysics Data System (ADS)
Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan
2017-11-01
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the groundtruth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: Foreground object segmentation and object proposal detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensormore » level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.« less
Dual tree fractional quaternion wavelet transform for disparity estimation.
Kumar, Sanoj; Kumar, Sanjeev; Sukavanam, Nagarajan; Raman, Balasubramanian
2014-03-01
This paper proposes a novel phase based approach for computing disparity as the optical flow from the given pair of consecutive images. A new dual tree fractional quaternion wavelet transform (FrQWT) is proposed by defining the 2D Fourier spectrum upto a single quadrant. In the proposed FrQWT, each quaternion wavelet consists of a real part (a real DWT wavelet) and three imaginary parts that are organized according to the quaternion algebra. First two FrQWT phases encode the shifts of image features in the absolute horizontal and vertical coordinate system, while the third phase has the texture information. The FrQWT allowed a multi-scale framework for calculating and adjusting local disparities and executing phase unwrapping from coarse to fine scales with linear computational efficiency. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both forward direction and cross directions in multi-source interior CT. The physics model was integrated to an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images fast converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the region-of-interests (ROIs) was reduced to 46 HU in numerical phantom dataset and 48 HU in physical phantom dataset respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose Computer Tomography (CT) images is important to study pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicitly segmenting them using intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of scale used to calculate the Hessian matrix; two calculating equivalent diameter from the crosssection area obtained by thresholding the intensity and vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order or below the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessels diameter.
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Base on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software was seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include that of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Base on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software was seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include that of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.
Quantitative analysis of nano-pore geomaterials and representative sampling for digital rock physics
NASA Astrophysics Data System (ADS)
Yoon, H.; Dewers, T. A.
2014-12-01
Geomaterials containing nano-pores (e.g., shales and carbonate rocks) have become increasingly important for emerging problems such as unconventional gas and oil resources, enhanced oil recovery, and geologic storage of CO2. Accurate prediction of coupled geophysical and chemical processes at the pore scale requires realistic representation of pore structure and topology. This is especially true for chalk materials, where pore networks are small and complex, and require characterization at sub-micron scale. In this work, we apply laser scanning confocal microscopy to characterize pore structures and microlithofacies at micron- and greater scales and dual focused ion beam-scanning electron microscopy (FIB-SEM) for 3D imaging of nanometer-to-micron scale microcracks and pore distributions. With imaging techniques advanced for nano-pore characterization, a problem of scale with FIB-SEM images is how to take nanometer scale information and apply it to the thin-section or larger scale. In this work, several texture characterization techniques including graph-based spectral segmentation, support vector machine, and principal component analysis are applied for segmentation clusters represented by 1-2 FIB-SEM samples per each cluster. Geometric and topological properties are analyzed and lattice-Boltzmann method (LBM) is used to obtain permeability at several different scales. Upscaling of permeability to the Darcy scale (e.g., the thin-section scale) with image dataset will be discussed with emphasis on understanding microfracture-matrix interaction, representative volume for FIB-SEM sampling, and multiphase flow and reactive transport. Funding from the DOE Basic Energy Sciences Geosciences Program is gratefully acknowledged. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known Physics, Biology, Chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve more than multiple scale, telescopic visualization of real and virtual information but also conducting thought experiments through AR. As a result of the consistency, this framework allows us to explore a large dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot-training as well as `make believe' entertainment industries in a user-friendly digital environment We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in a zoom-in-and-out, consistent, multi-scale approximations.
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
The diffractive membrane optical imaging system is an important development trend of ultra large aperture and lightweight space camera. However, related investigations on physics-based diffractive imaging degradation characteristics and corresponding image restoration methods are less studied. In this paper, the model of image quality degradation for the diffraction imaging system is first deduced mathematically based on diffraction theory and then the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, the solving approach of the equation with the multi-norm coexistence and multi-regularization parameters (prior's parameters) is presented. Subsequently, the space-variant PSF image restoration method for large aperture diffractive imaging system is proposed combined with block idea of isoplanatic region. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement including MTF enhancing, dispersion correcting, noise and artifact suppressing as well as image's detail preserving, and produce satisfactory visual quality. This can provide scientific basis for applications and possesses potential application prospects on future space applications of diffractive membrane imaging technology.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth of field extension in integral imaging by realizing the image fusion method on the multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focus plane across the scene. Simply applying an image fusion method on the elemental images holding rich parallax information does not work effectively because registration accuracy of images is the prerequisite for image fusion. To solve this problem an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of multi-focus elemental images can be exacted. The all-in focus elemental images are then generated by fusing the generalized elemental images using the block based fusion method. The experimental results demonstrate that the depth of field of synthetic aperture integral imaging system has been extended by realizing the generation method combined with the image fusion on multi-focus elemental images in synthetic aperture integral imaging system.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxian; Crawford, John W.; Flavel, Richard J.; Young, Iain M.
2016-10-01
The Lattice Boltzmann (LB) model and X-ray computed tomography (CT) have been increasingly used in combination over the past decade to simulate water flow and chemical transport at pore scale in porous materials. Because of its limitation in resolution and the hierarchical structure of most natural soils, the X-ray CT tomography can only identify pores that are greater than its resolution and treats other pores as solid. As a result, the so-called solid phase in X-ray images may in reality be a grey phase, containing substantial connected pores capable of conducing fluids and solute. Although modified LB models have been developed to simulate fluid flow in such media, models for solute transport are relatively limited. In this paper, we propose a LB model for simulating solute transport in binary soil images containing permeable solid phase. The model is based on the single-relaxation time approach and uses a modified partial bounce-back method to describe the resistance caused by the permeable solid phase to chemical transport. We derive the relationship between the diffusion coefficient and the parameter introduced in the partial bounce-back method, and test the model against analytical solution for movement of a pulse of tracer. We also validate it against classical finite volume method for solute diffusion in a simple 2D image, and then apply the model to a soil image acquired using X-ray tomography at resolution of 30 μm in attempts to analyse how the ability of the solid phase to diffuse solute at micron-scale affects the behaviour of the solute at macro-scale after a volumetric average. Based on the simulated results, we discuss briefly the danger in interpreting experimental results using the continuum model without fully understanding the pore-scale processes, as well as the potential of using pore-scale modelling and tomography to help improve the continuum models.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
Scaling dimensions in spectroscopy of soil and vegetation
NASA Astrophysics Data System (ADS)
Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.
2007-05-01
The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
MINC 2.0: A Flexible Format for Multi-Modal Images.
Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C
2016-01-01
It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000's the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.
NASA Astrophysics Data System (ADS)
Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z.
2018-04-01
Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. In this paper, multi-resolution wavelet transform and local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by wavelet transform at three levels. Then the high-frequency detail information in three directions (horizontal, vertical and diagonal) are extracted followed by a maximization operation to integrate the information in all directions. Afterward, a cross-scale operation is implemented to fuse different levels of information. Finally, local spatial autocorrelation statistic is introduced to enhance the saliency of built-up features and an adaptive threshold algorithm is used to achieve the detection of built-up areas. Experiments are conducted on ZY-3 and Quickbird panchromatic satellite images, and the results show that the proposed method is very effective for built-up area detection.
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classifaction
NASA Astrophysics Data System (ADS)
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
The increasing urbanization and industrialization have led to wetland losses in estuarine area of Mingjiang River over past three decades. There has been increasing attention given to produce wetland inventories using remote sensing and GIS technology. Due to inconsistency training site and training sample, traditionally pixel-based image classification methods can't achieve a comparable result within different organizations. Meanwhile, object-oriented image classification technique shows grate potential to solve this problem and Landsat moderate resolution remote sensing images are widely used to fulfill this requirement. Firstly, the standardized atmospheric correct, spectrally high fidelity texture feature enhancement was conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to get the appropriate parameters, using the top and down region merge algorithm from single pixel level, the optimal texture segmentation scale for different types of features is confirmed. Then, the segmented object is used as the classification unit to calculate the spectral information such as Mean value, Maximum value, Minimum value, Brightness value and the Normalized value. The Area, length, Tightness and the Shape rule of the image object Spatial features and texture features such as Mean, Variance and Entropy of image objects are used as classification features of training samples. Based on the reference images and the sampling points of on-the-spot investigation, typical training samples are selected uniformly and randomly for each type of ground objects. The spectral, texture and spatial characteristics of each type of feature in each feature layer corresponding to the range of values are used to create the decision tree repository. Finally, with the help of high resolution reference images, the random sampling method is used to conduct the field investigation, achieve an overall accuracy of 90.31 %, and the Kappa coefficient is 0.88. The classification method based on decision tree threshold values and rule set developed by the repository, outperforms the results obtained from the traditional methodology. Our decision tree repository and rule set based object-oriented classification technique was an effective method for producing comparable and consistency wetlands data set.
NASA Astrophysics Data System (ADS)
Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2015-03-01
The use of gradient information is well-known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multiobjective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradientonly algorithm, and a hybridization of these two. We evaluated the algorithms to register highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up optimization in the initial stages of optimization. However, allowing sufficient computational resources, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization for multiobjective optimization-based deformable image registration can indeed be beneficial
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarseto- fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
Toward GEOS-6, A Global Cloud System Resolving Atmospheric Model
NASA Technical Reports Server (NTRS)
Putman, William M.
2010-01-01
NASA is committed to observing and understanding the weather and climate of our home planet through the use of multi-scale modeling systems and space-based observations. Global climate models have evolved to take advantage of the influx of multi- and many-core computing technologies and the availability of large clusters of multi-core microprocessors. GEOS-6 is a next-generation cloud system resolving atmospheric model that will place NASA at the forefront of scientific exploration of our atmosphere and climate. Model simulations with GEOS-6 will produce a realistic representation of our atmosphere on the scale of typical satellite observations, bringing a visual comprehension of model results to a new level among the climate enthusiasts. In preparation for GEOS-6, the agency's flagship Earth System Modeling Framework [JDl] has been enhanced to support cutting-edge high-resolution global climate and weather simulations. Improvements include a cubed-sphere grid that exposes parallelism; a non-hydrostatic finite volume dynamical core, and algorithm designed for co-processor technologies, among others. GEOS-6 represents a fundamental advancement in the capability of global Earth system models. The ability to directly compare global simulations at the resolution of spaceborne satellite images will lead to algorithm improvements and better utilization of space-based observations within the GOES data assimilation system
Sauer, Alexander; Li, Mengxia; Holl-Wieden, Annette; Pabst, Thomas; Neubauer, Henning
2017-10-12
Diffusion-weighted MRI has been proposed as a new technique for imaging synovitis without intravenous contrast application. We investigated diagnostic utility of multi-shot readout-segmented diffusion-weighted MRI (multi-shot DWI) for synovial imaging of the knee joint in patients with juvenile idiopathic arthritis (JIA). Thirty-two consecutive patients with confirmed or suspected JIA (21 girls, median age 13 years) underwent routine 1.5 T MRI with contrast-enhanced T1w imaging (contrast-enhanced MRI) and with multi-shot DWI (RESOLVE, b-values 0-50 and 800 s/mm 2 ). Contrast-enhanced MRI, representing the diagnostic standard, and diffusion-weighted images at b = 800 s/mm 2 were separately rated by three independent blinded readers at different levels of expertise for the presence and the degree of synovitis on a modified 5-item Likert scale along with the level of subjective diagnostic confidence. Fourteen (44%) patients had active synovitis and joint effusion, nine (28%) patients showed mild synovial enhancement not qualifying for arthritis and another nine (28%) patients had no synovial signal alterations on contrast-enhanced imaging. Ratings by the 1st reader on contrast-enhanced MRI and on DWI showed substantial agreement (κ = 0.74). Inter-observer-agreement was high for diagnosing, or ruling out, active arthritis of the knee joint on contrast-enhanced MRI and on DWI, showing full agreement between 1st and 2nd reader and disagreement in one case (3%) between 1st and 3rd reader. In contrast, ratings in cases of absent vs. little synovial inflammation were markedly inconsistent on DWI. Diagnostic confidence was lower on DWI, compared to contrast-enhanced imaging. Multi-shot DWI of the knee joint is feasible in routine imaging and reliably diagnoses, or rules out, active arthritis of the knee joint in paediatric patients without the need of gadolinium-based i.v. contrast injection. Possibly due to "T2w shine-through" artifacts, DWI does not reliably differentiate non-inflamed joints from knee joints with mild synovial irritation.
NASA Astrophysics Data System (ADS)
Zhang, Zhihong
2017-02-01
Inflammatory monocytes/macrophages (Mon/Mφ) play an important role in cutaneous allergic inflammation. However, their migration and activation in dermatitis and how they accelerate the inflammatory reaction are largely unknown. Optical molecular imaging is the most promising tool for investigating the function and motility of immune cells in vivo. We have developed a multi-scale optical imaging approach to evaluate the spatio-temporal dynamic behavior and properties of immune cells from the whole field of organs to the cellular level at the inflammatory site in delayed type hypersensitivity reaction. Here, we developed some multi-color labeling mouse models based on the endogenous labeling with fluorescent proteins and the exogenous labeling with fluorescent dyes. We investigated the cell movement, cell interaction and function of immunocytes (e.g. Mon/Mφ, DC, T cells and neutrophils) in the skin allergy inflammation (e.g., contact hypersensitivity) by using intravital microscopy. The long-term imaging data showed that after inflammatory Mon/Mφ transendothelial migration in dermis, they migrating in interstitial space of dermis. Depletion of blood monocyte with clodronate liposome extremely reduced the inflammatory reaction. Our finding provided further insight into inflammatory Mon/Mφ mediating the inflammatory cascade through functional migration in allergic contact dermatitis.
Agile Multi-Scale Decompositions for Automatic Image Registration
NASA Technical Reports Server (NTRS)
Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline
2016-01-01
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin
2015-12-01
The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is not only a requirement for mass but also for calcification CAD systems which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelet and gray level co-occurrence matrix. The second problem addressed in this paper is the parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from database of Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO based model selection for optimizing both classifier hyper-parameters and parameters, respectively. Furthermore, the obtained results indicate the promising performance of the proposed textural features and more specifically, those based on co-occurrence matrix of wavelet image representation technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning(MKL) is used for drug-related webpages classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, text based BOW model is used to generate text representation, and image-based BOW model is used to generate images representation. Last, text and images representation are fused with a few methods. Experimental results demonstrate that the classification accuracy of MKL is higher than those of all other fusion methods in decision level and feature level, and much higher than the accuracy of single-modal classification.
NASA Astrophysics Data System (ADS)
Lee, Junghoon; Carass, Aaron; Jog, Amod; Zhao, Can; Prince, Jerry L.
2017-02-01
Accurate CT synthesis, sometimes called electron density estimation, from MRI is crucial for successful MRI-based radiotherapy planning and dose computation. Existing CT synthesis methods are able to synthesize normal tissues but are unable to accurately synthesize abnormal tissues (i.e., tumor), thus providing a suboptimal solution. We propose a multiatlas- based hybrid synthesis approach that combines multi-atlas registration and patch-based synthesis to accurately synthesize both normal and abnormal tissues. Multi-parametric atlas MR images are registered to the target MR images by multi-channel deformable registration, from which the atlas CT images are deformed and fused by locally-weighted averaging using a structural similarity measure (SSIM). Synthetic MR images are also computed from the registered atlas MRIs by using the same weights used for the CT synthesis; these are compared to the target patient MRIs allowing for the assessment of the CT synthesis fidelity. Poor synthesis regions are automatically detected based on the fidelity measure and refined by a patch-based synthesis. The proposed approach was tested on brain cancer patient data, and showed a noticeable improvement for the tumor region.
Global Contrast Based Salient Region Detection.
Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min
2015-03-01
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
NASA Astrophysics Data System (ADS)
Shi, Wenzhong; Deng, Susu; Xu, Wenbing
2018-02-01
For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
NASA Astrophysics Data System (ADS)
Xiao, Zhili; Tan, Chao; Dong, Feng
2017-08-01
Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188
An intelligent framework for medical image retrieval using MDCT and multi SVM.
Balan, J A Alex Rajju; Rajan, S Edward
2014-01-01
Volumes of medical images are rapidly generated in medical field and to manage them effectively has become a great challenge. This paper studies the development of innovative medical image retrieval based on texture features and accuracy. The objective of the paper is to analyze the image retrieval based on diagnosis of healthcare management systems. This paper traces the development of innovative medical image retrieval to estimate both the image texture features and accuracy. The texture features of medical images are extracted using MDCT and multi SVM. Both the theoretical approach and the simulation results revealed interesting observations and they were corroborated using MDCT coefficients and SVM methodology. All attempts to extract the data about the image in response to the query has been computed successfully and perfect image retrieval performance has been obtained. Experimental results on a database of 100 trademark medical images show that an integrated texture feature representation results in 98% of the images being retrieved using MDCT and multi SVM. Thus we have studied a multiclassification technique based on SVM which is prior suitable for medical images. The results show the retrieval accuracy of 98%, 99% for different sets of medical images with respect to the class of image.
NASA Astrophysics Data System (ADS)
Tonbul, H.; Kavzoglu, T.
2016-12-01
In recent years, object based image analysis (OBIA) has spread out and become a widely accepted technique for the analysis of remotely sensed data. OBIA deals with grouping pixels into homogenous objects based on spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, named as image segmentation, is the most prominent part of object recognition. In this study, multiresolution segmentation, which is a region-based approach, was employed to construct image objects. In the application of multi-resolution, three parameters, namely shape, compactness and scale must be set by the analyst. Segmentation quality remarkably influences the fidelity of the thematic maps and accordingly the classification accuracy. Therefore, it is of great importance to search and set optimal values for the segmentation parameters. In the literature, main focus has been on the definition of scale parameter, assuming that the effect of shape and compactness parameters is limited in terms of achieved classification accuracy. The aim of this study is to deeply analyze the influence of shape/compactness parameters by varying their values while using the optimal scale parameter determined by the use of Estimation of Scale Parameter (ESP-2) approach. A pansharpened Qickbird-2 image covering Trabzon, Turkey was employed to investigate the objectives of the study. For this purpose, six different combinations of shape/compactness were utilized to make deductions on the behavior of shape and compactness parameters and optimal setting for all parameters as a whole. Objects were assigned to classes using nearest neighbor classifier in all segmentation observations and equal number of pixels was randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3. The results of this study indicate that shape/compactness parameters can have significant effect on classification accuracy with 4% change in overall accuracy. Also, statistical significance of differences in accuracy was tested using the McNemar's test and found that the difference between poor and optimal setting of shape/compactness parameters was statistically significant, suggesting a search for optimal parameterization instead of default setting.
Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data
NASA Astrophysics Data System (ADS)
Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.
2014-12-01
At present, there are no globally available Earth observation (EO) derived products on crop maps. This issue is being addressed within the Sentinel-2 for Agriculture initiative where a number of test sites (including from JECAM) participate to provide coherent protocols and best practices for various global agriculture systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images for large territories (more than 10,000 sq. km) is the presence of clouds and shadows that result in having missing values in data sets. In this abstract, a new approach to classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately using non-missing values. Missing values are restored through a special procedure that substitutes input sample's missing components with neuron's weight coefficients. After missing data restoration, a supervised classification is performed for multi-temporal satellite images. For this, an ensemble of neural networks, in particular multilayer perceptrons (MLPs), is proposed. Ensembling of neural networks is done by the technique of average committee, i.e. to calculate the average class probability over classifiers and select the class with the highest average posterior probability for the given input sample. The proposed approach is applied for large scale crop classification using multi temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison to official statistics. 1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013. 2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Scie., vol. 44, no. 5, pp. 67-80, 2012.
Tug-of-war lacunarity—A novel approach for estimating lacunarity
NASA Astrophysics Data System (ADS)
Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut
2016-11-01
Modern instrumentation provides us with massive repositories of digital images that will likely only increase in the future. Therefore, it has become increasingly important to automatize the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to quantify the visual appearance of captured textures with quantitative measures. As such, lacunarity is a useful multi-scale measure of texture's heterogeneity but demands high computational efforts. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real world sample images, and found that the investigated approach is able to estimate lacunarity with low uncertainties. We conclude that the proposed method combines low computational efforts with high accuracy, and that its application may have utility in the analysis of high-resolution images.
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significant reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
Graphene metamaterial spatial light modulator for infrared single pixel imaging.
Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J
2017-10-16
High-resolution and hyperspectral imaging has long been a goal for multi-dimensional data fusion sensing applications - of interest for autonomous vehicles and environmental monitoring. In the long wave infrared regime this quest has been impeded by size, weight, power, and cost issues, especially as focal-plane array detector sizes increase. Here we propose and experimentally demonstrated a new approach based on a metamaterial graphene spatial light modulator (GSLM) for infrared single pixel imaging. A frequency-division multiplexing (FDM) imaging technique is designed and implemented, and relies entirely on the electronic reconfigurability of the GSLM. We compare our approach to the more common raster-scan method and directly show FDM image frame rates can be 64 times faster with no degradation of image quality. Our device and related imaging architecture are not restricted to the infrared regime, and may be scaled to other bands of the electromagnetic spectrum. The study presented here opens a new approach for fast and efficient single pixel imaging utilizing graphene metamaterials with novel acquisition strategies.
Multi-modal molecular diffuse optical tomography system for small animal imaging
Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid
2013-01-01
A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5mm for a range of target locations and depths indicating sensitivity and accurate imaging throughout the phantom volume. Additionally the total reconstructed luminescence source intensity is consistent to within 15% which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977
BigView Image Viewing on Tiled Displays
NASA Technical Reports Server (NTRS)
Sandstrom, Timothy
2007-01-01
BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore on relatively modest machines images such as the Mars Orbiter Camera mosaic [92,160 33,280 pixels]. The images must be first converted into paged format, where the image is stored in 256 256 pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid : a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 x 256 page.
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to associate the target object in a continuous video frame. In recent years, the method based on the kernel correlation filter has become the research hotspot. However, the algorithm still has some problems such as video capture equipment fast jitter, tracking scale transformation. In order to improve the ability of scale transformation and feature description, this paper has carried an innovative algorithm based on the multi feature fusion and multi-scale transform. The experimental results show that our method solves the problem that the target model update when is blocked or its scale transforms. The accuracy of the evaluation (OPE) is 77.0%, 75.4% and the success rate is 69.7%, 66.4% on the VOT and OTB datasets. Compared with the optimal one of the existing target-based tracking algorithms, the accuracy of the algorithm is improved by 6.7% and 6.3% respectively. The success rates are improved by 13.7% and 14.2% respectively.
Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R
2015-11-21
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice[Formula: see text] and Dice[Formula: see text]). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
NASA Astrophysics Data System (ADS)
Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; Alzheimers Disease Neuroimaging Initiative,the
2015-11-01
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice{{}\\text{ADNI}} =0.929+/- 0.003 and Dice{{}\\text{OASIS}} =0.869+/- 0.002 ). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
A digital 3D atlas of the marmoset brain based on multi-modal MRI.
Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C
2018-04-01
The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine
2014-10-01
The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution satellite imagery, relying on some typical variables such as crown diameter, tree height, trunk diameter, tree density and tree spacing. The emphasis is placed on the automatization of the process of identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick's texture features. The main drawback of this well-known texture representation is the underlying parameters which are extremely difficult to set due to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process is proposed which is based on statistical modeling, exploring a wide range of parameter values. It provides texture measures of diverse spatial parameters hence implicitly inducing a multi-scale texture analysis. A new feature selection technique, we called Random PRiF, is proposed. It relies on random sampling in feature space, carefully addresses the multicollinearity issue in multiple-linear regression while ensuring accurate prediction of forest variables. Our automated forest variable estimation scheme was tested on Quickbird and Pléiades panchromatic and multispectral images, acquired at different periods on the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques. It has been successfully applied to identify the best texture features in modeling the five considered forest structure variables. The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features, with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. Thus an average prediction error of ˜ 1.1 m is expected on crown diameter, ˜ 0.9 m on tree spacing, ˜ 3 m on height and ˜ 0.06 m on diameter at breast height.
Medipix-based Spectral Micro-CT.
Yu, Hengyong; Xu, Qiong; He, Peng; Bennett, James; Amir, Raja; Dobbs, Bruce; Mou, Xuanqin; Wei, Biao; Butler, Anthony; Butler, Phillip; Wang, Ge
2012-12-01
Since Hounsfield's Nobel Prize winning breakthrough decades ago, X-ray CT has been widely applied in the clinical and preclinical applications - producing a huge number of tomographic gray-scale images. However, these images are often insufficient to distinguish crucial differences needed for diagnosis. They have poor soft tissue contrast due to inherent photon-count issues, involving high radiation dose. By physics, the X-ray spectrum is polychromatic, and it is now feasible to obtain multi-energy, spectral, or true-color, CT images. Such spectral images promise powerful new diagnostic information. The emerging Medipix technology promises energy-sensitive, high-resolution, accurate and rapid X-ray detection. In this paper, we will review the recent progress of Medipix-based spectral micro-CT with the emphasis on the results obtained by our team. It includes the state- of-the-art Medipix detector, the system and method of a commercial MARS (Medipix All Resolution System) spectral micro-CT, and the design and color diffusion of a hybrid spectral micro-CT.
Sequential Superresolution Imaging of Multiple Targets Using a Single Fluorophore
Lidke, Diane S.; Lidke, Keith A.
2015-01-01
Fluorescence superresolution (SR) microscopy, or fluorescence nanoscopy, provides nanometer scale detail of cellular structures and allows for imaging of biological processes at the molecular level. Specific SR imaging methods, such as localization-based imaging, rely on stochastic transitions between on (fluorescent) and off (dark) states of fluorophores. Imaging multiple cellular structures using multi-color imaging is complicated and limited by the differing properties of various organic dyes including their fluorescent state duty cycle, photons per switching event, number of fluorescent cycles before irreversible photobleaching, and overall sensitivity to buffer conditions. In addition, multiple color imaging requires consideration of multiple optical paths or chromatic aberration that can lead to differential aberrations that are important at the nanometer scale. Here, we report a method for sequential labeling and imaging that allows for SR imaging of multiple targets using a single fluorophore with negligible cross-talk between images. Using brightfield image correlation to register and overlay multiple image acquisitions with ~10 nm overlay precision in the x-y imaging plane, we have exploited the optimal properties of AlexaFluor647 for dSTORM to image four distinct cellular proteins. We also visualize the changes in co-localization of the epidermal growth factor (EGF) receptor and clathrin upon EGF addition that are consistent with clathrin-mediated endocytosis. These results are the first to demonstrate sequential SR (s-SR) imaging using direct stochastic reconstruction microscopy (dSTORM), and this method for sequential imaging can be applied to any superresolution technique. PMID:25860558
NASA Astrophysics Data System (ADS)
Meneveau, C. V.; Bai, K.; Katz, J.
2011-12-01
The vegetation canopy has a significant impact on various physical and biological processes such as forest microclimate, rainfall evaporation distribution and climate change. Most scaled laboratory experimental studies have used canopy element models that consist of rigid vertical strips or cylindrical rods that can be typically represented through only one or a few characteristic length scales, for example the diameter and height for cylindrical rods. However, most natural canopies and vegetation are highly multi-scale with branches and sub-branches, covering a wide range of length scales. Fractals provide a convenient idealization of multi-scale objects, since their multi-scale properties can be described in simple ways (Mandelbrot 1982). While fractal aspects of turbulence have been studied in several works in the past decades, research on turbulence generated by fractal objects started more recently. We present an experimental study of boundary layer flow over fractal tree-like objects. Detailed Particle-Image-Velocimetry (PIV) measurements are carried out in the near-wake of a fractal-like tree. The tree is a pre-fractal with five generations, with three branches and a scale reduction factor 1/2 at each generation. Its similarity fractal dimension (Mandelbrot 1982) is D ~ 1.58. Detailed mean velocity and turbulence stress profiles are documented, as well as their downstream development. We then turn attention to the turbulence mixing properties of the flow, specifically to the question whether a mixing length-scale can be identified in this flow, and if so, how it relates to the geometric length-scales in the pre-fractal object. Scatter plots of mean velocity gradient (shear) and Reynolds shear stress exhibit good linear relation at all locations in the flow. Therefore, in the transverse direction of the wake evolution, the Boussinesq eddy viscosity concept is appropriate to describe the mixing. We find that the measured mixing length increases with increasing streamwise locations. Conversely, the measured eddy viscosity and mixing length decrease with increasing elevation, which differs from eddy viscosity and mixing length behaviors of traditional boundary layers or canopies studied before. In order to find an appropriate length for the flow, several models based on the notion of superposition of scales are proposed and examined. One approach is based on spectral distributions. Another more practical approach is based on length-scale distributions evaluated using fractal geometry tools. These proposed models agree well with the measured mixing length. The results indicate that information about multi-scale clustering of branches as it occurs in fractals has to be incorporated into models of the mixing length for flows through canopies with multiple scales. The research is supported by National Science Foundation grant ATM-0621396 and AGS-1047550.
A conceptual design for an exoplanet imager
NASA Astrophysics Data System (ADS)
Hyland, David C.; Winkeller, Jon; Mosher, Robert; Momin, Anif; Iglesias, Gerardo; Donnellan, Quentin; Stanley, Jerry; Myers, Storm; Whittington, William G.; Asazuma, Taro; Slagle, Kami; Newton, Lindsay; Bourgeois, Scott; Tejeda, Donny; Young, Brian; Shaver, Nick; Cooper, Jacob; Underwood, Dennis; Perkins, James; Morea, Nathan; Goodnight, Ryan; Colunga, Aaron; Peltier, Scott; Singleton, Zane; Brashear, John; McPherson, Ronald; Guillory, Winston; Patel, Sunil; Stovall, Rachel; Meyer, Ryall; Eberle, Patrick; Morrison, Cole; Mong, Chun Yu
2007-09-01
This paper reports the results of a design study for an exoplanet imaging system. The design team consisted of the students in the "Electromagnetic Sensing for Space-Bourne Imaging" class taught by the principal author in the Spring, 2005 semester. The design challenge was to devise a space system capable of forming 10X10 pixel images of terrestrial-class planets out to 10 parsecs, observing in the 9.0 to 17.0 microns range. It was presumed that this system would operate after the Terrestrial Planet Finder had been deployed and had identified a number of planetary systems for more detailed imaging. The design team evaluated a large number of tradeoffs, starting with the use of a single monolithic telescope, versus a truss-mounted sparse aperture, versus a formation of free-flying telescopes. Having selected the free-flyer option, the team studied a variety of sensing technologies, including amplitude interferometry, intensity correlation imaging (ICI, based on the Brown-Twiss effect and phase retrieval), heterodyne interferometry and direct electric field reconstruction. Intensity correlation imaging was found to have several advantages. It does not require combiner spacecraft, nor nanometer-level control of the relative positions, nor diffraction-limited optics. Orbit design, telescope design, spacecraft structural design, thermal management and communications architecture trades were also addressed. A six spacecraft design involving non-repeating baselines was selected. By varying the overall scale of the baselines it was found possible to unambiguously characterize an entire multi-planet system, to image the parent star and, for the largest base scales, to determine 10X10 pixel images of individual planets.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
The adaptive optics can only partially compensate the image blurred by atmospheric turbulence due to the observing condition and hardware restriction. A post-processing method based on frame selection and multi-frames blind deconvolution to improve images partially corrected by adaptive optics is proposed. The appropriate frames which are suitable for blind deconvolution from the recorded AO close-loop frames series are selected by the frame selection technique and then do the multi-frame blind deconvolution. There is no priori knowledge except for the positive constraint in blind deconvolution. It is benefit for the use of multi-frame images to improve the stability and convergence of the blind deconvolution algorithm. The method had been applied in the image restoration of celestial bodies which were observed by 1.2m telescope equipped with 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve the images partially corrected by adaptive optics.
Automatic deformable diffusion tensor registration for fiber population analysis.
Irfanoglu, M O; Machiraju, R; Sammet, S; Pierpaoli, C; Knopp, M V
2008-01-01
In this work, we propose a novel method for deformable tensor-to-tensor registration of Diffusion Tensor Images. Our registration method models the distances in between the tensors with Geode-sic-Loxodromes and employs a version of Multi-Dimensional Scaling (MDS) algorithm to unfold the manifold described with this metric. Defining the same shape properties as tensors, the vector images obtained through MDS are fed into a multi-step vector-image registration scheme and the resulting deformation fields are used to reorient the tensor fields. Results on brain DTI indicate that the proposed method is very suitable for deformable fiber-to-fiber correspondence and DTI-atlas construction.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2011-12-01
This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, J; Ruan, S; Li, H
Purpose: Multi-tracer PET imaging is getting more attention in radiotherapy by providing additional tumor volume information such as glucose and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to joint segment the sub-area of tumors from the two tracers FDG and FMISO PET images. Methods: Non-standardized Gamma distributions are convenient to model intensity distributions in PET. As a serious correlation exists in multi-tracer PET images, we proposed a new fusion method based on copula which is capable to represent dependency between different tracers. The Hidden Markov Field (HMF) model ismore » used to represent spatial relationship between PET image voxels and statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate quantitatively and qualitatively our method. A comparison between individual and multi-tracer segmentations was conducted to show advantages of the proposed fusion method. Results: The segmentation results show that fusion with Gaussian copula can receive high Dice coefficient of 0.84 compared to that of 0.54 and 0.3 of monomodal segmentation results based on individual segmentation of FDG and FMISO PET images. In addition, high correlation coefficients (0.75 to 0.91) for the Gaussian copula for all five testing patients indicates the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that using multi-tracer PET imaging can efficiently improve the segmentation of tumor region where hypoxia and glucidic consumption are present at the same time. Introduction of copulas for modeling the dependency between two tracers can simultaneously take into account information from both tracers and deal with two pathological phenomena. Future work will be to consider other families of copula such as spherical and archimedian copulas, and to eliminate partial volume effect by considering dependency between neighboring voxels.« less
Huo, Mengmeng; Li, Wenyan; Chaudhuri, Arka Sen; Fan, Yuchao; Han, Xiu; Yang, Chen; Wu, Zhenghong; Qi, Xiaole
2017-09-01
In this study, we developed bio-stimuli-responsive multi-scale hyaluronic acid (HA) nanoparticles encapsulated with polyamidoamine (PAMAM) dendrimers as the subunits. These HA/PAMAM nanoparticles of large scale (197.10±3.00nm) were stable during systematic circulation then enriched at the tumor sites; however, they were prone to be degraded by the high expressed hyaluronidase (HAase) to release inner PAMAM dendrimers and regained a small scale (5.77±0.25nm) with positive charge. After employing tumor spheroids penetration assay on A549 3D tumor spheroids for 8h, the fluorescein isothiocyanate (FITC) labeled multi-scale HA/PAMAM-FITC nanoparticles could penetrate deeply into these tumor spheroids with the degradation of HAase. Moreover, small animal imaging technology in male nude mice bearing H22 tumor showed HA/PAMAM-FITC nanoparticles possess higher prolonged systematic circulation compared with both PAMAM-FITC nanoparticles and free FITC. In addition, after intravenous administration in mice bearing H22 tumors, methotrexate (MTX) loaded multi-scale HA/PAMAM-MTX nanoparticles exhibited a 2.68-fold greater antitumor activity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Atmospheric Science Data Center
2013-04-19
article title: Closed Large Cell Clouds in the South Pacific ... the Multi-angle Imaging SpectroRadiometer (MISR) provide an example of very large scale closed cells, and can be contrasted with the ... MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA. Image ...
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei
2014-12-01
The rapid development of remote sensing technology has facilitated us the acquisition of remote sensing images with higher and higher spatial resolution, but how to automatically understand the image contents is still a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discovery distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental process in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image followed by Radon transform on a weighted edge map and computing the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, for 84 challenging maritime videos, containing over 33,000 frames, and captured using visible range and near-infrared range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos of visible, infrared cameras for onshore and onboard ship camera placement.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-01-01
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503
Nanomedicine: Tiny Particles and Machines Give Huge Gains
Tong, Sheng; Fine, Eli J.; Lin, Yanni; Cradick, Thomas J.; Bao, Gang
2014-01-01
Nanomedicine is an emerging field that integrates nanotechnology, biomolecular engineering, life sciences and medicine; it is expected to produce major breakthroughs in medical diagnostics and therapeutics. Nano-scale structures and devices are compatible in size with proteins and nucleic acids in living cells. Therefore, the design, characterization and application of nano-scale probes, carriers and machines may provide unprecedented opportunities for achieving a better control of biological processes, and drastic improvements in disease detection, therapy, and prevention. Recent advances in nanomedicine include the development of nanoparticle-based probes for molecular imaging, nano-carriers for drug/gene delivery, multi-functional nanoparticles for theranostics, and molecular machines for biological and medical studies. This article provides an overview of the nanomedicine field, with an emphasis on nanoparticles for imaging and therapy, as well as engineered nucleases for genome editing. The challenges in translating nanomedicine approaches to clinical applications are discussed. PMID:24297494
Detecting Water Bodies in LANDSAT8 Oli Image Using Deep Learning
NASA Astrophysics Data System (ADS)
Jiang, W.; He, G.; Long, T.; Ni, Y.
2018-04-01
Water body identifying is critical to climate change, water resources, ecosystem service and hydrological cycle. Multi-layer perceptron(MLP) is the popular and classic method under deep learning framework to detect target and classify image. Therefore, this study adopts this method to identify the water body of Landsat8. To compare the performance of classification, the maximum likelihood and water index are employed for each study area. The classification results are evaluated from accuracy indices and local comparison. Evaluation result shows that multi-layer perceptron(MLP) can achieve better performance than the other two methods. Moreover, the thin water also can be clearly identified by the multi-layer perceptron. The proposed method has the application potential in mapping global scale surface water with multi-source medium-high resolution satellite data.
Sperry, Megan M; Kartha, Sonia; Granquist, Eric J; Winkelstein, Beth A
2018-07-01
Inter-subject networks are used to model correlations between brain regions and are particularly useful for metabolic imaging techniques, like 18F-2-deoxy-2-(18F)fluoro-D-glucose (FDG) positron emission tomography (PET). Since FDG PET typically produces a single image, correlations cannot be calculated over time. Little focus has been placed on the basic properties of inter-subject networks and if they are affected by group size and image normalization. FDG PET images were acquired from rats (n = 18), normalized by whole brain, visual cortex, or cerebellar FDG uptake, and used to construct correlation matrices. Group size effects on network stability were investigated by systematically adding rats and evaluating local network connectivity (node strength and clustering coefficient). Modularity and community structure were also evaluated in the differently normalized networks to assess meso-scale network relationships. Local network properties are stable regardless of normalization region for groups of at least 10. Whole brain-normalized networks are more modular than visual cortex- or cerebellum-normalized network (p < 0.00001); however, community structure is similar at network resolutions where modularity differs most between brain and randomized networks. Hierarchical analysis reveals consistent modules at different scales and clustering of spatially-proximate brain regions. Findings suggest inter-subject FDG PET networks are stable for reasonable group sizes and exhibit multi-scale modularity.
Convection and mass loss through the chromosphere of Betelgeuse
NASA Astrophysics Data System (ADS)
Ridgway, Stephen
2011-10-01
Betelgeuse is well suited for detailed study of the mass loss process in a massive red supergiant. We have engaged in a multi-scale, multi-color study to trace the ejected material from the photosphere to the interstellar medium, and understand its chemical evolution {formation of molecules and dust}. Infrared interferometry already gave us a detailed image of the photosphere, compatible with large convective cells. Adaptive optics spectro-imaging {1.0-2.2 microns} allowed us to detect the presence of the CN molecule and mass loss plume structures up to at least 6 R*. At larger distances, we observed silicate-rich dust in thermal IR {8-20 microns}. From the surface to 100 R*, we therefore have a continuous coverage with multicolor imagery. The chromosphere lies at a key location, between the photosphere and the molecular envelope. As shown by STIS spatially resolved spectroscopy {Lobel & Dupree 2001}, it contains rising and falling gases. Such structure is supported by our 3D modeling of the convection. In order to probe the dynamics of the envelope and its relation to photospheric spots and mass loss plumes, we propose to obtain UV imaging with STIS at 3 epochs to complement our coordinated ground-based effort as well as the earlier HST UV snapshots. We will use this imagery to correlate structures at different radii and temperatures, and to explore the time-scales of evolution. With the support of our 3D models, this information will answer specific questions including deciding between convective and polar explanations for bright spots and plumes. Our infrared imaging observations will be repeated contemporaneously with the requested HST/STIS images.
Automated detection of exudates for diabetic retinopathy screening
NASA Astrophysics Data System (ADS)
Fleming, Alan D.; Philip, Sam; Goatman, Keith A.; Williams, Graeme J.; Olson, John A.; Sharp, Peter F.
2007-12-01
Automated image analysis is being widely sought to reduce the workload required for grading images resulting from diabetic retinopathy screening programmes. The recognition of exudates in retinal images is an important goal for automated analysis since these are one of the indicators that the disease has progressed to a stage requiring referral to an ophthalmologist. Candidate exudates were detected using a multi-scale morphological process. Based on local properties, the likelihoods of a candidate being a member of classes exudate, drusen or background were determined. This leads to a likelihood of the image containing exudates which can be thresholded to create a binary decision. Compared to a clinical reference standard, images containing exudates were detected with sensitivity 95.0% and specificity 84.6% in a test set of 13 219 images of which 300 contained exudates. Depending on requirements, this method could form part of an automated system to detect images showing either any diabetic retinopathy or referable diabetic retinopathy.
Alternative transitions between existing representations in multi-scale maps
NASA Astrophysics Data System (ADS)
Dumont, Marion; Touya, Guillaume; Duchêne, Cécile
2018-05-01
Map users may have issues to achieve multi-scale navigation tasks, as cartographic objects may have various representations across scales. We assume that adding intermediate representations could be one way to reduce the differences between existing representations, and to ease the transitions across scales. We consider an existing multiscale map on the scale range from 1 : 25k to 1 : 100k scales. Based on hypotheses about intermediate representations design, we build custom multi-scale maps with alternative transitions. We will conduct in a next future a user evaluation to compare the efficiency of these alternative maps for multi-scale navigation. This paper discusses the hypotheses and production process of these alternative maps.
NASA Astrophysics Data System (ADS)
Uchiyama, Kazuharu; Nishikawa, Naoki; Nakagomi, Ryo; Kobayashi, Kiyoshi; Hori, Hirokazu
2018-02-01
To design optoelectronic functionalities in nanometer scale based on interactions of electronic system with optical near-fields, it is essential to evaluate the relationship between optical near-fields and their sources. Several theoretical studies have been performed, so far, to analyze such complex relationship to design the interaction fields of several specific scales. In this study, we have performed detailed and high-precision measurements of optical near-field structures woven by a large number of independent polarizations generated in the gold nanorods array under laser light irradiation at the resonant frequency. We have accumulated the multi-layered data of optical near-field imaging at different heights above the planar surface with the resolution of several nm by a STM-assisted scanning near-field optical microscope. Based on these data, we have performed an inverse calculation to estimate the position, direction, and strength of the local polarization buried under the flat surface of the sample. As a result of the inverse operation, we have confirmed that the complexities in the nanometer scale optical near-fields could be reconstructed by combinations of induced polarization in each gold nanorod. We have demonstrated the hierarchical properties of optical near-fields based on spatial frequency expansion and superposition of dipole fields to provide insightful information for applications such for secure multi-layered information storage.
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V
2015-08-24
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an imaged-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.
A multimodal image sensor system for identifying water stress in grapevines
NASA Astrophysics Data System (ADS)
Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong
2012-11-01
Water stress is one of the most common limitations of fruit growth. Water is the most limiting resource for crop growth. In grapevines, as well as in other fruit crops, fruit quality benefits from a certain level of water deficit which facilitates to balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. A multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels in this paper. The multi-modal sensor system was equipped with one 3CCD camera (three channels in R, G, and IR). The multi-modal sensor can capture and analyze grape canopy from its reflectance features, and identify the different water stress levels. This research aims at solving the aforementioned problems. The core technology of this multi-modal sensor system could further be used as a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by multi-modal sensor which could output images in spectral bands of near-infrared, green and red channel. Based on the analysis of the acquired images, color features based on color space and reflectance features based on image process method were calculated. The results showed that these parameters had the potential as water stress indicators. More experiments and analysis are needed to validate the conclusion.
NASA Astrophysics Data System (ADS)
Gao, Zhiyun; Holtze, Colin; Sonka, Milan; Hoffman, Eric; Saha, Punam K.
2010-03-01
Distinguishing pulmonary arterial and venous (A/V) trees via in vivo imaging is a critical first step in the quantification of vascular geometry for purposes of determining, for instance, pulmonary hypertension, detection of pulmonary emboli and more. A multi-scale topo-morphologic opening algorithm has recently been introduced by us separating A/V trees in pulmonary multiple-detector X-ray computed tomography (MDCT) images without contrast. The method starts with two sets of seeds - one for each of A/V trees and combines fuzzy distance transform, fuzzy connectivity, and morphologic reconstruction leading to multi-scale opening of two mutually fused structures while preserving their continuity. The method locally determines the optimum morphological scale separating the two structures. Here, a validation study is reported examining accuracy of the method using mathematically generated phantoms with different levels of fuzziness, overlap, scale, resolution, noise, and geometric coupling and MDCT images of pulmonary vessel casting of pigs. After exsanguinating the animal, a vessel cast was generated using rapid-hardening methyl methacrylate compound with additional contrast by 10cc of Ethiodol in the arterial side which was scanned in a MDCT scanner at 0.5mm slice thickness and 0.47mm in plane resolution. True segmentations of A/V trees were computed from these images by thresholding. Subsequently, effects of distinguishing A/V contrasts were eliminated and resulting images were used for A/V separation by our method. Experimental results show that 92% - 98% accuracy is achieved using only one seed for each object in phantoms while 94.4% accuracy is achieved in MDCT cast images using ten seeds for each of A/V trees.
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on transport of intensity equation(TIE). We then give the experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
a Spiral-Based Downscaling Method for Generating 30 M Time Series Image Data
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.
2017-09-01
The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications in high spatial resolution scale. However, the fragmentized patches and seasonal variable of some land cover types (e. g. small crop field, wetland) make it labor-intensive and difficult in the generation of land cover data. Utilizing the high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data like Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and frequency of current land cover mapping / updating at the same time. The generation of high resolution time series may be a compromise to cover the shortage in land cover updating process. One of popular way is to downscale multi-temporal MODIS data with other high spatial resolution auxiliary data like Landsat. But the usual manner of downscaling pixel based on a window may lead to the underdetermined problem in heterogeneous area, result in the uncertainty of some high spatial resolution pixels. Therefore, the downscaled multi-temporal data can hardly reach high spatial resolution as Landsat data. A spiral based method was introduced to downscale low spatial and high temporal resolution image data to high spatial and high temporal resolution image data. By the way of searching the similar pixels around the adjacent region based on the spiral, the pixel set was made up in the adjacent region pixel by pixel. The underdetermined problem is prevented to a large extent from solving the linear system when adopting the pixel set constructed. With the help of ordinary least squares, the method inverted the endmember values of linear system. The high spatial resolution image was reconstructed on the basis of high spatial resolution class map and the endmember values band by band. Then, the high spatial resolution time series was formed with these high spatial resolution images image by image. Simulated experiment and remote sensing image downscaling experiment were conducted. In simulated experiment, the 30 meters class map dataset Globeland30 was adopted to investigate the effect on avoid the underdetermined problem in downscaling procedure and a comparison between spiral and window was conducted. Further, the MODIS NDVI and Landsat image data was adopted to generate the 30m time series NDVI in remote sensing image downscaling experiment. Simulated experiment results showed that the proposed method had a robust performance in downscaling pixel in heterogeneous region and indicated that it was superior to the traditional window-based methods. The high resolution time series generated may be a benefit to the mapping and updating of land cover data.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
NASA Astrophysics Data System (ADS)
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
Imaging spectrometer can gain two-dimensional space image and one-dimensional spectrum at the same time, which shows high utility in color and spectral measurements, the true color image synthesis, military reconnaissance and so on. In order to realize the fast reconstructed processing of the Fourier transform imaging spectrometer data, the paper designed the optimization reconstructed algorithm with OpenMP parallel calculating technology, which was further used for the optimization process for the HyperSpectral Imager of `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing technology can control the multi-core CPU hardware resources competently and significantly enhance the calculation of the spectrum reconstruction processing efficiency. If the technology is applied to more cores workstation in parallel computing, it will be possible to complete Fourier transform imaging spectrometer real-time data processing with a single computer.
NASA Technical Reports Server (NTRS)
Kumar, Uttam; Nemani, Ramakrishna R.; Ganguly, Sangram; Kalia, Subodh; Michaelis, Andrew
2017-01-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes namely, forest, farmland, water and urban areas (with NPP-VIIRS-national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91 percent was achieved, which is a 6 percent improvement in unmixing based classification relative to per-pixel-based classification. As such, abundance maps continue to offer an useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2017-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes namely, forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing based classification relative to per-pixel based classification. As such, abundance maps continue to offer an useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.
2016-12-01
In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer nature of data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes namely, forest, farmland, water and urban areas (with NPP-VIIRS - national polar orbiting partnership visible infrared imaging radiometer suite nighttime lights data) over California, USA using Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing based classification relative to per-pixel based classification. As such, abundance maps continue to offer an useful alternative to high-spatial resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis and for societal and policy-relevant applications needed at the watershed scale.
Multi-Algorithm Particle Simulations with Spatiocyte.
Arjunan, Satya N V; Takahashi, Koichi
2017-01-01
As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org .
Laboratory MCAO Test-Bed for Developing Wavefront Sensing Concepts.
Goncharov, A V; Dainty, J C; Esposito, S; Puglisi, A
2005-07-11
An experimental optical bench test-bed for developing new wavefront sensing concepts for Multi-Conjugate Adaptive Optics (MCAO) systems is described. The main objective is to resolve imaging problems associated with wavefront sensing of the atmospheric turbulence for future MCAO systems on Extremely Large Telescopes (ELTs). The test-bed incorporates five reference sources, two deformable mirrors (DMs) and atmospheric phase screens to simulate a scaled version of a 10-m adaptive telescope operating at the K band. A recently proposed compact tomographic wavefront sensor is employed for star-oriented DMs control in the MCAO system. The MCAO test-bed is used to verify the feasibility of the wavefront sensing concept utilizing a field lenslet array for multi-pupil imaging on a single detector. First experimental results of MCAO correction with the proposed tomographic wavefront sensor are presented and compared to the theoretical prediction based on the characteristics of the phase screens, actuator density of the DMs and the guide star configuration.
NASA Astrophysics Data System (ADS)
Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2015-01-01
The pelvic magnetic Resonance images (MRI) are used in Prostate cancer radiotherapy (RT), a process which is part of the radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important work ow burden in most radiological services. Automatic or semi-automatic methods might improve the efficiency by decreasing the measure times while conserving the required accuracy. This work presents a fully automatic atlas- based segmentation strategy that selects the more similar templates for a new MRI using a robust multi-scale SURF analysis. Then a new segmentation is achieved by a linear combination of the selected templates, which are previously non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average DICE Coefficient of 79%, when comparing with the expert manual segmentation, under a leave-one-out scheme with the training database.
[The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].
Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang
2009-08-01
Computer simulation is based on computer graphics to generate the realistic 3D structure scene of vegetation, and to simulate the canopy regime using radiosity method. In the present paper, the authors expand the computer simulation model to simulate forest canopy bidirectional reflectance at pixel scale. But usually, the trees are complex structures, which are tall and have many branches. So there is almost a need for hundreds of thousands or even millions of facets to built up the realistic structure scene for the forest It is difficult for the radiosity method to compute so many facets. In order to make the radiosity method to simulate the forest scene at pixel scale, in the authors' research, the authors proposed one idea to simplify the structure of forest crowns, and abstract the crowns to ellipsoids. And based on the optical characteristics of the tree component and the characteristics of the internal energy transmission of photon in real crown, the authors valued the optical characteristics of ellipsoid surface facets. In the computer simulation of the forest, with the idea of geometrical optics model, the gap model is considered to get the forest canopy bidirectional reflectance at pixel scale. Comparing the computer simulation results with the GOMS model, and Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results are in agreement with the GOMS simulation result and MISR BRF. But there are also some problems to be solved. So the authors can conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming as the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) Image spatial registration onto a common pixel grid, 2) Image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame rate, high-resolution imagery, and then finally used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using that from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex too motion observed by multiple sensors mounted to the same base. Multiple sensors were included, including a high-speed visible camera, up to a coarser resolution LWIR camera.
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from the terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification method. In the geometry-based method, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how different scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on the classification accuracy and efficiency, we extracted the single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and conducted the classification on leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10 % for both trees. The average speed-up ratio of single scale classifiers over multi-scale classifier for each tree is higher than 30.
Document image binarization using "multi-scale" predefined filters
NASA Astrophysics Data System (ADS)
Saabni, Raid M.
2018-04-01
Reading text or searching for key words within a historical document is a very challenging task. one of the first steps of the complete task is binarization, where we separate foreground such as text, figures and drawings from the background. Successful results of this important step in many cases can determine next steps to success or failure, therefore it is very vital to the success of the complete task of reading and analyzing the content of a document image. Generally, historical documents images are of poor quality due to their storage condition and degradation over time, which mostly cause to varying contrasts, stains, dirt and seeping ink from reverse side. In this paper, we use banks of anisotropic predefined filters in different scales and orientations to develop a binarization method for degraded documents and manuscripts. Using the fact, that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks having various scales, weights, and orientations to seek a compact set of filters and weights in order to generate diffrent layers of foregrounds and background. Results of convolving these fiters on the gray level image locally, weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray level image and a learning process, we present an improved binarization algorithm to separate the background from layers of foreground. Different layers of foreground which may be caused by seeping ink, degradation or other factors are also separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011 , DIBCO2013 and H-DIBCO2016 data sets and a collection of images taken from real historical documents.
High-resolution Observations of Hα Spectra with a Subtractive Double Pass
NASA Astrophysics Data System (ADS)
Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.
2018-02-01
High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with a diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a similar performance as FPI-based systems with a high spatial and moderate spectral resolution across a FOV of 100^'' ×100^' ' with a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine-structure in the line-core intensity of Hα at spatial scales of about 0.5'' exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence that is comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.
Saliency detection using mutual consistency-guided spatial cues combination
NASA Astrophysics Data System (ADS)
Wang, Xin; Ning, Chen; Xu, Lizhong
2015-09-01
Saliency detection has received extensive interests due to its remarkable contribution to wide computer vision and pattern recognition applications. However, most existing computational models are designed for detecting saliency in visible images or videos. When applied to infrared images, they may suffer from limitations in saliency detection accuracy and robustness. In this paper, we propose a novel algorithm to detect visual saliency in infrared images by mutual consistency-guided spatial cues combination. First, based on the luminance contrast and contour characteristics of infrared images, two effective saliency maps, i.e., the luminance contrast saliency map and contour saliency map are constructed, respectively. Afterwards, an adaptive combination scheme guided by mutual consistency is exploited to integrate these two maps to generate the spatial saliency map. This idea is motivated by the observation that different maps are actually related to each other and the fusion scheme should present a logically consistent view of these maps. Finally, an enhancement technique is adopted to incorporate spatial saliency maps at various scales into a unified multi-scale framework to improve the reliability of the final saliency map. Comprehensive evaluations on real-life infrared images and comparisons with many state-of-the-art saliency models demonstrate the effectiveness and superiority of the proposed method for saliency detection in infrared images.
NASA Astrophysics Data System (ADS)
Luther, Ed; Mendes, Livia; Pan, Jiayi; Costa, Daniel; Sarisozen, Can; Torchilin, Vladimir
2018-02-01
We rely on in vitro cellular cultures to evaluate the effects of the components of multifunctional nano-based formulations under development. We employ an incubator-adapted, label-free holographic imaging cytometer HoloMonitor M4® (Phase Holographic Imaging, Lund, Sweden) to obtain multi-day time-lapse sequences at 5- minute intervals. An automated stage allows hand-free acquisition of multiple fields of view. Our system is based on the Mach-Zehnder interferometry principle to create interference patterns which are deconvolved to produce images of the optical thickness of the field of view. These images are automatically segmented resulting in a full complement of quantitative morphological features, such as optical volume, thickness, and area amongst many others. Precise XY cell locations and the time of acquisition are also recorded. Visualization is best achieved by novel 4-Dimensional plots, where XY position is plotted overtime time (Z-directions) and cell-thickness is coded as color or gray scale brightness. Fundamental events of interest, i.e., cells undergoing mitosis or mitotic dysfunction, cell death, cell-to-cell interactions, motility are discernable. We use both 2D and 3D models of the tumor microenvironment. We report our new analysis method to track feature changes over time based on a 4-sample version of the Kolmogorov-Smirnov test. Feature A is compared to Control A, and Feature B is compared to Control B to give a 2D probability plot of the feature changes over time. As a result, we efficiently obtain vectors quantifying feature changes over time in various sample conditions, i.e., changing compound concentrations or multi-compound combinations.
Multi-spectral endogenous fluorescence imaging for bacterial differentiation
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Babayants, Margarita V.; Korotkov, Oleg V.; Kudrin, Konstantin G.; Rimskaya, Elena N.; Shikunova, Irina A.; Kurlov, Vladimir N.; Cherkasova, Olga P.; Komandin, Gennady A.; Reshetov, Igor V.; Zaytsev, Kirill I.
2017-07-01
In this paper, the multi-spectral endogenous fluorescence imaging was implemented for bacterial differentiation. The fluorescence imaging was performed using a digital camera equipped with a set of visual bandpass filters. Narrowband 365 nm ultraviolet radiation passed through a beam homogenizer was used to excite the sample fluorescence. In order to increase a signal-to-noise ratio and suppress a non-fluorescence background in images, the intensity of the UV excitation was modulated using a mechanical chopper. The principal components were introduced for differentiating the samples of bacteria based on the multi-spectral endogenous fluorescence images.
Robust multi-atlas label propagation by deep sparse representation
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong
2016-01-01
Recently, multi-atlas patch-based label fusion has achieved many successes in medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns presented in the target image patch. Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods. PMID:27942077
Robust multi-atlas label propagation by deep sparse representation.
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong
2017-03-01
Recently, multi-atlas patch-based label fusion has achieved many successes in medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer ( label-specific dictionaries ) consists of groups of representative atlas patches and the subsequent layers ( residual dictionaries ) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns presented in the target image patch. Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods.
Multi-Scale Scattering Transform in Music Similarity Measuring
NASA Astrophysics Data System (ADS)
Wang, Ruobai
Scattering transform is a Mel-frequency spectrum based, time-deformation stable method, which can be used in evaluating music similarity. Compared with Dynamic time warping, it has better performance in detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means to combine scattering transforms of different window lengths. This paper argues that, multi-scale scattering transform is a good alternative of dynamic time warping in music similarity measuring. We tested the performance of multi-scale scattering transform against other popular methods, with data designed to represent different conditions.
Strategies for Multi-Modal Analysis
NASA Astrophysics Data System (ADS)
Hexemer, Alexander; Wang, Cheng; Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sethian, James; Camera Team
This section on soft materials will be dedicated to discuss the extraction of the chemical distribution and spatial arrangement of constituent elements and functional groups at multiple length scales and, thus, the examination of collective dynamics, transport, and electronic ordering phenomena. Traditional measures of structure in soft materials have relied heavily on scattering and imaging based techniques due to their capacity to measure nanoscale dimensions and their capacity to monitor structure under conditions of dynamic stress loading. Special attentions are planned to focus on the application of resonant x-ray scattering, contrast-varied neutron scattering, analytical transmission electron microscopy, and their combinations. This session aims to bring experts in both scattering and electron microscope fields to discuss recent advances in selectively characterizing structural architectures of complex soft materials, which have often multi-components with a wide range of length scales and multiple functionalities, and thus hopes to foster novel ideas to decipher a higher level of structural complexity in soft materials in future. CAMERA, Early Career Award.
Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.
2015-12-01
Bush encroachment describes a global problem which is especially facing the savanna ecosystem in Africa. Livestock is directly affected by decreasing grasslands and inedible invasive species which defines the process of bush encroachment. For many small scale farmers in developing countries livestock represents a type of insurance in times of crop failure or drought. Among that bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment so far focus on small scales using high-resolution data and rarely provide information beyond the national level. Therefore a process chain was developed using a multi-scale approach to detect bush encroachment for whole Africa. The bush encroachment map is calibrated with ground truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location specific information on different levels of remote sensing imagery - 30m with Landsat images and 250m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent and thereby provides an innovative approach to map bush encroachment on the regional scale. A classification approach links location data based on GPS information from experts to the respective pixel in the remote sensing imagery. Supervised classification is used while actual bush encroachment information represents the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working on multiple scales and with the help of field data an innovative approach can be presented showing areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and identify those regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
Network Access to Visual Information: A Study of Costs and Uses.
ERIC Educational Resources Information Center
Besser, Howard
This paper summarizes a subset of the findings of a study of digital image distribution that focused on the Museum Educational Site Licensing (MESL) project--the first large-scale multi-institutional project to explore digital delivery of art images and accompanying text/metadata from disparate sources. This Mellon Foundation-sponsored study…
A feasibility study on the measurement of tree trunks in forests using multi-scale vertical images
NASA Astrophysics Data System (ADS)
Berveglieri, A.; Oliveira, R. O.; Tommaselli, A. M. G.
2014-06-01
The determination of the Diameter at Breast Height (DBH) is an important variable that contributes to several studies on forest, e.g., environmental monitoring, tree growth, volume of wood, and biomass estimation. This paper presents a preliminary technique for the measurement of tree trunks using terrestrial images collected with a panoramic camera in nadir view. A multi-scale model is generated with these images. Homologue points on the trunk surface are measured over the images and their ground coordinates are determined by intersection of rays. The resulting XY coordinates of each trunk, defining an arc shape, can be used as observations in a circle fitting by least squares. Then, the DBH of each trunk is calculated using an estimated radius. Experiments were performed in two urban forest areas to assess the approach. In comparison with direct measurements on the trunks taken with a measuring tape, the discrepancies presented a Root Mean Square Error (RMSE) of 1.8 cm with a standard deviation of 0.7 cm. These results demonstrate compatibility with manual measurements and confirm the feasibility of the proposed technique.
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
Using endmembers in AVIRIS images to estimate changes in vegetative biomass
NASA Technical Reports Server (NTRS)
Smith, Milton O.; Adams, John B.; Ustin, Susan L.; Roberts, Dar A.
1992-01-01
Field techniques for estimating vegetative biomass are labor intensive, and rarely are used to monitor changes in biomass over time. Remote-sensing offers an attractive alternative to field measurements; however, because there is no simple correspondence between encoded radiance in multispectral images and biomass, it is not possible to measure vegetative biomass directly from AVIRIS images. Ways to estimate vegetative biomass by identifying community types and then applying biomass scalars derived from field measurements are investigated. Field measurements of community-scale vegetative biomass can be made, at least for local areas, but it is not always possible to identify vegetation communities unambiguously using remote measurements and conventional image-processing techniques. Furthermore, even when communities are well characterized in a single image, it typically is difficult to assess the extent and nature of changes in a time series of images, owing to uncertainties introduced by variations in illumination geometry, atmospheric attenuation, and instrumental responses. Our objective is to develop an improved method based on spectral mixture analysis to characterize and identify vegetative communities, that can be applied to multi-temporal AVIRIS and other types of images. In previous studies, multi-temporal data sets (AVIRIS and TM) of Owens Valley, CA were analyzed and vegetation communities were defined in terms of fractions of reference (laboratory and field) endmember spectra. An advantage of converting an image to fractions of reference endmembers is that, although fractions in a given pixel may vary from image to image in a time series, the endmembers themselves typically are constant, thus providing a consistent frame of reference.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using 2D pseudo Wigner-Ville distribution (2D-PWVD) and structural similarity(SSIM) index is proposed for fusion of low resolution multi-spectral (MS) image and high resolution panchromatic (PAN) image. First, the intensity component of multi-spectral image is extracted with generalized IHS transform. Then, the spectrum diagrams of the intensity components of multi-spectral image and panchromatic image are obtained with 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams. SSIM index is used to evaluate the high frequency information of the spectrum diagrams for assigning the weights in the fusion processing adaptively. After the new spectrum diagram is achieved according to the fusion rule, the final fusion image can be obtained by inverse 2D-PWVD and inverse GIHS transform. Experimental results show that, the proposed method can obtain high quality fusion images.
NASA Astrophysics Data System (ADS)
Wu, Yu-Xia; Zhang, Xi; Xu, Xiao-Pan; Liu, Yang; Zhang, Guo-Peng; Li, Bao-Juan; Chen, Hui-Jun; Lu, Hong-Bing
2017-02-01
Ischemic stroke has great correlation with carotid atherosclerosis and is mostly caused by vulnerable plaques. It's particularly important to analysis the components of plaques for the detection of vulnerable plaques. Recently plaque analysis based on multi-contrast magnetic resonance imaging has attracted great attention. Though multi-contrast MR imaging has potentials in enhanced demonstration of carotid wall, its performance is hampered by the misalignment of different imaging sequences. In this study, a coarse-to-fine registration strategy based on cross-sectional images and wall boundaries is proposed to solve the problem. It includes two steps: a rigid step using the iterative closest points to register the centerlines of carotid artery extracted from multi-contrast MR images, and a non-rigid step using the thin plate spline to register the lumen boundaries of carotid artery. In the rigid step, the centerline was extracted by tracking the crosssectional images along the vessel direction calculated by Hessian matrix. In the non-rigid step, a shape context descriptor is introduced to find corresponding points of two similar boundaries. In addition, the deterministic annealing technique is used to find a globally optimized solution. The proposed strategy was evaluated by newly developed three-dimensional, fast and high resolution multi-contrast black blood MR imaging. Quantitative validation indicated that after registration, the overlap of two boundaries from different sequences is 95%, and their mean surface distance is 0.12 mm. In conclusion, the proposed algorithm has improved the accuracy of registration effectively for further component analysis of carotid plaques.
Compressed single pixel imaging in the spatial frequency domain
Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.
2017-01-01
Abstract. We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm×35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (ltr∼1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura
2016-09-01
The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. Direct parallel-hierarchical transformations based on cluster CPU-and GPU-oriented hardware platform. Mathematic models of training of the parallel hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most topical for problems on organizing high-performance computations of super large arrays of information designed to implement multi-stage sensing and processing as well as compaction and recognition of data in the informational structures and computer devices. This method has such advantages as high performance through the use of recent advances in parallelization, possibility to work with images of ultra dimension, ease of scaling in case of changing the number of nodes in the cluster, auto scan of local network to detect compute nodes.
Mixedness determination of rare earth-doped ceramics
NASA Astrophysics Data System (ADS)
Czerepinski, Jennifer H.
The lack of chemical uniformity in a powder mixture, such as clustering of a minor component, can lead to deterioration of materials properties. A method to determine powder mixture quality is to correlate the chemical homogeneity of a multi-component mixture with its particle size distribution and mixing method. This is applicable to rare earth-doped ceramics, which require at least 1-2 nm dopant ion spacing to optimize optical properties. Mixedness simulations were conducted for random heterogeneous mixtures of Nd-doped LaF3 mixtures using the Concentric Shell Model of Mixedness (CSMM). Results indicate that when the host to dopant particle size ratio is 100, multi-scale concentration variance is optimized. In order to verify results from the model, experimental methods that probe a mixture at the micro, meso, and macro scales are needed. To directly compare CSMM results experimentally, an image processing method was developed to calculate variance profiles from electron images. An in-lens (IL) secondary electron image is subtracted from the corresponding Everhart-Thornley (ET) secondary electron image in a Field-Emission Scanning Electron Microscope (FESEM) to produce two phases and pores that can be quantified with 50 nm spatial resolution. A macro was developed to quickly analyze multi-scale compositional variance from these images. Results for a 50:50 mixture of NdF3 and LaF3 agree with the computational model. The method has proven to be applicable only for mixtures with major components and specific particle morphologies, but the macro is useful for any type of imaging that produces excellent phase contrast, such as confocal microscopy. Fluorescence spectroscopy was used as an indirect method to confirm computational results for Nd-doped LaF3 mixtures. Fluorescence lifetime can be used as a quantitative method to indirectly measure chemical homogeneity when the limits of electron microscopy have been reached. Fluorescence lifetime represents the compositional fluctuations of a dopant on the nanoscale while accounting for billions of particles in a fast, non-destructive manner. The significance of this study will show how small-scale fluctuations in homogeneity limit the optimization of optical properties, which can be improved by the proper selection of particle size and mixing method.
Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian
2017-01-01
The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodules positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
NASA Astrophysics Data System (ADS)
Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan
2018-07-01
Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convoluted networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net for improving the precision. We propose an approach to reduce the test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
USDA-ARS?s Scientific Manuscript database
Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
Cloud-based image sharing network for collaborative imaging diagnosis and consultation
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo
2018-03-01
In this presentation, we presented a new approach to design cloud-based image sharing network for collaborative imaging diagnosis and consultation through Internet, which can enable radiologists, specialists and physicians locating in different sites collaboratively and interactively to do imaging diagnosis or consultation for difficult or emergency cases. The designed network combined a regional RIS, grid-based image distribution management, an integrated video conferencing system and multi-platform interactive image display devices together with secured messaging and data communication. There are three kinds of components in the network: edge server, grid-based imaging documents registry and repository, and multi-platform display devices. This network has been deployed in a public cloud platform of Alibaba through Internet since March 2017 and used for small lung nodule or early staging lung cancer diagnosis services between Radiology departments of Huadong hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
Bandelet transform could acquire geometric regular direction and geometric flow, sparse representation could represent signals with as little as possible atoms on over-complete dictionary, both of which could be used to image fusion. Therefore, a new fusion method is proposed based on Bandelet and Sparse Representation, to fuse Bandelet coefficients of multi-source images and obtain high quality fusion effects. The test are performed on remote sensing images and simulated multi-focus images, experimental results show that the performance of new method is better than tested methods according to objective evaluation indexes and subjective visual effects.
NASA Astrophysics Data System (ADS)
Gao, Y.; Lin, Q.; Bijeljic, B.; Blunt, M. J.
2017-12-01
To observe intermittency in consolidated rock, we image a steady state flow of brine and decane in Bentheimer sandstone. We devise an experimental method based on X-ray differential imaging method to examine how changes in flow rate impact the pore-scale distribution of fluids during co-injection flow under dynamic flow conditions at steady state. This helps us elucidate the diverse flow regimes (connected, intermittent break-up, or continual break-up of the non-wetting phase pathways) for two capillary numbers. Also, relative permeability curves under both capillary and viscous limited conditions could be measured. We have performed imbibition sample floods using oil-brine and measured steady state relative permeability on a sandstone rock core in order to fully characterize the flow behaviour at low and high Ca. Two sets of experiments at high and low flow rates are provided to explore the time-evolution of the non-wetting phase clusters distribution under different flow conditions. The high flow rate is 0.5 mL/min, whose corresponding capillary number is 7.7×10-6. The low flow rate is 0.02 mL/min, whose capillary number is 3.1×10-7. A procedure based on using high-salinity brine as the contrast phase and applying differential imaging between the dry scan and that of the sample saturation with a 30 wt% Potassium iodide (KI) doped brine help to make sure there is no non-wetting phase in micro-pores. Then the intermittent phase in multiphase flow image at high Ca can be quantified by obtaining the differential image between the 30 wt% KI brine image and the scans that taken at each fixed fractional flow. By using the grey scale histogram distribution of the raw images at each condition, the oil proportion in the intermittent phase can be calculated. The pressure drops at each fractional flow at low and high Ca can be measured by high-precision pressure differential sensors and utilized to calculate to the relative permeability at pore scale. The relative permeability data and fw-Sw relationship obtained by our experiment at pore scale are compared with the data collected from experiments which were conducted at core scale, and they match well.
Early Results from the Odyssey THEMIS Investigation
NASA Technical Reports Server (NTRS)
Christensen, Philip R.; Bandfield, Joshua L.; Bell, James F., III; Hamilton, Victoria E.; Ivanov, Anton; Jakosky, Bruce M.; Kieffer, Hugh H.; Lane, Melissa D.; Malin, Michael C.; McConnochie, Timothy
2003-01-01
The Thermal Emission Imaging System (THEMIS) began studying the surface and atmosphere of Mars in February, 2002 using thermal infrared (IR) multi-spectral imaging between 6.5 and 15 m, and visible/near-IR images from 450 to 850 nm. The infrared observations continue a long series of spacecraft observations of Mars, including the Mariner 6/7 Infrared Spectrometer, the Mariner 9 Infrared Interferometer Spectrometer (IRIS), the Viking Infrared Thermal Mapper (IRTM) investigations, the Phobos Termoscan, and the Mars Global Surveyor Thermal Emission Spectrometer (MGS TES). The THEMIS investigation's specific objectives are to: (1) determine the mineralogy of localized deposits associated with hydrothermal or sub-aqueous environments, and to identify future landing sites likely to represent these environments; (2) search for thermal anomalies associated with active sub-surface hydrothermal systems; (3) study small-scale geologic processes and landing site characteristics using morphologic and thermophysical properties; (4) investigate polar cap processes at all seasons; and (5) provide a high spatial resolution link to the global hyperspectral mineral mapping from the TES investigation. THEMIS provides substantially higher spatial resolution IR multi-spectral images to complement TES hyperspectral (143-band) global mapping, and regional visible imaging at scales intermediate between the Viking and MGS cameras.
Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.
Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho
2016-04-18
We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in uniaxial crystal. When linearly polarized images from projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization states of image. Therefore, the optical paths of the image could be changed, and the viewing zone is shifted in a lateral direction. The polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
NASA Astrophysics Data System (ADS)
Weigand, Maximilian; Kemna, Andreas
2017-02-01
A better understanding of root-soil interactions and associated processes is essential in achieving progress in crop breeding and management, prompting the need for high-resolution and non-destructive characterization methods. To date, such methods are still lacking or restricted by technical constraints, in particular the charactization and monitoring of root growth and function in the field. A promising technique in this respect is electrical impedance tomography (EIT), which utilizes low-frequency (< 1 kHz)- electrical conduction- and polarization properties in an imaging framework. It is well established that cells and cell clusters exhibit an electrical polarization response in alternating electric-current fields due to electrical double layers which form at cell membranes. This double layer is directly related to the electrical surface properties of the membrane, which in turn are influenced by nutrient dynamics (fluxes and concentrations on both sides of the membranes). Therefore, it can be assumed that the electrical polarization properties of roots are inherently related to ion uptake and translocation processes in the root systems. We hereby propose broadband (mHz to hundreds of Hz) multi-frequency EIT as a non-invasive methodological approach for the monitoring and physiological, i.e., functional, characterization of crop root systems. The approach combines the spatial-resolution capability of an imaging method with the diagnostic potential of electrical-impedance spectroscopy. The capability of multi-frequency EIT to characterize and monitor crop root systems was investigated in a rhizotron laboratory experiment, in which the root system of oilseed plants was monitored in a water-filled rhizotron, that is, in a nutrient-deprived environment. We found a low-frequency polarization response of the root system, which enabled the successful delineation of its spatial extension. The magnitude of the overall polarization response decreased along with the physiological decay of the root system due to the stress situation. Spectral polarization parameters, as derived from a pixel-based Debye decomposition analysis of the multi-frequency imaging results, reveal systematic changes in the spatial and spectral electrical response of the root system. In particular, quantified mean relaxation times (of the order of 10 ms) indicate changes in the length scales on which the polarization processes took place in the root system, as a response to the prolonged induced stress situation. Our results demonstrate that broadband EIT is a capable, non-invasive method to image root system extension as well as to monitor changes associated with the root physiological processes. Given its applicability on both laboratory and field scales, our results suggest an enormous potential of the method for the structural and functional imaging of root systems for various applications. This particularly holds for the field scale, where corresponding methods are highly desired but to date are lacking.
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
Multi-focus image fusion using a guided-filter-based difference image.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu
2016-03-20
The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed and an efficient salient feature extraction method is presented in this paper. Furthermore, feature extraction is primarily the main objective of the present work. Based on salient feature extraction, the guided filter is first used to acquire the smoothing image containing the most sharpness regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities and the energy of the image gradient together. Then, the initial fusion map is further processed by a morphological filter to obtain a good reprocessed fusion map. Lastly, the final fusion map is determined via the reprocessed fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method does markedly improve the fusion performance compared to previous fusion methods and can be competitive with or even outperform state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
NASA Astrophysics Data System (ADS)
Verma, Sneha K.; Liu, Brent J.; Gridley, Daila S.; Mao, Xiao W.; Kotha, Nikhil
2015-03-01
In previous years we demonstrated an imaging informatics system designed to support multi-institutional research focused on the utilization of proton radiation for treating spinal cord injury (SCI)-related pain. This year we will demonstrate an update on the system with new modules added to perform image processing on evaluation data using immunhistochemistry methods to observe effects of proton therapy. The overarching goal of the research is to determine the effectiveness of using the proton beam for treating SCI-related neuropathic pain as an alternative to invasive surgical lesioning. The research is a joint collaboration between three major institutes, University of Southern California (data collection/integration and image analysis), Spinal Cord Institute VA Healthcare System, Long Beach (patient subject recruitment), and Loma Linda University and Medical Center (human and preclinical animal studies). The system that we are presenting is one of its kind which is capable of integrating a large range of data types, including text data, imaging data, DICOM objects from proton therapy treatment and pathological data. For multi-institutional studies, keeping data secure and integrated is very crucial. Different kinds of data within the study workflow are generated at different stages and different groups of people who process and analyze them in order to see hidden patterns within healthcare data from a broader perspective. The uniqueness of our system relies on the fact that it is platform independent and web-based which makes it very useful in such a large-scale study.
Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B
2016-11-15
A small scale sample nuclear waste package, consisting of a 28mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500keV), with a source size of <0.5mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thalium doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30cm(2) scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks and more information exists within these networks that is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.
2015-01-01
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an imaged-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169
NASA Astrophysics Data System (ADS)
Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua
2014-11-01
Multi-projector three dimension display is a promising multi-view glass-free three dimension (3D) display technology, can produce full colour high definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images of projector array while avoiding pseudoscopic problem. This paper analysis the displaying characteristics of multi-projector 3D display first and then propose a projector content synthetic method using tetrahedral transform. A 3D video format that based on stereo image pair and associated disparity map is presented, it is well suit for any type of multi-projector 3D display and has advantage in saving storage usage. Experiment results show that our method solved the pseudoscopic problem.
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To solve the problem that scene details and visual effects are difficult to be optimized in high dynamic image synthesis, we proposes a multi exposure image fusion algorithm for processing low dynamic range images in YCbCr space, and weighted blending of luminance and chromatic aberration components respectively. The experimental results show that the method can retain color effect of the fused image while balancing details of the bright and dark regions of the high dynamic image.
D' Archangel, Jeffrey; Tucker, Eric; Kinzel, Ed; Muller, Eric A; Bechtel, Hans A; Martin, Michael C; Raschke, Markus B; Boreman, Glenn
2013-07-15
Optical metamaterials have unique properties which result from geometric confinement of the optical conductivity. We developed a series of infrared metasurfaces based on an array of metallic square loop antennas. The far-field absorption spectrum can be designed with resonances across the infrared by scaling the geometric dimensions. We measure the amplitude and phase of the resonant mode as standing wave patterns within the square loops using scattering-scanning near-field optical microscopy (s-SNOM). Further, using a broad-band synchrotron-based FTIR microscope and s-SNOM at the Advanced Light Source, we are able to correlate far-field spectra to near-field modes of the metasurface as the resonance is tuned between samples. The results highlight the importance of multi-modal imaging for the design and characterization of optical metamaterials.
Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI
Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.
2016-01-01
We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524