Science.gov

Sample records for sparse representation combined

  1. Grassmannian sparse representations

    NASA Astrophysics Data System (ADS)

    Azary, Sherif; Savakis, Andreas

    2015-05-01

    We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a smooth space in which each point represents a subspace and the relationship between points is defined by a mapping of orthogonal matrices. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to achieve high-accuracy recognition while overcoming the performance drawbacks and the dependence on high-dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.

  2. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic that is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and
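
    As an illustration of the multiple-representations idea, the sketch below (not the authors' implementation) assumes per-class dictionaries with unit-norm columns have already been trained; it codes a patch several times per dictionary by varying the OMP sparsity level, a simple stand-in for the paper's scheme for generating independent estimates, and assigns the class with the smallest aggregated residual energy. The function name classify_msrc and all parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def classify_msrc(patch, dictionaries, sparsity_levels=(2, 4, 8)):
    """Assign `patch` to the class whose dictionary yields the smallest
    aggregated residual energy over several sparse estimates.

    dictionaries : list of (n_features, n_atoms) arrays, one per class,
                   with (approximately) unit-norm columns.
    """
    scores = []
    for D in dictionaries:
        residuals = []
        for k in sparsity_levels:           # multiple estimates per dictionary
            x = orthogonal_mp(D, patch, n_nonzero_coefs=k)
            residuals.append(np.sum((patch - D @ x) ** 2))
        scores.append(np.mean(residuals))   # enhanced statistic: mean residual
    return int(np.argmin(scores))

# Toy usage: two random class dictionaries and a patch drawn from class 0.
rng = np.random.default_rng(0)
D0 = rng.standard_normal((64, 128)); D0 /= np.linalg.norm(D0, axis=0)
D1 = rng.standard_normal((64, 128)); D1 /= np.linalg.norm(D1, axis=0)
patch = D0[:, :3] @ rng.standard_normal(3)
print(classify_msrc(patch, [D0, D1]))       # expected: 0
```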

  3. Structural damage identification via a combination of blind feature extraction and sparse representation classification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2014-03-01

    This paper addresses two problems in structural damage identification: locating damage and assessing damage severity, which are incorporated into a classification framework based on the theory of sparse representation (SR) and compressed sensing (CS). The sparsity inherent in the classification problem itself is exploited to establish a sparse representation framework for damage identification. Specifically, the proposed method consists of two steps: feature extraction and classification. In the feature extraction step, the modal features of both the test structure and the reference structure model are first blindly extracted by the unsupervised complexity pursuit (CP) algorithm. Then in the classification step, expressing the test modal feature as a linear combination of the bases of the over-complete reference feature dictionary—constructed by concatenating all modal features of all candidate damage classes—builds a highly underdetermined linear system of equations with an underlying sparse representation, which can be correctly recovered by ℓ1-minimization; the non-zero entries in the recovered sparse representation directly indicate the damage class to which the test structure (feature) belongs. The two-step CP-SR damage identification method alleviates the training process required by traditional pattern recognition based methods. In addition, the reference feature dictionary can be of small size by formulating the issues of locating damage and assessing damage extent as a two-stage procedure and by taking advantage of the robustness of the SR framework. Numerical simulations and an experimental study are conducted to verify the developed CP-SR method. The problems of identifying multiple damage, using limited sensors and partial features, and the performance under heavy noise and random excitation are investigated, and promising results are obtained.
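
    The classification step described above (ℓ1 recovery over a concatenated reference-feature dictionary, followed by class assignment from the recovered coefficients) can be sketched generically as follows. This is not the paper's CP-SR code: Lasso serves as an ℓ1-regularized surrogate for ℓ1-minimization, class assignment uses the common per-class residual rule, and src_classify and the toy data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y, D, labels, alpha=1e-3):
    """Sparse-representation classification: code y over the concatenated
    dictionary D (columns = reference features of all candidate classes),
    then pick the class whose coefficients best reconstruct y."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)                      # l1-regularised surrogate for l1-minimisation
    x = lasso.coef_
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))], x

# Toy usage: 3 classes, 5 reference features each, 20-dimensional features.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 15)); D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1, 2], 5)
y = D[:, labels == 1] @ np.array([0.9, 0.0, 0.0, 0.4, 0.0])   # a class-1 feature
print(src_classify(y, D, labels)[0])                          # expected: 1
```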

  4. [Identification of transmission fluid based on NIR spectroscopy by combining sparse representation method with manifold learning].

    PubMed

    Jiang, Lu-Lu; Luo, Mei-Fu; Zhang, Yu; Yu, Xin-Jie; Kong, Wen-Wen; Liu, Fei

    2014-01-01

    An identification method based on sparse representation (SR) combined with autoencoder network (AN) manifold learning was proposed for discriminating the varieties of transmission fluid by using near infrared (NIR) spectroscopy technology. NIR transmittance spectra from 600 to 1800 nm were collected from 300 transmission fluid samples of five varieties (each variety consists of 60 samples). For each variety, 30 samples were randomly selected as the training set (150 samples in total), and the remaining 30 as the testing set (150 samples in total). Autoencoder network manifold learning was applied to obtain the characteristic information in the 600-1800 nm spectra, and the number of characteristics was reduced to 10. Principal component analysis (PCA) was applied to extract several relevant variables to represent the useful information of the spectral variables. All of the training samples made up the data dictionary of the sparse representation (SR). The transmission fluid variety identification problem was then reduced to the problem of how to represent the testing samples in terms of the data dictionary (the training sample data). The identification result could thus be achieved by solving an ℓ1-norm-based optimization problem. We compared the effectiveness of the proposed method with that of linear discriminant analysis (LDA), least squares support vector machine (LS-SVM) and sparse representation (SR) using the relevant variables selected by principal component analysis (PCA) and AN. Experimental results demonstrated that the overall identification accuracy of the proposed method for the five transmission fluid varieties was 97.33% by AN-SR, which was significantly higher than that of LDA or LS-SVM. Therefore, the proposed approach provides a new and effective method for identifying transmission fluid varieties.

  5. Fingerprint Compression Based on Sparse Representation.

    PubMed

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a newly given fingerprint image, its patches are represented according to the dictionary by computing an ℓ0-minimization, and the representation is then quantized and encoded. In this paper, we consider the effect of various factors on compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
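
    A minimal sketch of the encode/decode step described here, under stated assumptions: the dictionary below is random and merely stands in for one constructed from fingerprint patches, OMP is used as an ℓ0-style sparse coder, the coefficients are uniformly quantized, and entropy coding is omitted. All names are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def encode_patches(patches, D, n_nonzero=8, step=0.05):
    """Sparse-code each patch (l0-style via OMP), then uniformly quantise the
    coefficients. Returns quantised codes; entropy coding is omitted here."""
    codes = orthogonal_mp(D, patches.T, n_nonzero_coefs=n_nonzero)   # (n_atoms, n_patches)
    return np.round(codes / step).astype(np.int16), step

def decode_patches(q_codes, step, D):
    """Rebuild patches from the quantised sparse codes."""
    return (D @ (q_codes.astype(float) * step)).T

# Toy usage with a random dictionary standing in for a learned fingerprint dictionary.
rng = np.random.default_rng(2)
D = rng.standard_normal((144, 256)); D /= np.linalg.norm(D, axis=0)   # 12x12 patches
patches = rng.standard_normal((10, 144))
q, step = encode_patches(patches, D)
rec = decode_patches(q, step, D)
print(q.shape, float(np.mean((patches - rec) ** 2)))                  # codes and reconstruction MSE
```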

  6. Improved protein-protein interactions prediction via weighted sparse representation model combining continuous wavelet descriptor and PseAA composition.

    PubMed

    Huang, Yu-An; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying

    2016-12-23

    Protein-protein interactions (PPIs) are essential to most biological processes. Since bioscience has entered the era of the genome and the proteome, there is a growing demand for knowledge about PPI networks. High-throughput biological technologies can be used to identify new PPIs, but they are expensive, time-consuming, and tedious. Therefore, computational methods for predicting PPIs have an important role. In the past years, an increasing number of computational methods, such as protein structure-based approaches, have been proposed for predicting PPIs. The principal limitation of these methods is that they require prior information about the proteins to infer PPIs. Therefore, it is of much significance to develop computational methods which use only the information of the protein amino acid sequence. Here, we report a highly efficient approach for predicting PPIs. The main improvements come from the use of a novel protein sequence representation that combines a continuous wavelet descriptor and Chou's pseudo amino acid composition (PseAAC), and from adopting a weighted sparse representation based classifier (WSRC). This method, cross-validated on the PPI datasets of Saccharomyces cerevisiae, Human and H. pylori, achieves excellent results, with accuracies as high as 92.50%, 95.54% and 84.28%, respectively, significantly better than previously proposed methods. Extensive experiments are performed to compare the proposed method with the state-of-the-art Support Vector Machine (SVM) classifier. The outstanding results yielded by our model indicate that the proposed feature extraction method, combining two kinds of descriptors, has strong expressive ability and is expected to provide comprehensive and effective information for machine learning-based classification models. In addition, the prediction performance in the comparison experiments shows that the combined feature and WSRC cooperate well. Thus, the proposed method is a very efficient method to predict PPIs and may be a useful
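
    The weighted sparse representation classifier can be sketched as below. This is a simplified version, not the authors' code: training samples far from the test sample receive a larger ℓ1 penalty, implemented here by rescaling dictionary columns so that a standard Lasso solver can be used; the exponential weighting is one common choice rather than the paper's exact scheme, and the feature extraction stage (continuous wavelet descriptor plus PseAAC) is not shown. All names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def wsrc_classify(y, X_train, labels, sigma=1.0, alpha=1e-3):
    """Weighted SRC sketch: solving min ||y - D W^{-1} z|| + alpha*||z||_1 with
    z = W x is equivalent to a weighted l1 penalty on x, so far-away training
    samples are penalised more heavily."""
    d = np.linalg.norm(X_train - y, axis=1)
    w = np.exp(d / (2 * sigma ** 2))             # penalty weight grows with distance
    D = (X_train / w[:, None]).T                 # dictionary columns rescaled by 1/w
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)
    x = lasso.coef_ / w                          # recover x = W^{-1} z
    classes = np.unique(labels)
    res = [np.linalg.norm(y - X_train[labels == c].T @ x[labels == c])
           for c in classes]
    return classes[int(np.argmin(res))]

# Toy usage: two interaction classes with 30 training feature vectors each.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (30, 40)), rng.normal(2.0, 1.0, (30, 40))])
labels = np.repeat([0, 1], 30)
print(wsrc_classify(X[40] + 0.1 * rng.standard_normal(40), X, labels))   # expected: 1
```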

  7. SAR Image Despeckling Via Structural Sparse Representation

    NASA Astrophysics Data System (ADS)

    Lu, Ting; Li, Shutao; Fang, Leyuan; Benediktsson, Jón Atli

    2016-12-01

    A novel synthetic aperture radar (SAR) image despeckling method based on structural sparse representation is introduced. The proposed method utilizes the fact that different regions in SAR images correspond to varying terrain reflectivity. Therefore, SAR images can be split into a heterogeneous class (with varied terrain reflectivity) and a homogeneous class (with constant terrain reflectivity). In the proposed method, different sparse representation based despeckling schemes are designed by combining the different region characteristics in SAR images. For heterogeneous regions with rich structure and texture information, structural dictionaries are learned to appropriately represent the varied structural characteristics. Specifically, each patch in these regions is sparsely coded with the best fitted structural dictionary, thus good structure preservation can be obtained. For homogeneous regions without rich structure and texture information, the highly redundant photometric self-similarity is exploited to suppress speckle noise without introducing artifacts. This is achieved by first learning a sub-dictionary and then simultaneously sparse coding each group of photometrically similar image patches. Visual and objective experimental results demonstrate the superiority of the proposed method over state-of-the-art methods.

  8. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a two-dimensional sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided. The proposed sparse representation motivates the use of convex optimization that recovers the image with far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.
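
    The reconstruction idea (recovering a sparse scene from far fewer samples than Nyquist by convex optimization) can be illustrated with a one-dimensional toy problem. The real system is two-dimensional and uses the actual radar measurement operator; below, a random Gaussian matrix stands in for that operator and Lasso stands in for the convex ℓ1 program, so this is only a sketch under those assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sparse scene: N range cells with a handful of point scatterers.
rng = np.random.default_rng(4)
N, K, M = 256, 5, 64                          # cells, scatterers, measurements (M << N)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.uniform(1.0, 2.0, K)

# Random measurement operator standing in for the sub-sampled radar operator.
A = rng.standard_normal((M, N)) / np.sqrt(M)
b = A @ x_true

# l1-regularised least squares as the convex recovery program.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
lasso.fit(A, b)
x_hat = lasso.coef_
print("relative error:", float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```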

  9. Discriminative Sparse Representations in Hyperspectral Imagery

    DTIC Science & Technology

    2010-03-01

    Excerpt: "...classification, and unsupervised labeling (clustering) [2, 3, 4, 5, 6, 7, 8]. Recently, a non-parametric (Bayesian) approach to sparse modeling and com..." By Alexey Castrodad, Zhengming Xing, John Greer, Edward Bosch, and Lawrence Carin (report period 00-00-2010 to 00-00-2010).

  10. Saliency Detection Using Sparse and Nonlinear Feature Representation

    PubMed Central

    Zhao, Qingjie; Manzoor, Muhammad Farhan; Ishaq Khan, Saqib

    2014-01-01

    An important aspect of visual saliency detection is how the features that form an input image are represented. A popular theory supports sparse feature representation, an image being represented with a basis dictionary having sparse weighting coefficients. Another method uses a nonlinear combination of image features for representation. In our work, we combine the two methods and propose a scheme that takes advantage of both sparse and nonlinear feature representation. To this end, we use independent component analysis (ICA) and covariance matrices, respectively. To compute saliency, we use a biologically plausible center surround difference (CSD) mechanism. Our sparse features are adaptive in nature; the ICA basis functions are learned for every image representation, rather than being fixed. We show that adaptive sparse features, when used with a CSD mechanism, yield better results compared to fixed sparse representations. We also show that covariance matrices consisting of nonlinear integration of color information alone are sufficient to efficiently estimate saliency from an image. The proposed dual representation scheme is then evaluated against human eye fixation prediction, response to psychological patterns, and salient object detection on well-known datasets. We conclude that having two forms of representation complements one another and results in better saliency detection. PMID:24895644

  11. SAR Image despeckling via sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang

    2014-11-01

    SAR image despeckling is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, a new approach is proposed for removing multiplicative noise from SAR images based on nonlocal sparse representation by dictionary learning and collaborative filtering. First, an image is divided into many patches, and clusters are formed by grouping log-similar image patches using fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. The experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.
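
    A toy version of this pipeline is sketched below under several stand-ins: the log transform converts multiplicative speckle to additive noise, k-means replaces fuzzy C-means for patch clustering, and scikit-learn's MiniBatchDictionaryLearning replaces K-SVD. The function despeckle and all parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def despeckle(sar_image, n_clusters=4, n_atoms=64, patch=8):
    """Toy despeckling: log-transform, cluster mean-removed patches, learn one
    dictionary per cluster, sparse-code, rebuild, and return to intensity domain."""
    log_img = np.log(sar_image + 1e-6)
    patches = extract_patches_2d(log_img, (patch, patch))
    flat = patches.reshape(len(patches), -1)
    means = flat.mean(axis=1, keepdims=True)
    flat0 = flat - means
    labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(flat0)
    denoised = np.empty_like(flat0)
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=4,
                                         random_state=0)
        codes = dl.fit(flat0[idx]).transform(flat0[idx])
        denoised[idx] = codes @ dl.components_
    rec = reconstruct_from_patches_2d((denoised + means).reshape(patches.shape),
                                      log_img.shape)
    return np.exp(rec)

# Toy usage: constant scene corrupted by simulated speckle; print mean absolute error.
rng = np.random.default_rng(5)
clean = np.full((64, 64), 100.0)
speckled = clean * rng.gamma(shape=4, scale=0.25, size=clean.shape)
print(float(np.abs(despeckle(speckled) - clean).mean()))
```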

  12. SAR target recognition based on improved joint sparse representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Li, Lan; Li, Hongsheng; Wang, Feng

    2014-12-01

    In this paper, a SAR target recognition method is proposed based on the improved joint sparse representation (IJSR) model. The IJSR model can effectively combine multiple-view SAR images from the same physical target to improve the recognition performance. The classification process contains two stages. Convex relaxation is used to obtain support sample candidates with ℓ1-norm minimization in the first stage. The low-rank matrix recovery strategy is introduced to explore the final support samples and their corresponding sparse representation coefficient matrix in the second stage. Finally, SAR target classification is performed with the minimal reconstruction residual strategy. The experimental results on the MSTAR database show that the proposed method outperforms state-of-the-art methods, such as the joint sparse representation classification (JSRC) method and the sparse representation classification (SRC) method.

  13. Bayesian learning of sparse multiscale image representations.

    PubMed

    Hughes, James Michael; Rockmore, Daniel N; Wang, Yang

    2013-12-01

    Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising and compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem and the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared--and learned--across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems including non-Gaussian and nonstationary denoising of real-world color images.

  14. Learning discriminative dictionary for group sparse representation.

    PubMed

    Sun, Yubao; Liu, Qingshan; Tang, Jinhui; Tao, Dacheng

    2014-09-01

    In recent years, sparse representation has been widely used in object recognition applications. How to learn the dictionary is a key issue for sparse representation. A popular method is to use the l1 norm as the sparsity measurement of representation coefficients for dictionary learning. However, the l1 norm treats each atom in the dictionary independently, so the learned dictionary cannot well capture the multi-subspace structural information of the data. In addition, the learned subdictionary for each class usually shares some common atoms, which weakens the discriminative ability of the reconstruction error of each subdictionary. This paper presents a new dictionary learning model to improve sparse representation for image classification, which aims to learn a class-specific subdictionary for each class and a common subdictionary shared by all classes. The model is composed of a discriminative fidelity term, a weighted group sparse constraint, and a subdictionary incoherence term. The discriminative fidelity encourages each class-specific subdictionary to sparsely represent the samples in the corresponding class. The weighted group sparse constraint term aims at capturing the structural information of the data. The subdictionary incoherence term makes all subdictionaries as independent as possible. Because the common subdictionary represents features shared by all classes, we only use the reconstruction error of each class-specific subdictionary for classification. Extensive experiments are conducted on several public image databases, and the experimental results demonstrate the power of the proposed method compared with state-of-the-art methods.

  15. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from being time consuming and from poor robustness. To address these issues, a novel tracking method is presented by combining sparse representation and an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. Thus, the trained ELM classification function is able to efficiently remove most of the candidate samples related to background contents, thereby reducing the total computational cost of the following sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities to be a target) of samples on the ELM classification function are used to construct a new manifold learning constraint term of the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix form solution allows the candidate samples to be calculated in parallel, thereby leading to a higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359

  16. Visual tracking based on extreme learning machine and sparse representation.

    PubMed

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-10-22

    Existing sparse representation-based visual trackers mostly suffer from being time consuming and from poor robustness. To address these issues, a novel tracking method is presented by combining sparse representation and an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. Thus, the trained ELM classification function is able to efficiently remove most of the candidate samples related to background contents, thereby reducing the total computational cost of the following sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities to be a target) of samples on the ELM classification function are used to construct a new manifold learning constraint term of the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix form solution allows the candidate samples to be calculated in parallel, thereby leading to a higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker.

  17. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of reasonable and effective image classification techniques is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any ordinary classifier, it is not perfect in every respect. Ensemble learning is therefore introduced to address this issue: multiple different learners are trained and their outputs are combined to obtain more accurate and stable results. Therefore, this paper presents a polarimetric SAR image classification method based on ensemble learning of sparse representation to achieve the optimal classification.

  18. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal

    PubMed Central

    Ramkumar, Barathram; Sabarimalai Manikandan, M.

    2017-01-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of the sparse representation-based ECG enhancement system. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of ECG signal. PMID:28529758
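
    The temporal features named here (turning points, maximum absolute amplitude, zero crossings, autocorrelation) computed on a moving-average-filtered and first-order-differenced segment can be sketched as follows. The decision thresholds and the dictionary-learning stage of the framework are not shown, the exact recipe in the Letter may differ, and noise_features and the toy signals are illustrative.

```python
import numpy as np

def noise_features(ecg_segment, ma_window=5):
    """Compute simple temporal noise-detection features on a moving-average-
    filtered segment (decision logic/thresholds not shown)."""
    x = np.convolve(ecg_segment, np.ones(ma_window) / ma_window, mode='same')
    d = np.diff(x)                                     # first-order difference
    turning_points = int(np.sum(np.diff(np.sign(d)) != 0))
    max_abs_amplitude = float(np.max(np.abs(x)))
    zero_crossings = int(np.sum(np.diff(np.sign(x - x.mean())) != 0))
    x0 = x - x.mean()
    lag1_autocorr = float(np.dot(x0[:-1], x0[1:]) / (np.dot(x0, x0) + 1e-12))
    return {"turning_points": turning_points,
            "max_abs_amplitude": max_abs_amplitude,
            "zero_crossings": zero_crossings,
            "lag1_autocorr": lag1_autocorr}

# Toy usage: a crude QRS-like wave vs. the same wave with power-line interference.
t = np.linspace(0, 2, 720)
clean = np.sin(2 * np.pi * 1.2 * t) ** 63              # spiky, ECG-like toy signal
noisy = clean + 0.3 * np.sin(2 * np.pi * 50 * t)       # 50 Hz interference
print(noise_features(clean)["zero_crossings"], noise_features(noisy)["zero_crossings"])
```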

  19. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal.

    PubMed

    Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M

    2017-02-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of the sparse representation-based ECG enhancement system. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of ECG signal.

  20. Robust Multi Sensor Classification via Jointly Sparse Representation

    DTIC Science & Technology

    2016-03-14

    Excerpt: "...model [3] when we developed the multi-sensor joint sparse representation fusion model in the presence of gross but sparse noise penalized by an ℓ1...complementary features from multiple measurements, we incorporate different structures on the concatenated coefficient matrix A through the penalized function FS...sparsity structure that simultaneously penalizes several sparsity levels in a combined cost function. In the most general form, our model searches for the..."

  1. Learning sparse representations for human action recognition.

    PubMed

    Guha, Tanaya; Ward, Rabab Kreidieh

    2012-08-01

    This paper explores the effectiveness of sparse representations obtained by learning a set of overcomplete bases (dictionaries) in the context of action recognition in videos. Although this work concentrates on recognizing human movements (physical actions as well as facial expressions), the proposed approach is fairly general and can be used to address other classification problems. In order to model human actions, three overcomplete dictionary learning frameworks are investigated. An overcomplete dictionary is constructed using a set of spatio-temporal descriptors (extracted from the video sequences) in such a way that each descriptor is represented by some linear combination of a small number of dictionary elements. This leads to a more compact and richer representation of the video sequences compared to the existing methods that involve clustering and vector quantization. For each framework, a novel classification algorithm is proposed. Additionally, this work also presents the idea of a new local spatio-temporal feature that is distinctive, scale invariant, and fast to compute. The proposed approach repeatedly achieves state-of-the-art results on several public data sets containing various physical actions and facial expressions.

  2. Maxdenominator Reweighted Sparse Representation for Tumor Classification

    PubMed Central

    Li, Weibiao; Liao, Bo; Zhu, Wen; Chen, Min; Peng, Li; Wei, Xiaohui; Gu, Changlong; Li, Keqin

    2017-01-01

    The classification of tumors is crucial for the proper treatment of cancer. The sparse representation-based classifier (SRC) exhibits good classification performance and has been successfully used to classify tumors using gene expression profile data. In this study, we propose a three-step maxdenominator reweighted sparse representation classification (MRSRC) method to classify tumors. First, we extract a set of metagenes from the training samples. These metagenes can capture the structures inherent to the data and are more effective for classification than the original gene expression data. Second, we use a reweighted regularization method to obtain the sparse representation coefficients. Reweighted regularization can enhance sparsity and obtain better sparse representation coefficients. Third, we classify the data by utilizing a maxdenominator residual error function. The maxdenominator strategy can reduce the residual error and improve the accuracy of the final classification. Extensive experiments using publicly available gene expression profile data sets show that the performance of MRSRC is comparable with or better than many existing representative methods. PMID:28393883
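
    The reweighting step can be illustrated in isolation. The sketch below is not the authors' MRSRC code (which also involves metagene extraction and the maxdenominator residual rule): it iteratively solves a weighted Lasso in which each coefficient's penalty is 1/(|x_i| + eps), implemented via column rescaling, which enhances sparsity relative to a single plain ℓ1 solve. All names and the toy dictionary are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_l1(D, y, alpha=1e-3, n_iter=4, eps=1e-2):
    """Reweighted l1 sparse coding: repeatedly solve a weighted Lasso where
    each coefficient's penalty weight is 1/(|x_i| + eps)."""
    n_atoms = D.shape[1]
    w = np.ones(n_atoms)
    x = np.zeros(n_atoms)
    for _ in range(n_iter):
        Dw = D / w                                   # column rescaling <=> weighted penalty
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(Dw, y)
        x = lasso.coef_ / w
        w = 1.0 / (np.abs(x) + eps)                  # sparser solutions on the next pass
    return x

# Toy usage: recover a 3-sparse coefficient vector.
rng = np.random.default_rng(6)
D = rng.standard_normal((50, 120)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(120); x_true[[3, 40, 77]] = [1.0, -0.8, 0.6]
x_hat = reweighted_l1(D, D @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))           # ideally indices 3, 40, 77
```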

  3. Automatic landslide and mudflow detection method via multichannel sparse representation

    NASA Astrophysics Data System (ADS)

    Chao, Chen; Zhou, Jianjun; Hao, Zhuo; Sun, Bo; He, Jun; Ge, Fengxiang

    2015-10-01

    Landslide and mudflow detection is an important application of aerial images and high resolution remote sensing images, which is crucial for national security and disaster relief. Since high resolution images are often large in size, it is necessary to develop an efficient algorithm for landslide and mudflow detection. Based on the theory of sparse representation, we propose a novel automatic landslide and mudflow detection method in this paper, which combines multi-channel sparse representation and an eight-neighbor judgment method. The whole detection process is fully automatic. We conduct experiments on a high resolution image of Zhouqu district, Gansu province, China, acquired in August 2010, and obtain promising results which prove the effectiveness of using sparse representation for landslide and mudflow detection.

  4. A Robust Sparse Representation Model for Hyperspectral Image Classification.

    PubMed

    Huang, Shaoguang; Zhang, Hongyan; Pižurica, Aleksandra

    2017-09-12

    Sparse representation has been extensively investigated for hyperspectral image (HSI) classification and led to substantial improvements in the performance over the traditional methods, such as support vector machine (SVM). However, the existing sparsity-based classification methods typically assume Gaussian noise, neglecting the fact that HSIs are often corrupted by different types of noise in practice. In this paper, we develop a robust classification model that admits realistic mixed noise, which includes Gaussian noise and sparse noise. We combine a model for mixed noise with a prior on the representation coefficients of input data within a unified framework, which produces three kinds of robust classification methods based on sparse representation classification (SRC), joint SRC and joint SRC on a super-pixels level. Experimental results on simulated and real data demonstrate the effectiveness of the proposed method and clear benefits from the introduced mixed-noise model.

  5. Feature Selection and Pedestrian Detection Based on Sparse Representation

    PubMed Central

    Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei

    2015-01-01

    Pedestrian detection research is currently devoted to the extraction of effective pedestrian features, which has become one of the obstacles in pedestrian detection applications owing to the variety of pedestrian features and their large dimension. Based on the theoretical analysis of six frequently used features, SIFT, SURF, Haar, HOG, LBP and LSS, and their comparison with experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same description abilities and which features are the most stable. When any two of the six features are fused, the fusion feature is sparsely represented to obtain its important components. Sparse subsets of the fusion features can be rapidly generated by avoiding calculation of the corresponding index of dimension numbers of these feature descriptors; thus, the calculation speed of the feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same description ability and consume less time compared with their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, and thus these two features can be used to best describe the characteristics of pedestrians, and the sparse feature subsets of the combined HOG-LSS feature show better distinguishing ability and parsimony. PMID:26295480

  6. Feature Selection and Pedestrian Detection Based on Sparse Representation.

    PubMed

    Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei

    2015-01-01

    Pedestrian detection research is currently devoted to the extraction of effective pedestrian features, which has become one of the obstacles in pedestrian detection applications owing to the variety of pedestrian features and their large dimension. Based on the theoretical analysis of six frequently used features, SIFT, SURF, Haar, HOG, LBP and LSS, and their comparison with experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same description abilities and which features are the most stable. When any two of the six features are fused, the fusion feature is sparsely represented to obtain its important components. Sparse subsets of the fusion features can be rapidly generated by avoiding calculation of the corresponding index of dimension numbers of these feature descriptors; thus, the calculation speed of the feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same description ability and consume less time compared with their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, and thus these two features can be used to best describe the characteristics of pedestrians, and the sparse feature subsets of the combined HOG-LSS feature show better distinguishing ability and parsimony.

  7. Superpixel sparse representation for target detection in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Dong, Chunhua; Naghedolfeizi, Masoud; Aberra, Dawit; Qiu, Hao; Zeng, Xiangyan

    2017-05-01

    Sparse Representation (SR) is an effective classification method. Given a set of data vectors, SR aims at finding the sparsest representation of each data vector among the linear combinations of the bases in a given dictionary. In order to further improve the classification performance, joint SR, which incorporates interpixel correlation information of neighborhoods, has been proposed for image pixel classification. However, SR and joint SR demand a significant amount of computational time and memory, especially when classifying a large number of pixels. To address this issue, we propose a superpixel sparse representation (SSR) algorithm for target detection in hyperspectral imagery. We first cluster hyperspectral pixels into nearly uniform hyperspectral superpixels using our proposed patch-based SLIC approach based on their spectral and spatial information. The sparse representations of these superpixels are then obtained by simultaneously decomposing superpixels over a given dictionary consisting of both target and background pixels. The class of a hyperspectral pixel is determined by a competition between its projections on target and background subdictionaries. One key advantage of the proposed superpixel representation algorithm with respect to pixelwise and joint sparse representation algorithms is that it reduces computational cost while still maintaining competitive classification performance. We demonstrate the effectiveness of the proposed SSR algorithm through experiments on target detection in indoor and outdoor scene data under daylight illumination as well as remote sensing data. Experimental results show that SSR generally outperforms state-of-the-art algorithms both quantitatively and qualitatively.
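
    A simplified sketch of this detection scheme: pixels are grouped by spatial-plus-spectral k-means (a stand-in for the patch-based SLIC superpixels described in the abstract), each group is coded over the concatenated target/background dictionary with OMP (each pixel coded independently, a simplification of the simultaneous decomposition), and the whole group is labelled by whichever sub-dictionary yields the smaller residual. All names, dictionaries, and the toy data cube are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import orthogonal_mp

def detect_targets(cube, D_target, D_background, n_superpixels=50, k=5):
    """Group pixels into superpixel-like clusters, code each group over the
    concatenated dictionary, and label the group by sub-dictionary residual."""
    h, w, b = cube.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([yy.ravel() / h, xx.ravel() / w, cube.reshape(-1, b)])
    seg = KMeans(n_clusters=n_superpixels, n_init=4, random_state=0).fit_predict(feats)
    D = np.hstack([D_target, D_background])
    nt = D_target.shape[1]
    labels = np.zeros(h * w, dtype=bool)
    for s in range(n_superpixels):
        members = np.flatnonzero(seg == s)
        if members.size == 0:
            continue
        Y = cube.reshape(-1, b)[members].T                 # (bands, pixels in group)
        X = orthogonal_mp(D, Y, n_nonzero_coefs=k)         # (atoms, pixels in group)
        r_t = np.linalg.norm(Y - D[:, :nt] @ X[:nt])
        r_b = np.linalg.norm(Y - D[:, nt:] @ X[nt:])
        labels[members] = r_t < r_b
    return labels.reshape(h, w)

# Toy usage: 20x20 cube with 30 bands and a 5x5 target block (25 target pixels).
rng = np.random.default_rng(7)
t_sig = rng.uniform(0.5, 1.0, 30); b_sig = rng.uniform(0.0, 0.5, 30)
cube = b_sig + 0.05 * rng.standard_normal((20, 20, 30))
cube[5:10, 5:10] = t_sig + 0.05 * rng.standard_normal((5, 5, 30))
D_t = np.column_stack([t_sig + 0.05 * rng.standard_normal(30) for _ in range(5)])
D_b = np.column_stack([b_sig + 0.05 * rng.standard_normal(30) for _ in range(5)])
print(detect_targets(cube, D_t, D_b, n_superpixels=20).sum())   # pixels flagged as target
```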

  8. Remote sensing image fusion via wavelet transform and sparse representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Liu, Haijun; Liu, Ting; Wang, Feng; Li, Hongsheng

    2015-06-01

    In this paper, we propose a remote sensing image fusion method which combines the wavelet transform and sparse representation to obtain fusion images with high spectral resolution and high spatial resolution. Firstly, the intensity-hue-saturation (IHS) transform is applied to the Multi-Spectral (MS) images. Then, the wavelet transform is applied to the intensity component of the MS images and to the Panchromatic (Pan) image to construct their multi-scale representations. With the multi-scale representation, different fusion strategies are adopted for the low-frequency and the high-frequency sub-images. Sparse representation with a trained dictionary is introduced into the low-frequency sub-image fusion. The fusion rule for the sparse representation coefficients of the low-frequency sub-images is defined by the spatial frequency maximum. For high-frequency sub-images with prolific detail information, the fusion rule is established by an image information fusion measurement indicator. Finally, the fused results are obtained through the inverse wavelet transform and inverse IHS transform. The wavelet transform has the ability to extract the spectral information and the global spatial details from the original pairwise images, while sparse representation can extract the local structures of images effectively. Therefore, our proposed fusion method can well preserve the spectral information and the spatial detail information of the original images. The experimental results on remote sensing images have demonstrated that our proposed method can well maintain the spectral characteristics of fusion images with a high spatial resolution.
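
    A toy sketch of the fusion flow, under several simplifications: the IHS step is skipped (two grayscale inputs stand in for the MS intensity component and the Pan image), a random dictionary stands in for the trained one, the ℓ1 activity of the sparse codes serves as a crude proxy for the spatial-frequency-maximum rule, and PyWavelets provides the wavelet transform. All names are illustrative, not the paper's implementation.

```python
import numpy as np
import pywt
from sklearn.linear_model import orthogonal_mp
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def fuse(img_a, img_b, D, patch=8, k=6, wavelet='db2'):
    """Fuse two registered grayscale images: low-frequency bands in the sparse
    domain (larger code activity wins), high-frequency bands by absolute
    maximum, then inverse DWT."""
    (la, ha), (lb, hb) = pywt.dwt2(img_a, wavelet), pywt.dwt2(img_b, wavelet)

    pa = extract_patches_2d(la, (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(lb, (patch, patch)).reshape(-1, patch * patch)
    ca = orthogonal_mp(D, pa.T, n_nonzero_coefs=k)
    cb = orthogonal_mp(D, pb.T, n_nonzero_coefs=k)
    keep_a = np.abs(ca).sum(axis=0) >= np.abs(cb).sum(axis=0)
    fused_codes = np.where(keep_a, ca, cb)
    fused_lo = reconstruct_from_patches_2d(
        (D @ fused_codes).T.reshape(-1, patch, patch), la.shape)

    fused_hi = tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(ha, hb))
    return pywt.idwt2((fused_lo, fused_hi), wavelet)

# Toy usage with a random dictionary standing in for a trained one.
rng = np.random.default_rng(8)
D = rng.standard_normal((64, 128)); D /= np.linalg.norm(D, axis=0)
ms_intensity = rng.random((64, 64))          # stands in for the MS intensity component
pan = rng.random((64, 64))                   # stands in for the Pan image
print(fuse(ms_intensity, pan, D).shape)
```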

  9. Learning Stable Multilevel Dictionaries for Sparse Representations.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2015-09-01

    Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.

  10. Generative models for discovering sparse distributed representations.

    PubMed

    Hinton, G E; Ghahramani, Z

    1997-08-29

    We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.

  11. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use simple soft threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods [Yang10, newsam11], yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
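
    The encoding described (learn basis vectors from unlabeled low-level features, project features onto them, apply a soft-threshold activation, then classify with a linear SVM) is sketched below. K-means centroids stand in for the unsupervised basis learning, and the data, threshold, and function names are illustrative assumptions rather than the report's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def learn_basis(unlabeled_feats, n_basis=64):
    """Learn unit-norm basis vectors from unlabeled low-level features
    (k-means centroids stand in for the unsupervised basis learning)."""
    km = KMeans(n_clusters=n_basis, n_init=4, random_state=0).fit(unlabeled_feats)
    B = km.cluster_centers_
    return B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)

def sparse_encode(feats, B, theta=0.25):
    """Project features onto the basis and apply a soft-threshold activation,
    max(|z| - theta, 0) with the sign retained, yielding sparse codes."""
    z = feats @ B.T
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

# Toy usage: two synthetic scene classes, encoded then classified with a linear SVM.
rng = np.random.default_rng(9)
X0 = rng.normal(0.0, 1.0, (100, 32)); X1 = rng.normal(0.8, 1.0, (100, 32))
X = np.vstack([X0, X1]); y = np.repeat([0, 1], 100)
B = learn_basis(X, n_basis=16)
codes = sparse_encode(X, B)
clf = LinearSVC(max_iter=10000).fit(codes[::2], y[::2])    # train on half the codes
print(clf.score(codes[1::2], y[1::2]))                     # held-out accuracy
```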

  12. Efficient visual tracking via low-complexity sparse representation

    NASA Astrophysics Data System (ADS)

    Lu, Weizhi; Zhang, Jinglin; Kpalma, Kidiyo; Ronsin, Joseph

    2015-12-01

    Thanks to its good performance on object recognition, sparse representation has recently been widely studied in the area of visual object tracking. Up to now, little attention has been paid to the complexity of sparse representation, while most works have focused on performance improvement. By reducing the computation load related to sparse representation hundreds of times, this paper proposes by far the most computationally efficient tracking approach based on sparse representation. The proposal simply consists of two stages of sparse representation, one for object detection and the other for object validation. Experimentally, it achieves better performance than some state-of-the-art methods in both accuracy and speed.

  13. Dictionary learning algorithms for sparse representation.

    PubMed

    Kreutz-Delgado, Kenneth; Murray, Joseph F; Rao, Bhaskar D; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J

    2003-02-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).

  14. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811

  15. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

  16. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously proposed K-SVD-based grayscale image denoising algorithm. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.

  17. Neonatal atlas construction using sparse representation.

    PubMed

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2014-09-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, the anatomical feature constraints on the group structure of representations and also the overlapping of neighboring patches are imposed to ensure the anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast, for constructing a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases.

  18. Sparse representation for classification of dolphin whistles by type.

    PubMed

    Esfahanian, M; Zhuang, H; Erdol, N

    2014-07-01

    A compressive-sensing approach called Sparse Representation Classifier (SRC) is applied to the classification of bottlenose dolphin whistles by type. The SRC algorithm constructs a dictionary of whistles from the collection of training whistles. In the classification phase, an unknown whistle is represented sparsely by a linear combination of the training whistles and then the call class can be determined with an l1-norm optimization procedure. Experimental studies conducted in this research reveal the advantages and limitations of the proposed method against some existing techniques such as K-Nearest Neighbors and Support Vector Machines in distinguishing different vocalizations.

  19. Metasample-based sparse representation for tumor classification.

    PubMed

    Zheng, Chun-Hou; Zhang, Lei; Ng, To-Yee; Shiu, Simon C K; Huang, De-Shuang

    2011-01-01

    A reliable and accurate identification of the type of tumors is crucial to the proper treatment of cancers. In recent years, it has been shown that sparse representation (SR) by l1-norm minimization is robust to noise, outliers and even incomplete measurements, and SR has been successfully used for classification. This paper presents a new SR-based method for tumor classification using gene expression data. A set of metasamples is extracted from the training samples, and then an input testing sample is represented as a linear combination of these metasamples by the l1-regularized least squares method. Classification is achieved by using a discriminating function defined on the representation coefficients. Since l1-norm minimization leads to a sparse solution, the proposed method is called metasample-based SR classification (MSRC). Extensive experiments on publicly available gene expression data sets show that MSRC is efficient for tumor classification, achieving higher accuracy than many existing representative schemes.
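
    A compact sketch of this pipeline, under the assumption that metasamples are taken as the leading left singular vectors of each class's training matrix (one common choice; the paper's extraction may differ): the test sample is represented over the concatenated metasamples by l1-regularized least squares (Lasso) and classified by a per-class residual discriminating function. The function names and toy data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def metasamples(X_class, r=3):
    """Extract r metasamples (leading left singular vectors) from one class's
    training samples arranged as columns of X_class (genes x samples)."""
    U, _, _ = np.linalg.svd(X_class, full_matrices=False)
    return U[:, :r]

def msrc_classify(y, class_data, alpha=1e-2):
    """Represent y over the concatenated metasample dictionary via
    l1-regularised least squares, then classify by per-class residual."""
    metas = [metasamples(Xc) for Xc in class_data]
    D = np.hstack(metas)
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, y)
    x = lasso.coef_
    start, residuals = 0, []
    for M in metas:
        sl = slice(start, start + M.shape[1])
        residuals.append(np.linalg.norm(y - M @ x[sl]))
        start += M.shape[1]
    return int(np.argmin(residuals))

# Toy usage: two "tumor types" with distinct low-dimensional expression structure.
rng = np.random.default_rng(10)
basis0, basis1 = rng.standard_normal((200, 3)), rng.standard_normal((200, 3))
X0 = basis0 @ rng.standard_normal((3, 25)) + 0.1 * rng.standard_normal((200, 25))
X1 = basis1 @ rng.standard_normal((3, 25)) + 0.1 * rng.standard_normal((200, 25))
y = basis1 @ rng.standard_normal(3) + 0.1 * rng.standard_normal(200)
print(msrc_classify(y, [X0, X1]))                    # expected: 1
```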

  20. Automatic target recognition via sparse representations

    NASA Astrophysics Data System (ADS)

    Estabridis, Katia

    2010-04-01

    Automatic target recognition (ATR) based on the emerging technology of Compressed Sensing (CS) can considerably improve accuracy, speed and cost associated with these types of systems. An image based ATR algorithm has been built upon this new theory, which can perform target detection and recognition in a low dimensional space. Compressed dictionaries (A) are formed to include rotational information for a scale of interest. The algorithm seeks to identify y (the test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) with n < m, and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1 minimization techniques. Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was utilized to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges up to 3 km for both image modalities.
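    The snippet below is a minimal sketch of the y = Ax recovery step via Orthogonal Matching Pursuit, assuming a random compressed dictionary in place of the rotational target dictionary described above; all names and sizes are illustrative.

```python
# Recover a sparse x with y = A x using OMP; a valid "target" is built as a
# sparse combination of a few dictionary atoms.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m = 50, 200                      # n < m: compressed, overcomplete dictionary
A = rng.normal(size=(n, m))
A /= np.linalg.norm(A, axis=0)      # unit-norm atoms

true_support = [10, 47, 150]
x_true = np.zeros(m)
x_true[true_support] = [1.5, -2.0, 0.8]
y = A @ x_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
omp.fit(A, y)
x_hat = omp.coef_
print("recovered support:", np.flatnonzero(x_hat))  # should match true_support
```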

  1. Improving sparse representation algorithms for maritime video processing

    NASA Astrophysics Data System (ADS)

    Smith, L. N.; Nichols, J. M.; Waterman, J. R.; Olson, C. C.; Judd, K. P.

    2012-06-01

    We present several improvements to published algorithms for sparse image modeling with the goal of improving processing of imagery of small watercraft in littoral environments. The first improvement is to the K-SVD algorithm for training over-complete dictionaries, which are used in sparse representations. It is shown that the training converges significantly faster by incorporating multiple dictionary (i.e., codebook) update stages in each training iteration. The paper also provides several useful and practical lessons learned from our experience with sparse representations. Results of three applications of sparse representation are presented and compared to state-of-the-art methods: image compression, image denoising, and super-resolution.
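    The following sketch shows what a single K-SVD dictionary (codebook) update stage looks like, the kind of stage the abstract proposes to repeat several times per training iteration. It assumes a sparse coefficient matrix Gamma has already been computed for synthetic data; it is not the authors' implementation.

```python
# Illustrative single K-SVD codebook-update pass over all atoms.
import numpy as np

def ksvd_dictionary_update(D, Gamma, X):
    """Rank-1 SVD update of each used atom and its coefficients."""
    for k in range(D.shape[1]):
        users = np.flatnonzero(Gamma[k, :])          # signals that use atom k
        if users.size == 0:
            continue
        # Residual of those signals with atom k's contribution removed.
        E_k = X[:, users] - D @ Gamma[:, users] + np.outer(D[:, k], Gamma[k, users])
        U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
        D[:, k] = U[:, 0]                            # updated atom (unit norm)
        Gamma[k, users] = s[0] * Vt[0, :]            # updated coefficients
    return D, Gamma

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 500))                       # 500 synthetic training patches
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)
Gamma = rng.normal(size=(128, 500)) * (rng.random((128, 500)) < 0.05)  # sparse codes
D, Gamma = ksvd_dictionary_update(D, Gamma, X)
```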

  2. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by

  3. Classwise Sparse and Collaborative Patch Representation for Face Recognition.

    PubMed

    Lai, Jian; Jiang, Xudong

    2016-07-01

    Sparse representation has shown its merits in solving some classification problems and delivered some impressive results in face recognition. However, the unsupervised optimization of the sparse representation may result in undesired classification outcome if the variations of the data population are not well represented by the training samples. In this paper, a method of class-wise sparse representation (CSR) is proposed to tackle the problems of the conventional sample-wise sparse representation and applied to face recognition. It seeks an optimum representation of the query image by minimizing the class-wise sparsity of the training data. To tackle the problem of the uncontrolled training data, this paper further proposes a collaborative patch (CP) framework, together with the proposed CSR, named CSR-CP. Different from the conventional patch-based methods that optimize each patch representation separately, the CSR-CP approach optimizes all patches together to seek a CP groupwise sparse representation by putting all patches of an image into a group. It alleviates the problem of losing discriminative information in the training data caused by the partition of the image into patches. Extensive experiments on several benchmark face databases demonstrate that the proposed CSR-CP significantly outperforms the sparse representation-related holistic and patch-based approaches.

  4. LiDAR point classification based on sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Nan; Pfeifer, Norbert; Liu, Chun

    2017-04-01

    In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4-order tensor. A sparse representation for classification (SRC) method is used for LiDAR tensor classification. It turns out that SRC needs only a few training samples from each class while still achieving good classification results. Multiple features are extracted from raw LiDAR points to generate a high-dimensional vector at each point. Then the LiDAR tensor is built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes. Each index is called a mode: three spatial modes in directions X, Y, Z and one feature mode. A sparse representation for classification (SRC) method for LiDAR tensors is proposed in this paper. The sparsity algorithm finds the best representation of the test sample as a sparse linear combination of training samples from a dictionary. To explore the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices can be considered the principal components in each mode. The entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from the training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary along each mode is built. The overall structured dictionary is composed of class-specific sub-dictionaries. Then the sparse core tensor is calculated by the tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. It is expected that the original tensor should be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with
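    To make the core-tensor/factor-matrix structure concrete, the sketch below computes a Tucker-style decomposition of a small random 4-order tensor via the higher-order SVD (SVD of each mode unfolding). This is only an illustration of the decomposition the abstract builds on, not the paper's dictionary-learning or tensor-OMP procedure.

```python
# Tucker-style (HOSVD) decomposition of a random 4-order tensor.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                      # leading left singular vectors
    core = T
    for mode, U in enumerate(factors):                # project onto each factor
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(3)
T = rng.normal(size=(6, 6, 6, 8))                     # X, Y, Z and feature modes
core, factors = hosvd(T, ranks=(3, 3, 3, 4))
print(core.shape, [U.shape for U in factors])         # (3, 3, 3, 4) core
```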

  5. Joint sparse representation based automatic target recognition in SAR images

    NASA Astrophysics Data System (ADS)

    Zhang, Haichao; Nasrabadi, Nasser M.; Huang, Thomas S.; Zhang, Yanning

    2011-06-01

    In this paper, we introduce a novel joint sparse representation based automatic target recognition (ATR) method using multiple views, which can not only handle multi-view ATR without knowing the pose but also has the advantage of exploiting the correlations among the multiple views for a single joint recognition decision. We cast the problem as a multivariate regression model and recover the sparse representations for the multiple views simultaneously. The recognition is accomplished by classifying the target to the class which gives the minimum total reconstruction error accumulated across all the views. Extensive experiments have been carried out on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database to evaluate the proposed method compared with several state-of-the-art methods such as linear Support Vector Machine (SVM), kernel SVM, as well as a sparse representation based classifier. Experimental results demonstrate the effectiveness as well as the robustness of the proposed joint sparse representation ATR method.
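    A rough analogue of the multi-view joint-sparsity idea can be sketched with sklearn's MultiTaskLasso, which forces the same dictionary atoms to be selected across all views (row-sparse coefficients); the random dictionary, class layout, and decision rule below are illustrative stand-ins for the paper's SAR setup.

```python
# Joint sparse coding of several "views" of one target with a shared support.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(4)
n_feat, n_atoms, n_views = 40, 60, 3
D = rng.normal(size=(n_feat, n_atoms))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(3), n_atoms // 3)        # three target classes

# Multiple views of one target: each view is a column of Y (class-0 atoms 5, 12).
x_shared = np.zeros(n_atoms); x_shared[[5, 12]] = [1.0, -0.7]
Y = D @ np.column_stack([x_shared + 0.05 * rng.normal(size=n_atoms)
                         for _ in range(n_views)])

model = MultiTaskLasso(alpha=0.01, fit_intercept=False, max_iter=5000)
model.fit(D, Y)
X_hat = model.coef_.T                                 # joint sparse codes, atoms x views

# Classify by the class whose atoms give the smallest total reconstruction error.
errors = [np.linalg.norm(Y - D @ np.where(labels[:, None] == c, X_hat, 0.0))
          for c in range(3)]
print("predicted class:", int(np.argmin(errors)))
```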

  6. Sparse Representation Based SAR Vehicle Recognition along with Aspect Angle

    PubMed Central

    Ji, Kefeng; Zou, Huanxin; Sun, Jixiang

    2014-01-01

    As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR) recently. In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into a certain category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under the variations of depression angle and target configurations, as well as incomplete observation. PMID:25161398
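    The coefficient-projection step described above can be pictured as zeroing out coefficients whose training chips lie outside a window around the test vehicle's aspect angle; the sketch below is one plausible way to implement such a projection, with the window width, angles, and data all hypothetical.

```python
# Keep only the sparse coefficients within +/- half_width degrees of the test aspect.
import numpy as np

def project_to_aspect_window(x, train_angles, test_angle, half_width=15.0):
    """Zero out coefficients outside the aspect-angle window (degrees)."""
    diff = np.abs((train_angles - test_angle + 180.0) % 360.0 - 180.0)
    return np.where(diff <= half_width, x, 0.0)

rng = np.random.default_rng(5)
x = rng.normal(size=100) * (rng.random(100) < 0.1)    # sparse representation vector
train_angles = rng.uniform(0, 360, size=100)          # aspect angle of each training chip
x_sparser = project_to_aspect_window(x, train_angles, test_angle=42.0)
print(np.count_nonzero(x), "->", np.count_nonzero(x_sparser))
```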

  7. Sparse representation based SAR vehicle recognition along with aspect angle.

    PubMed

    Xing, Xiangwei; Ji, Kefeng; Zou, Huanxin; Sun, Jixiang

    2014-01-01

    As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR) recently. In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into a certain category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under the variations of depression angle and target configurations, as well as incomplete observation.

  8. Extracting pure endmembers using symmetric sparse representation for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Liu, Chun; Sun, Yanwei; Li, Weiyue; Li, Jialin

    2016-10-01

    This article proposes a symmetric sparse representation (SSR) method to extract pure endmembers from hyperspectral imagery (HSI). The SSR combines the features of the linear unmixing model and the sparse subspace clustering model of endmembers, and it assumes that the desired endmembers and all the HSI pixel points can be sparsely represented by each other. It formulates endmember extraction as the well-known archetypal analysis problem, and accordingly, extracting pure endmembers can be transformed into finding the archetypes of the minimal convex hull containing all the HSI pixel points. A vector quantization scheme is adopted to help in carefully choosing the initial pure endmembers, and the archetypal analysis program is solved using a simple projected gradient algorithm. Seven state-of-the-art methods are implemented for comparison with the SSR on both synthetic and real hyperspectral images. Experimental results show that the SSR outperforms all seven methods in spectral angle distance and root-mean-square error, and it can be a good alternative for extracting pure endmembers from HSI data.

  9. Sparse coding based feature representation method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring considerable computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further

  10. Optical double-image encryption and authentication by sparse representation.

    PubMed

    Mohammed, Emad A; Saadon, H L

    2016-12-10

    An optical double-image encryption and authentication method by sparse representation is proposed. The information from double-image encryption can be integrated into a sparse representation. Unlike the traditional double-image encryption technique, only sparse (partial) data from the encrypted data is adopted for the authentication process. Simulation results demonstrate that the correct authentication results are achieved even with partial information from the encrypted data. The randomly selected sparse encrypted information will be used as an effective key for a security system. Therefore, the proposed method is feasible, effective, and can provide an additional security layer for optical security systems. In addition, the method also achieved the general requirements of storage and transmission due to a high reduction of the encrypted information.

  11. Pseudo spectral Chebyshev representation of few-group cross sections on sparse grids

    SciTech Connect

    Bokov, P. M.; Botes, D.; Zimin, V. G.

    2012-07-01

    This paper presents a pseudo spectral method for representing few-group homogenised cross sections, based on hierarchical polynomial interpolation. The interpolation is performed on a multi-dimensional sparse grid built from Chebyshev nodes. The representation is assembled directly from the samples using basis functions that are constructed as tensor products of the classical one-dimensional Lagrangian interpolation functions. The advantage of this representation is that it combines the accuracy of Chebyshev interpolation with the efficiency of sparse grid methods. As an initial test, this interpolation method was used to construct a representation for the two-group macroscopic cross sections of a VVER pin cell. (authors)
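    The one-dimensional building block of this representation, Lagrangian interpolation on Chebyshev nodes, can be sketched as below (here via SciPy's barycentric interpolator); the cross-section function and burnup range are made-up stand-ins, and the full multi-dimensional sparse-grid assembly is not shown.

```python
# 1-D Lagrangian interpolation on Chebyshev nodes for a hypothetical cross section.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def chebyshev_nodes(n, a, b):
    """n Chebyshev points mapped from [-1, 1] to [a, b]."""
    k = np.arange(n)
    x = np.cos((2 * k + 1) * np.pi / (2 * n))
    return 0.5 * (a + b) + 0.5 * (b - a) * x

xsec = lambda burnup: 1.2e-2 + 3e-4 * np.sqrt(burnup)   # made-up cross-section curve
nodes = chebyshev_nodes(8, 0.0, 60.0)                   # hypothetical state-parameter grid
interp = BarycentricInterpolator(nodes, xsec(nodes))
print(abs(interp(25.0) - xsec(25.0)))                   # small interpolation error
```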

  12. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  13. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG) and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results of 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246

  14. Medical Image Fusion Based on Feature Extraction and Sparse Representation.

    PubMed

    Fei, Yin; Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG) and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results of 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.

  15. Inverse lithography using sparse mask representations

    NASA Astrophysics Data System (ADS)

    Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan

    2015-03-01

    We present a novel optimisation algorithm for inverse lithography, based on optimization of the mask derivative, a domain that is inherently sparse and, for rectilinear polygons, invertible. The method is first developed assuming a point light source, and then extended to general incoherent sources. What results is a fast algorithm that produces manufacturable masks (the search space is constrained to rectilinear polygons) and is flexible (specific constraints such as minimal line widths can be imposed). One inherent trick is to treat polygons as continuous entities, thus making aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated into the optimization without too much added complexity. We also explain how to extend the scheme to phase-changing mask optimization.

  16. Sparse Distributed Representation and Hierarchy: Keys to Scalable Machine Intelligence

    DTIC Science & Technology

    2016-04-01

    AFRL-RY-WP-TR-2016-0030, "Sparse Distributed Representation & Hierarchy: Keys to Scalable Machine Intelligence," Gerard (Rod) Rinkus, Greg ... (contract FA8650-13-C-7342). Indexed snippet: "... classification accuracy on the Weizmann data set, accomplished with 3.5 minutes training time, with no machine parallelism and almost no software ..."

  17. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, the sparse coding methods only encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness. PMID:27630710

  18. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    An approach is defined that describes a method of iterating over massively large arrays containing sparse data using an approach that is implementation independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory. This enables this approach to be backward compatible with existing schemes for representing sparse arrays as well as new approaches. What is novel here is a new approach for efficiently iterating over sparse arrays that is independent of the underlying memory layout representation of the array. A functional interface is defined for implementing sparse arrays in any modern programming language with a particular focus for the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix vector product into this representation for both the distributed and not-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program that JPL and our current program are engaged in. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA for its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
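    The following sketch illustrates the general idea of a functional iteration interface that hides the underlying sparse storage, using Python rather than Chapel and two illustrative backends (a dict-of-keys layout and a SciPy COO matrix); it is not the interface defined in the NASA report.

```python
# Iterate over nonzeros without knowing how the sparse array is stored,
# then write a matrix-vector product purely against that interface.
import numpy as np
from scipy.sparse import coo_matrix

def iter_nonzero(sparse_obj):
    """Yield (row, col, value) regardless of the underlying layout."""
    if isinstance(sparse_obj, dict):                       # dict-of-keys layout
        for (i, j), v in sparse_obj.items():
            yield i, j, v
    else:                                                  # assume a SciPy sparse matrix
        c = sparse_obj.tocoo()
        for i, j, v in zip(c.row, c.col, c.data):
            yield int(i), int(j), v

def spmv(sparse_obj, x):
    """Matrix-vector product written only against the iteration interface."""
    y = np.zeros(len(x))
    for i, j, v in iter_nonzero(sparse_obj):
        y[i] += v * x[j]
    return y

A_dok = {(0, 1): 2.0, (2, 0): -1.0}
A_coo = coo_matrix(([2.0, -1.0], ([0, 2], [1, 0])), shape=(3, 3))
x = np.array([1.0, 1.0, 1.0])
print(spmv(A_dok, x), spmv(A_coo, x))                      # identical results
```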

  19. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-10-01

    Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain a faithful restoration result when they are used independently. To improve the performance, in this paper, a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates the supervised weight learning and the nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
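    As a reference point for the numerical scheme mentioned above, here is a generic iterative shrinkage-thresholding (ISTA) sketch for an l1-regularized least-squares problem on synthetic data; the actual objective in the paper also includes the supervised nonlocal weights, which are not modeled here.

```python
# ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1 on synthetic data.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = ista(A, y, lam=0.5)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # recovered support
```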

  20. Distributed dictionary learning for sparse representation in sensor networks.

    PubMed

    Liang, Junli; Zhang, Miaohua; Zeng, Xianyu; Yu, Guoyang

    2014-06-01

    This paper develops a distributed dictionary learning algorithm for sparse representation of data distributed across the nodes of sensor networks, for settings in which sensitive or private data are stored, there is no fusion center, or a big data application is involved. The main contributions of this paper are: 1) we decouple the combined dictionary atom update and nonzero coefficient revision procedure into two-stage operations to facilitate distributed computations, first updating the dictionary atom in terms of the eigenvalue decomposition of the sum of the residual (correlation) matrices across the nodes, and then implementing a local projection operation to obtain the related representation coefficients for each node; 2) we cast the aforementioned atom update problem as a set of decentralized optimization subproblems with consensus constraints; we then simplify the multiplier update for symmetric undirected graphs in sensor networks and minimize the separable subproblems to attain the consistent estimates iteratively; and 3) dictionary atoms are typically constrained to be of unit norm in order to avoid the scaling ambiguity, and we efficiently solve the resultant hidden convex subproblems by determining the optimal Lagrange multiplier. Some experiments are given to show that the proposed algorithm is an alternative distributed dictionary learning approach that is suitable for the sensor network environment.

  1. Optimized Color Filter Arrays for Sparse Representation Based Demosaicking.

    PubMed

    Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian

    2017-03-08

    Demosaicking is the problem of reconstructing a color image from the raw image captured by a digital color camera that covers its only imaging sensor with a color filter array (CFA). Sparse representation based demosaicking has been shown to produce superior reconstruction quality. However, almost all existing algorithms in this category use the CFAs which are not specifically optimized for the algorithms. In this paper, we consider optimally designing CFAs for sparse representation based demosaicking, where the dictionary is well-chosen. The fact that CFAs correspond to the projection matrices used in compressed sensing inspires us to optimize CFAs via minimizing the mutual coherence. This is more challenging than that for traditional projection matrices because CFAs have physical realizability constraints. However, most of the existing methods for minimizing the mutual coherence require that the projection matrices should be unconstrained, making them inapplicable for designing CFAs. We consider directly minimizing the mutual coherence with the CFA's physical realizability constraints as a generalized fractional programming problem, which needs to find sufficiently accurate solutions to a sequence of nonconvex nonsmooth minimization problems. We adapt the redistributed proximal bundle method to address this issue. Experiments on benchmark images testify to the superiority of the proposed method. In particular, we show that a simple sparse representation based demosaicking algorithm with our specifically optimized CFA can outperform LSSC [1]. To the best of our knowledge, it is the first sparse representation based demosaicking algorithm that beats LSSC in terms of CPSNR.
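    The mutual-coherence criterion that the paper minimizes can be computed directly as the largest absolute normalized inner product between distinct columns of the projection matrix; the sketch below evaluates it for a random nonnegative matrix standing in for a CFA-induced projection.

```python
# Mutual coherence of a projection matrix: max off-diagonal entry of the
# absolute Gram matrix of its normalized columns.
import numpy as np

def mutual_coherence(A):
    G = A / np.linalg.norm(A, axis=0)      # normalize columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)            # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(7)
A = rng.random(size=(12, 48))              # nonnegative, as a physical CFA would be
print("mutual coherence:", mutual_coherence(A))
```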

  2. Sparse Representations for Three-Dimensional Range Data Restoration

    DTIC Science & Technology

    2009-09-01

    to images, in scanning 3D data occlusion or missing information can occur. We now investigate methods for fill- ing/ inpainting the holes in 3D shape...assuming the location of the holes is known.2 In [2], the problem of image inpainting is investigated using the sparse representations. Based on this

  3. Sparse representation of group-wise FMRI signals.

    PubMed

    Lv, Jinglei; Li, Xiang; Zhu, Dajiang; Jiang, Xi; Zhang, Xin; Hu, Xintao; Zhang, Tuo; Guo, Lei; Liu, Tianming

    2013-01-01

    Human brain function involves complex processes with population codes of neuronal activities. Neuroscience research has demonstrated that, when representing neuronal activities, sparsity is an important characterizing property. Inspired by this finding, a significant amount of effort from the scientific community has recently been devoted to sparse representations of signals and patterns, and promising achievements have been made. However, sparse representation of fMRI signals, particularly at the population level of a group of different brains, has rarely been explored. In this paper, we present a novel group-wise sparse representation of task-based fMRI signals from multiple subjects via dictionary learning methods. Specifically, we extract and pool task-based fMRI signals for a set of cortical landmarks, each of which possesses intrinsic anatomical correspondence, from a group of subjects. Then an effective online dictionary learning algorithm is employed to learn an over-complete dictionary from the pooled population of fMRI signals based on an optimally determined dictionary size. Our experiments have identified meaningful Atoms of Interest (AOI) in the learned dictionary, which correspond to consistent and meaningful functional responses of the brain to external stimuli. Our work demonstrates that sparse representation of group-wise fMRI signals is naturally suitable and effective in recovering population codes of neuronal signals conveyed in fMRI data.
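    A minimal sketch of over-complete online dictionary learning on pooled signals, in the spirit of the pipeline above, can be written with sklearn's MiniBatchDictionaryLearning; the "signal" matrix below is random noise and the dictionary size is chosen arbitrarily rather than optimally determined as in the paper.

```python
# Online dictionary learning on a pooled set of synthetic signals.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(8)
signals = rng.normal(size=(2000, 128))     # pooled signals x time points (synthetic)

dico = MiniBatchDictionaryLearning(n_components=256,   # over-complete: 256 > 128
                                   alpha=1.0, batch_size=64, random_state=0)
codes = dico.fit_transform(signals)        # sparse coefficients per signal
D = dico.components_                       # learned dictionary atoms (256 x 128)
print(codes.shape, D.shape, np.mean(codes != 0))       # sparsity of the codes
```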

  4. Image denoising via group Sparse representation over learned dictionary

    NASA Astrophysics Data System (ADS)

    Cheng, Pan; Deng, Chengzhi; Wang, Shengqian; Zhang, Chunfeng

    2013-10-01

    Images are one of the vital ways for us to obtain information. However, in practical applications, images are often subject to a variety of noise, so solving the problem of image denoising becomes particularly important. The K-SVD algorithm can improve the denoising effect by sparsely coding atoms instead of using the traditional sparse coding dictionary. In order to further improve the denoising effect, we propose to extend the K-SVD algorithm via group sparse representation. The key point of this method is dividing the sparse coefficients into groups, so as to adjust the correlation among the elements by controlling the size of the groups. This new approach can improve the local constraints between adjacent atoms, and it is thereby very important for increasing the correlation between the atoms. The experimental results show that our method has a better effect on image recovery, is efficient at preventing the block effect, and obtains smoother images.
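    The group-sparsity ingredient can be illustrated with a block (group) soft-thresholding operator that shrinks whole groups of coefficients together; the grouping and the threshold below are arbitrary, and this is not the authors' K-SVD extension itself.

```python
# Group (block) soft-thresholding: shrink each group by its l2 norm.
import numpy as np

def group_soft_threshold(x, groups, lam):
    out = np.zeros_like(x)
    for g in np.unique(groups):
        idx = groups == g
        norm = np.linalg.norm(x[idx])
        if norm > 0:
            out[idx] = max(0.0, 1.0 - lam / norm) * x[idx]
    return out

rng = np.random.default_rng(9)
x = rng.normal(size=12)
groups = np.repeat(np.arange(4), 3)        # 4 groups of 3 coefficients each
print(group_soft_threshold(x, groups, lam=1.5))
```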

  5. Feature selection and multi-kernel learning for sparse representation on a manifold.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced to be small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  7. Color Sparse Representations for Image Processing: Review, Models, and Prospects.

    PubMed

    Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I

    2015-11-01

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is given here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which yields an efficient color representation.

  8. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation.

    PubMed

    Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role for the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, being proven that dictionaries learnt by data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages; the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.

  9. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation

    PubMed Central

    Grossi, Giuliano; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role for the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, being proven that dictionaries learnt by data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages; the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD’s robustness and wide applicability. PMID:28103283

  10. Multiscale Sparse Image Representation with Learned Dictionaries (PREPRINT)

    DTIC Science & Technology

    2007-01-01

    ... image processing, e.g., image denoising [5]. In [1] the K-SVD is proposed for learning a single-scale dictionary for sparse representation of image ... 2. The single-scale K-SVD denoising algorithm: in this section, we briefly review the main ideas of the K-SVD framework for sparse ... weighted average: $\hat{x} = \left(\lambda I + \sum_{ij} R_{ij}^{T} R_{ij}\right)^{-1}\left(\lambda y + \sum_{ij} R_{ij}^{T} \hat{D}\hat{\alpha}_{ij}\right)$ (4). Fig. 1: The single-scale K-SVD-based image denoising algorithm.

  11. Airborne LIDAR Points Classification Based on Tensor Sparse Representation

    NASA Astrophysics Data System (ADS)

    Li, N.; Pfeifer, N.; Liu, C.

    2017-09-01

    The common statistical methods for supervised classification usually require a large amount of training data to achieve reasonable results, which is time consuming and inefficient. This paper proposes a tensor sparse representation classification (SRC) method for airborne LiDAR points. The LiDAR points are represented as tensors to keep their attributes in the spatial space. Then only a small amount of training data is used for dictionary learning, and the sparse tensor is calculated based on the tensor OMP algorithm. The point label is determined by the minimal reconstruction residual. Experiments are carried out on real LiDAR points, and the results show that objects can be successfully distinguished by this algorithm.

  12. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even small amounts of noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse encoding based classification algorithms have given promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology by modeling the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with results clearly superior to common state-of-the-art methodologies in different scenarios.

  13. Sparse Representation for Prediction of HIV-1 Protease Drug Resistance.

    PubMed

    Yu, Xiaxia; Weber, Irene T; Harrison, Robert W

    2013-01-01

    HIV rapidly evolves drug resistance in response to antiviral drugs used in AIDS therapy. Estimating the specific resistance of a given strain of HIV to individual drugs from sequence data has important benefits for both the therapy of individual patients and the development of novel drugs. We have developed an accurate classification method based on sparse representation theory, and demonstrate that this method is highly effective with HIV-1 protease. The protease structure is represented using our newly proposed encoding method based on Delaunay triangulation, and combined with the mutated amino acid sequences of known drug-resistant strains to train a machine-learning algorithm both for classification and regression of drug-resistant mutations. An overall cross-validated classification accuracy of 97% is obtained when trained on a publicly available database of approximately 1.5×10⁴ known sequences (Stanford HIV database http://hivdb.stanford.edu/cgi-bin/GenoPhenoDS.cgi). Resistance to four FDA-approved drugs is computed, and comparisons with other algorithms demonstrate that our method shows significant improvements in classification accuracy.
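    A rough sketch of a Delaunay-triangulation-based structural encoding is given below: triangulate residue coordinates with SciPy and count contacts between residue types over simplex edges. The coordinates and residue types are synthetic, and the exact encoding used by the authors may differ.

```python
# Delaunay-based contact-count encoding of a synthetic protein structure.
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

rng = np.random.default_rng(10)
n_residues, n_types = 99, 20               # HIV-1 protease has 99 residues per chain
coords = rng.normal(size=(n_residues, 3))  # stand-in C-alpha coordinates
types = rng.integers(0, n_types, size=n_residues)

tri = Delaunay(coords)
contact = np.zeros((n_types, n_types))
for simplex in tri.simplices:              # each simplex has 4 vertices in 3-D
    for a, b in combinations(simplex, 2):
        i, j = sorted((types[a], types[b]))
        contact[i, j] += 1

feature = contact[np.triu_indices(n_types)]   # 210-dimensional encoding
print(feature.shape)
```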

  14. Learning Multiscale Sparse Representations for Image and Video Restoration (PREPRINT)

    DTIC Science & Technology

    2007-07-01

    ... video denoising [35]. In this paper, we extend the basic K-SVD work, providing a framework for learning multiscale and sparse image representations. ... the denoising algorithm [1], the extensions to color image denoising, non-homogeneous noise, and inpainting [25], and the K-SVD for denoising videos [35]. Section ... improvements to the original single-scale K-SVD. Section 6 presents some applications of the multiscale K-SVD, covering grayscale and color image denoising ...

  15. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2014-01-01

    Figure caption from the report: "Effect of quality on recognition performance across (a) noise, (b) random blocks." The remainder of the indexed snippet consists of reference-list fragments, including "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine ...

  16. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, the dynamics of ecosystems, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements on remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data with other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse the spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with spatial resolution of 30 m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details well from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat

  17. Vigilance detection based on sparse representation of EEG.

    PubMed

    Yu, Hongbin; Lu, Hongtao; Ouyang, Tian; Liu, Hongjun; Lu, Bao-Liang

    2010-01-01

    Electroencephalogram (EEG) based vigilance detection of people who engage in long, attention-demanding tasks such as monotonous monitoring or driving is a key field in the research of brain-computer interfaces (BCI). However, robust detection of human vigilance from EEG is very difficult due to the low-SNR nature of EEG signals. Recently, compressive sensing and sparse representation have become successful tools in the fields of signal reconstruction and machine learning. In this paper, we propose to apply the sparse representation of EEG to the vigilance detection problem. We first use the continuous wavelet transform to extract the rhythm features of EEG data, and then employ the sparse representation method on the wavelet transform coefficients. We collected five subjects' EEG recordings in a simulated driving environment and applied the proposed method to detect the vigilance of the subjects. The experimental results show that the algorithm framework proposed in this paper can successfully estimate driver vigilance with an average accuracy of about 94.22%. We also compare our algorithm framework with other vigilance estimation methods using different feature extraction and classifier selection approaches, and the results show that the proposed method has obvious advantages in classification accuracy.

  18. Change detection with one-class sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Zhang, Mengmeng; Li, Wei; Du, Qian

    2016-10-01

    A one-class sparse representation classifier (OCSRC) is proposed to solve the multitemporal change detection problem for identifying disaster affected areas. The OCSRC method, which is adapted from a sparse representation classifier (SRC), incorporates the one-class strategy from a one-class support vector machine (OCSVM) to seek accurate representation for the class of changed areas. It assumes that pixels from the changed areas can be well represented by samples from this class, thus the representation errors are taken as the possibilities of change. Performances of OCSRC and OCSVM are tested and compared with multitemporal multispectral HJ-1A images acquired in Heilongjiang Province before and after the flood in 2013. The entire image, together with two subimages, are used for overall comparison and detailed discussion. Receiver-operating-characteristics curve results show that OCSRC outperforms OCSVM by a lower false-positive rate at a defined true-positive rate (TPR), and the gap is more obvious with high TPR values. The same outcome is also manifested in the change detection image results, with less misclassified pixels for OCSRC at certain TPR values, which implies a more accurate description of the changed area.

  19. Consistent sparse representations of EEG ERP and ICA components based on wavelet and chirplet dictionaries.

    PubMed

    Qiu, Jun-Wei; Zao, John K; Wang, Peng-Hua; Chou, Yu-Hsiang

    2010-01-01

    A randomized search algorithm for sparse representations of EEG event-related potentials (ERPs) and their statistically independent components is presented. This algorithm combines the greedy matching pursuit (MP) technique with the covariance matrix adaptation evolution strategy (CMA-ES) to select a small number of signal atoms from over-complete wavelet and chirplet dictionaries that offer the best approximations of quasi-sparse ERP signals. During the search process, adaptive pruning of signal parameters is used to eliminate redundant or degenerate atoms. As a result, the CMA-ES/MP algorithm is capable of producing accurate, efficient, and consistent sparse representations of ERP signals and their ICA components. This paper explains the working principles of the algorithm and presents preliminary results of its use.

  20. Epileptic EEG classification based on kernel sparse representation.

    PubMed

    Yuan, Qi; Zhou, Weidong; Yuan, Shasha; Li, Xueli; Wang, Jiwen; Jia, Guijuan

    2014-06-01

    The automatic identification of epileptic EEG signals is significant both for relieving the heavy workload of visual inspection of EEG recordings and for the treatment of epilepsy. This paper presents a novel method based on the theory of sparse representation to identify epileptic EEGs. First, the raw EEG epochs are preprocessed via Gaussian low-pass filtering and a differential operation. Then, in the scheme of sparse representation based classification (SRC), a test EEG sample is sparsely represented on the training set by solving an l1-minimization problem, and the representation residuals associated with ictal and interictal training samples are computed. The test EEG sample is categorized as the class that yields the minimum representation residual. Thus, unlike conventional EEG classification methods, the choice and calculation of EEG features are avoided in the proposed framework. Moreover, the kernel trick is employed to generate a kernel version of the SRC method to improve the separability between ictal and interictal classes. A satisfactory recognition accuracy of 98.63% for ictal versus interictal EEG classification and for ictal versus normal EEG classification has been achieved by the kernel SRC. In addition, its fast speed makes the kernel SRC suitable for real-time seizure monitoring applications in the near future.

  1. Finger vein verification system based on sparse representation.

    PubMed

    Xin, Yang; Liu, Zhi; Zhang, Haixia; Zhang, Hong

    2012-09-01

    Finger vein verification is a promising biometric pattern for personal identification in terms of security and convenience. The recognition performance of this technology heavily relies on the quality of finger vein images and on the recognition algorithm. To achieve efficient recognition performance, a special finger vein imaging device is developed, and a finger vein recognition method based on sparse representation is proposed. The motivation for the proposed method is that finger vein images exhibit a sparse property. In the proposed system, the regions of interest (ROIs) in the finger vein images are segmented and enhanced. Sparse representation and sparsity preserving projection on ROIs are performed to obtain the features. Finally, the features are measured for recognition. An equal error rate of 0.017% was achieved based on the finger vein image database, which contains images that were captured by using the near-IR imaging device that was developed in this study. The experimental results demonstrate that the proposed method is faster and more robust than previous methods.

  2. Discriminative object tracking via sparse representation and online dictionary learning.

    PubMed

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: the local sparse coding with an online updated discriminative dictionary for tracking (SOD part), and the keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of sparse representation and the online updated discriminative dictionary, the KP part is more robust than traditional methods in rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.

  3. Sparse representation-based spectral clustering for SAR image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangrong; Wei, Zhengli; Feng, Jie; Jiao, Licheng

    2011-12-01

    A new method, sparse representation based spectral clustering (SC) with the Nyström method, is proposed for synthetic aperture radar (SAR) image segmentation. Different from conventional SC, the proposed technique constructs the affinity matrix from the sparse coefficients obtained by solving an l1-minimization problem, and the Nyström method is applied to reduce the computational burden of the segmentation. An advantage of the proposed method is that the scaling parameter of the Gaussian kernel function does not need to be selected manually. We apply the proposed method, k-means, and the classic spectral clustering algorithm with the Nyström method to SAR image segmentation. The results show that, compared with the other two methods, the proposed method obtains much better segmentation results.
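
    As a rough illustration of building an affinity matrix from sparse coefficients, the sketch below codes each sample over the remaining samples with an l1 solver and feeds the symmetrized coefficient magnitudes to scikit-learn's spectral clustering. It is a simplified stand-in for the paper's method: the Nyström acceleration is omitted and Lasso is an assumed choice of solver, so it is only practical for small numbers of samples.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_affinity_clustering(X, n_clusters, alpha=0.05):
    """X: (n_samples, n_features) feature vectors (e.g. per-pixel or per-patch)."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Represent sample i as a sparse combination of all the other samples.
        others = np.delete(X, i, axis=0)
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=2000)
        coder.fit(others.T, X[i])
        C[i, np.arange(n) != i] = coder.coef_
    # Symmetric affinity matrix built from the sparse coefficients.
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```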

  4. Inpainting with sparse linear combinations of exemplars

    SciTech Connect

    Wohlberg, Brendt

    2008-01-01

    We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling in of missing regions of other exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
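
    A minimal sketch of the idea, with assumed details: the missing pixels of a patch are estimated from a sparse linear combination of exemplar blocks fitted on the observed pixels only. Lasso is used here as a convenient surrogate for the functional minimization described in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def inpaint_patch(target, known_mask, exemplars, alpha=0.01):
    """target: (p,) flattened patch (values where known_mask is False are ignored).
    known_mask: (p,) boolean, True for observed pixels.
    exemplars: (n_exemplars, p) flattened blocks taken from intact parts of the image.
    """
    D = exemplars.T                                   # columns are exemplar blocks
    # Fit the sparse linear combination using the observed pixels only.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D[known_mask], target[known_mask])
    estimate = D @ coder.coef_                        # estimate of the full patch
    result = target.copy()
    result[~known_mask] = estimate[~known_mask]       # fill in only the missing pixels
    return result
```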

  5. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    PubMed

    Li, Bo; Zhao, Fuchen; Su, Zhuo; Liang, Xiangguo; Lai, Yu-Kun; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  6. Sparse signal representation and its applications in ultrasonic NDE.

    PubMed

    Zhang, Guang-Ming; Zhang, Cheng-Zhong; Harvey, David M

    2012-03-01

    Many sparse signal representation (SSR) algorithms have been developed in the past decade. The advantages of SSR, such as compact representations and super resolution, lead to state-of-the-art performance of SSR for processing ultrasonic non-destructive evaluation (NDE) signals. Choosing a suitable SSR algorithm and designing an appropriate overcomplete dictionary is key to success. After a brief review of sparse signal representation methods and the design of overcomplete dictionaries, this paper addresses the recent accomplishments of SSR for processing ultrasonic NDE signals. The advantages and limitations of SSR algorithms and various overcomplete dictionaries widely used in ultrasonic NDE applications are explored in depth. Their performance improvement compared to conventional signal processing methods in many applications, such as ultrasonic flaw detection and noise suppression, echo separation and echo estimation, and ultrasonic imaging, is investigated. The challenging issues met in practical ultrasonic NDE applications, for example the design of a good dictionary, are discussed. Representative experimental results are presented for demonstration.

  7. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    PubMed

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  8. Learning Sparse Representation for Objective Image Retargeting Quality Assessment.

    PubMed

    Jiang, Qiuping; Shao, Feng; Lin, Weisi; Jiang, Gangyi

    2017-04-13

    The goal of image retargeting is to adapt source images to target displays with different sizes and aspect ratios. Different retargeting operators create different retargeted images, and a key problem is to evaluate the performance of each retargeting operator. Subjective evaluation is the most reliable, but it is cumbersome and labor-intensive, and, more importantly, it is hard to embed into online optimization systems. This paper focuses on exploring the effectiveness of sparse representation for objective image retargeting quality assessment. The principal idea is to extract distortion-sensitive features from one image (e.g., the retargeted image) and further investigate how many of these features are preserved or changed in another one (e.g., the source image) to measure the perceptual similarity between them. To create a compact and robust feature representation, we learn two overcomplete dictionaries to represent the distortion-sensitive features of an image. Features including local geometric structure and global context information are both addressed in the proposed framework. The intrinsic discriminative power of sparse representation is then exploited to measure the similarity between the source and retargeted images. Finally, individual quality scores are fused into an overall quality by a typical regression method. Experimental results on several databases have demonstrated the superiority of the proposed method.

  9. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. With this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.

  10. An enhanced sparse representation strategy for signal classification

    NASA Astrophysics Data System (ADS)

    Zhou, Yin; Gao, Jinglun; Barner, Kenneth E.

    2012-06-01

    Sparse representation based classification (SRC) has achieved state-of-the-art results on face recognition. It is hence desirable to extend its power to a broader range of classification tasks in pattern recognition. SRC first encodes a query sample as a linear combination of a few atoms from a predefined dictionary. It then identifies the label by evaluating which class yields the minimum reconstruction error. The effectiveness of SRC is limited by an important assumption that data points from different classes are not distributed along the same radius direction. Otherwise, the approach loses its discrimination ability, even though data from different classes are actually well separated in terms of Euclidean distance. This assumption is reasonable for face recognition, as images of the same subject under different intensity levels are still considered to be of the same class. However, the assumption is not always satisfied when dealing with many other real-world data, e.g., the Iris dataset, where classes are stratified along the radius direction. In this paper, we propose a new coding strategy, called Nearest-Farthest Neighbors based SRC (NF-SRC), to effectively overcome this limitation of SRC. The dictionary is composed of both the nearest neighbors and the farthest neighbors. While the nearest neighbors are used to narrow the selection of candidate samples, the farthest neighbors are employed to make the dictionary more redundant. NF-SRC encodes each query signal in a greedy way similar to OMP. The proposed approach is evaluated over extensive experiments. The encouraging results demonstrate the feasibility of the proposed method.
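
    The dictionary construction can be sketched as follows. This is an illustrative stand-in rather than the exact NF-SRC coding rule: for a query it selects the k nearest and k farthest training samples as the dictionary and codes the query greedily with ordinary OMP; classification would then proceed by class-wise residuals as in standard SRC.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def nf_dictionary_code(train_X, test_x, k_near=10, k_far=10, n_nonzero=5):
    """train_X: (n_train, n_features); test_x: (n_features,).
    Returns the indices of the selected atoms and their sparse coefficients."""
    dists = np.linalg.norm(train_X - test_x, axis=1)
    order = np.argsort(dists)
    # Nearest neighbors narrow the candidates; farthest neighbors add redundancy.
    idx = np.concatenate([order[:k_near], order[-k_far:]])
    D = train_X[idx].T                                # columns are the selected atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, test_x)
    return idx, omp.coef_
```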

  11. Supervised Discriminative Group Sparse Representation for Mild Cognitive Impairment Diagnosis

    PubMed Central

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2014-01-01

    Research on the early detection of Mild Cognitive Impairment (MCI), a prodromal stage of Alzheimer’s Disease (AD), with resting-state functional Magnetic Resonance Imaging (rs-fMRI) has been of great interest for the last decade. As witnessed by recent studies, functional connectivity is a useful concept for extracting brain network features and finding biomarkers for brain disease diagnosis. However, estimating functional connectivity from rs-fMRI remains challenging due to the inherent high-dimensionality problem. In order to tackle this problem, we utilize a group sparse representation along with a structural equation model. Unlike the conventional group sparse representation method, which does not explicitly consider class-label information that can help enhance diagnostic performance, in this paper we propose a novel supervised discriminative group sparse representation method that penalizes a large within-class variance and a small between-class variance of the connectivity coefficients. Thanks to the newly devised penalization terms, we can learn connectivity coefficients that are similar within the same class and distinct between classes, thus helping enhance the diagnostic accuracy. The proposed method also allows the learned common network structure to preserve the network-specific and label-related characteristics. In our experiments on the rs-fMRI data of 37 subjects (12 MCI; 25 healthy normal controls) with a cross-validation technique, we demonstrated the validity and effectiveness of the proposed method, showing a diagnostic accuracy of 89.19% and a sensitivity of 0.9167. PMID:25501275

  12. A MRI-CT prostate registration using sparse representation technique

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    Purpose: To develop a new MRI-CT prostate registration method using a patch-based deformation prediction framework to improve MRI-guided prostate radiotherapy by incorporating multiparametric MRI into planning CT images. Methods: The main contribution is to estimate the deformation between prostate MRI and CT images in a patch-wise fashion by using the sparse representation technique. We assume that two image patches should follow the same deformation if their patch-wise appearance patterns are similar. Specifically, there are two stages in our proposed framework, i.e., the training stage and the application stage. In the training stage, each prostate MR image is carefully registered to the corresponding CT image, and all training MR and CT images are carefully registered to a selected CT template. Thus, we obtain the dense deformation field for each training MR and CT image. In the application stage, for registering a new subject MR image with the same subject's CT image, we first select a small number of key points at the distinctive regions of this subject CT image. Then, for each key point in the subject CT image, we extract the image patch centered at the underlying key point. Next, we adaptively construct the coupled dictionary for the underlying point, where each atom in the dictionary consists of image patches and the respective deformations obtained from the training pair-wise MRI-CT images. The subject image patch can then be sparsely represented by a linear combination of training image patches in the dictionary, and we apply the same sparse coefficients to the respective deformations in the dictionary to predict the deformation for the subject MR image patch. After repeating the same procedure for each subject CT key point, we use B-splines to interpolate a dense deformation field, which is used as the initialization that allows the registration algorithm to estimate the remaining small deformations from the MR to the CT image.

  13. Sparse Representation Based Classification with Structure Preserving Dimension Reduction

    DTIC Science & Technology

    2014-03-13


  14. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

    There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.

  15. Blind deconvolution using an improved L0 sparse representation

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Li, Qi; Xu, Zhihai; Chen, Yueting

    2014-09-01

    In this paper, we present a method for single-image blind deconvolution. Many common blind deconvolution methods need to first generate a salient image, whereas this paper presents a novel L0 sparse expression to solve the ill-posed problem directly. It does not need to filter the blurred image as a preliminary restoration step and can use the gradient information as a fidelity term during optimization. The key to the blind deconvolution problem is to estimate an accurate kernel. First, based on an L2 sparse expression using the gradient operator as a prior, the kernel can be estimated roughly and efficiently in the frequency domain. We adopt a multi-scale scheme that estimates the blur kernel from the coarsest level to the finest level. After the estimation of each level's kernel, the L0 sparse representation is employed as the fidelity term during restoration. After derivation, the L0 norm can be approximately converted into a sum term and an L1-norm term, which can be addressed by the Split-Bregman method. Using the estimated blur kernel and the TV deconvolution model, the final restored image is obtained. Experimental results show that the proposed method is fast and can accurately reconstruct the kernel, especially when the blur is motion blur, defocus blur, or a superposition of the two. The restored image is of higher quality than that of some state-of-the-art algorithms.

  16. Dictionary learning method for joint sparse representation-based image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals as sparse linear combinations of prototype signal atoms that make up a dictionary. JSR assumes that the different signals acquired by various sensors of the same scene form an ensemble: these signals share a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), we propose a novel dictionary learning method for JSR (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure once with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm, which is often used in previous JSR-based fusion algorithms. To capture image details more efficiently, we propose a generalized JSR in which the signal ensemble depends on two dictionaries, and MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.

  17. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  18. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  19. Magnetic resonance brain tissue segmentation based on sparse representations

    NASA Astrophysics Data System (ADS)

    Rueda, Andrea

    2015-12-01

    Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of the main tissues or of specific structures is challenging due to anatomical variability and complexity, and the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are related to tissue labels at the level of small patches, and this information is gathered in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to the original segmentations).

  20. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. With this method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  1. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable

    PubMed Central

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition. PMID:26950589
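
    A crude version of the sketching process is illustrated below. It is an assumption-laden simplification: an ordinary Fourier spectrogram is used instead of an auditory spectrogram, and peaks are selected by a single global threshold rather than the paper's peak-picking algorithm.

```python
import numpy as np
from scipy.signal import spectrogram

def sparse_sketch(x, fs, peaks_per_second=10):
    """x: 1-D signal, fs: sampling rate (Hz).
    Returns frequency bins, time frames, and the sparsified spectrogram."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)
    n_keep = min(Sxx.size, max(1, int(peaks_per_second * x.size / fs)))
    # Threshold at the value of the n_keep-th largest time-frequency bin.
    thresh = np.partition(Sxx.ravel(), -n_keep)[-n_keep]
    return f, t, np.where(Sxx >= thresh, Sxx, 0.0)
```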

  2. Two-stage nonnegative sparse representation for large-scale face recognition.

    PubMed

    He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang; Kong, Xiang-Wei

    2013-01-01

    This paper proposes a novel nonnegative sparse representation approach, called two-stage sparse representation (TSR), for robust face recognition on a large-scale database. Based on the divide-and-conquer strategy, TSR decomposes the procedure of robust face recognition into an outlier detection stage and a recognition stage. In the first stage, we propose a general multisubspace framework to learn a robust metric in which noise and outliers in image pixels are detected. Potential loss functions, including L1, L2,1, and correntropy, are studied. In the second stage, based on the learned metric and collaborative representation, we propose an efficient nonnegative sparse representation algorithm to find an approximate solution of the sparse representation. According to the L1-ball theory in sparse representation, the approximate solution is unique and can be optimized efficiently. Then a filtering strategy is developed to avoid the computation of the sparse representation on the whole large-scale dataset. Moreover, theoretical analysis also gives the necessary condition for the nonnegative least squares technique to find a sparse solution. Extensive experiments on several public databases have demonstrated that the proposed TSR approach, in general, achieves better classification accuracy than the state-of-the-art sparse representation methods. More importantly, a significant reduction of computational costs is reached in comparison with the sparse representation classifier; this enables the TSR to be more suitable for robust face recognition on a large-scale dataset.

  3. Image denoising via sparse and redundant representations over learned dictionaries.

    PubMed

    Elad, Michael; Aharon, Michal

    2006-12-01

    We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm, with state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
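
    The overall patch-based pipeline can be approximated with off-the-shelf tools. The sketch below is not the K-SVD algorithm of the paper: it substitutes scikit-learn's MiniBatchDictionaryLearning for K-SVD and OMP for the sparse coding step, and it replaces the global image prior with simple averaging of overlapping patch estimates.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def denoise_with_learned_dictionary(noisy, patch_size=(8, 8), n_atoms=128, n_nonzero=4):
    """Denoise a small grayscale image by sparse coding its patches over a
    dictionary learned from the noisy image itself."""
    patches = extract_patches_2d(noisy, patch_size)
    X = patches.reshape(patches.shape[0], -1)
    means = X.mean(axis=1, keepdims=True)
    X = X - means                                     # remove per-patch DC component

    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero)
    codes = dico.fit(X).transform(X)                  # sparse codes of every patch
    denoised = (codes @ dico.components_ + means).reshape(patches.shape)
    return reconstruct_from_patches_2d(denoised, noisy.shape)
```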

  4. Learned dictionaries for sparse image representation: properties and results

    NASA Astrophysics Data System (ADS)

    Skretting, Karl; Engan, Kjersti

    2011-09-01

    Sparse representation of images using learned dictionaries has been shown to work well for applications like image denoising, inpainting, and image compression. In this paper dictionary properties are reviewed from a theoretical perspective, and experimental results for learned dictionaries are presented. The main dictionary properties are the upper and lower frame (dictionary) bounds, and (mutual) coherence properties based on the angle between dictionary atoms. Both l0 sparsity and l1 sparsity are considered, by using a matching pursuit method, order recursive matching pursuit (ORMP), and a basis pursuit method, i.e. LARS or Lasso. For dictionary learning the following methods are considered: iterative least squares (ILS-DLA or MOD), recursive least squares (RLS-DLA), K-SVD and online dictionary learning (ODL). Finally, it is shown how these properties relate to an image compression example.
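
    The main properties mentioned above, mutual coherence and the frame bounds, are easy to compute for an explicit dictionary; a small sketch (assuming the dictionary is given as a matrix with atoms as columns):

```python
import numpy as np

def dictionary_properties(D):
    """D: (n_features, n_atoms) dictionary with atoms as columns.
    Returns (mutual coherence, lower frame bound, upper frame bound)."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # l2-normalize the atoms
    G = Dn.T @ Dn                                       # Gram matrix
    mu = np.max(np.abs(G - np.eye(G.shape[0])))         # mutual coherence
    # Frame bounds: extreme eigenvalues of D D^T for the normalized dictionary.
    eigvals = np.linalg.eigvalsh(Dn @ Dn.T)
    return mu, eigvals.min(), eigvals.max()
```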

  5. Face recognition under variable illumination via sparse representation of patches

    NASA Astrophysics Data System (ADS)

    Fan, Shouke; Liu, Rui; Feng, Weiguo; Zhu, Ming

    2013-10-01

    The objective of this work is to recognize faces under variations in illumination. Previous works have indicated that variations in illumination can dramatically reduce the performance of face recognition. To this end, an efficient face recognition method that is robust to variable illumination is proposed in this paper. First, a discrete cosine transform (DCT) in the logarithm domain is employed to preprocess the images, removing the illumination variations by discarding an appropriate number of low-frequency DCT coefficients. Then, a face image is partitioned into several patches, and each patch is classified using sparse representation-based classification. Finally, the identity of a test image is determined by the classification results of its patches. Experimental results on the Yale B database and the CMU PIE database show that excellent recognition rates can be achieved by the proposed method.
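
    The illumination-normalization step can be sketched as follows; the number of discarded coefficients and the block-wise (rather than zig-zag) selection are assumptions made here for simplicity, not taken from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def normalize_illumination(img, n_discard=8):
    """img: 2-D grayscale face image. Zeroes a low-frequency block of DCT
    coefficients in the log domain while keeping the DC term."""
    log_img = np.log(img.astype(float) + 1.0)
    C = dctn(log_img, norm="ortho")
    dc = C[0, 0]
    C[:n_discard, :n_discard] = 0.0      # drop low-frequency (illumination) content
    C[0, 0] = dc                         # keep overall brightness
    return np.exp(idctn(C, norm="ortho"))
```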

  6. Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries

    PubMed Central

    Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.

    2015-01-01

    Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slow varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures, that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494

  7. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding, periodic variation of the load distribution, and the impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is outside the load zone. This non-stationary characteristic and the impulse-missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method. This leads to a high similarity between the atoms and the defect-induced impulses, and also sharply reduces the redundancy of the dictionary. To improve the matching accuracy and the speed of solving the sparse coefficients, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. Simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under conditions of large rolling element sliding and low signal-to-noise ratio.
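
    A sketch of the dictionary construction: atoms are time-shifted unit impulse responses of a damped second-order system, and each signal segment is coded greedily over them. The natural frequency and damping ratio are passed in as parameters here; in the paper they are identified from the fault signal by correlation filtering, which is not reproduced in this sketch.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def impulse_atom(fs, length, fn, zeta, delay):
    """Shifted unit impulse response of a damped second-order system."""
    t = np.arange(length) / fs
    wd = 2 * np.pi * fn * np.sqrt(1 - zeta ** 2)
    h = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(wd * t)
    h = np.roll(h, delay)
    h[:delay] = 0.0                       # no response before the impulse arrives
    return h / (np.linalg.norm(h) + 1e-12)

def extract_impulsive_component(segment, fs, fn, zeta, n_nonzero=5):
    """Sparse-code one segment over shifted impulse-response atoms via OMP."""
    n = segment.size
    D = np.column_stack([impulse_atom(fs, n, fn, zeta, d) for d in range(n)])
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, segment)
    return D @ omp.coef_                  # reconstructed impulsive (fault) component
```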

  8. Inpainting of historical seismograms using sparse representation method

    NASA Astrophysics Data System (ADS)

    Wang, Lifu; Sun, Yi; Cai, Xiaogang

    2015-01-01

    This paper presents a method for inpainting historical seismograms recorded by a pen-and-paper drum-type seismograph. In such seismograms, some portions of the wave may be lost or distorted owing to time marks or violent shaking. In this study, the seismic waveform is divided into several frames of equal length, and the lost or distorted portions are restored frame by frame. Because a seismogram contains several repetitive patterns across the entire waveform, each frame can be sparsely represented on the basis of these patterns. Therefore, the sparse representation model is employed to represent historical seismograms. In addition, an inpainting model that employs sparsity as a prior is formulated, and it is used to restore the lost portions by solving an L0-norm minimization problem. However, this minimization problem may be ill-posed and produce an incorrect outcome if the duration of the missing interval is very large. Therefore, to address this ill-posed problem, a prior based on the Fourier spectrum of the waveform is added to the inpainting method. Simulation results show that the proposed inpainting method can restore the missing wave well.

  9. Pedestrian detection from thermal images: A sparse representation based approach

    NASA Astrophysics Data System (ADS)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cooler background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in unimodal and multimodal frameworks, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and individual dictionaries, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare it with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.

  10. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-07

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  11. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  12. Flutter signal extracting technique based on FOG and self-adaptive sparse representation algorithm

    NASA Astrophysics Data System (ADS)

    Lei, Jian; Meng, Xiangtao; Xiang, Zheng

    2016-10-01

    Because of the various moving parts inside a spacecraft, its structure can undergo minor angular vibration while in orbit, which blurs the images formed by the space camera. Thus, image compensation techniques are required to eliminate or alleviate the effect of this movement on image formation, and precise measurement of the flutter angle is necessary. Owing to its high sensitivity, broad bandwidth, simple structure and lack of internal mechanical moving parts, a FOG (fiber optic gyro) is adopted in this study to measure minor angular vibration. The movement leading to image degradation is then obtained by calculation. The idea of the movement-information extraction algorithm based on self-adaptive sparse representation is to use an arctangent function approximating the L0 norm to construct an unconstrained sparse reconstruction model for noisy signals, and then to solve the model with a method based on the steepest descent and BFGS algorithms to estimate the sparse signal. Then, taking advantage of the principle that random noise cannot be represented by a linear combination of dictionary elements, the useful signal and random noise are separated effectively. Because the main interference of minor angular vibration with the image formation of the space camera is random noise, the sparse representation algorithm can extract the useful information to a large extent and serves as a suitable pre-processing step for image restoration. The self-adaptive sparse representation algorithm presented in this paper is used to process the measured minor-angular-vibration signal of the FOG used by a certain spacecraft. Component analysis of the processing results shows that the algorithm can extract the micro angular vibration signal of the FOG precisely and effectively, achieving a precision of 0.1".

  13. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation.

    PubMed

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-12-16

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Moreover, satellite observations are susceptible to noise, and traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), the atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.

  14. Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Girard, J. N.; Garsden, H.; Starck, J. L.; Corbel, S.; Woiselle, A.; Tasse, C.; McKean, J. P.; Bobin, J.

    2015-08-01

    Compressed sensing theory is slowly making its way into the solution of more and more astronomical inverse problems. We address here the application of sparse representations, convex optimization, and proximal theory to radio interferometric imaging. First, we present the theory behind interferometric imaging, sparse representations, and convex optimization; second, we illustrate their application with numerical tests of SASIR, an implementation of FISTA, a forward-backward splitting algorithm, hosted in a LOFAR imager. Various tests have been conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution by a factor of ≈ 2) for point sources as compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range, and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five times lower residuals) of the extended emission as compared to CLEAN. With the advent of large radio telescopes, there is scope for improving classical imaging methods by combining convex optimization methods with sparse representations.
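
    For reference, FISTA itself is compact. The sketch below solves the generic l1-regularized least-squares problem with a dense matrix standing in for the interferometric measurement operator; it is not the SASIR implementation.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||_2^2 + lam*||x||_1 (dense A for illustration)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    soft = lambda v, thr: np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)   # proximal (soft-thresholding) step
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```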

  15. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    PubMed Central

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Moreover, satellite observations are susceptible to noise, and traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), the atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  16. A novel image compression-encryption hybrid algorithm based on the analysis sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Xu, Biao; Zhou, Nanrun

    2017-06-01

    Recent advances in compressive sensing theory have been invoked for image compression-encryption based on the synthesis sparse model. In this paper we concentrate on an alternative sparse representation model, the analysis sparse model, to propose a novel image compression-encryption hybrid algorithm. The analysis sparse representation of the original image is obtained with an overcomplete fixed dictionary whose atom order is scrambled, and this sparse representation can be considered an encrypted version of the image. Moreover, the sparse representation is compressed to reduce its dimension and simultaneously re-encrypted by compressive sensing. To enhance the security of the algorithm, a pixel-scrambling method is employed to re-encrypt the measurements of the compressive sensing. Various simulation results verify that the proposed image compression-encryption hybrid algorithm provides considerable compression performance with good security.

  17. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.

  18. Face sketch synthesis via sparse representation-based greedy search.

    PubMed

    Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li; Mingjin Zhang

    2015-08-01

    Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle nonfacial factors, such as hair style, hairpins, and glasses, if these factors are excluded from the training set. In addition, previous methods only work under well-controlled conditions and fail on images whose backgrounds and sizes differ from those of the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the searching process. For a test photo patch, we first obtain its sparse coefficient via the learnt dictionary and then search for its nearest neighbors (candidate patches) among all the training photo patches using the sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest neighbor search area from a local region to the whole image without excessive time cost, and 2) our method can produce nonfacial factors that are not contained in the training set, is robust against image backgrounds, and can even ignore the alignment and image size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics.

  19. Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets

    DTIC Science & Technology

    2015-04-24

    This report presents a novel framework for the classification of handwritten digits that learns sparse feature representations using probabilistic quadtrees and Deep Belief Nets. Keywords: Deep Belief Networks; MNIST.

  20. Sparse representation-based ECG signal enhancement and QRS detection.

    PubMed

    Zhou, Yichao; Hu, Xiyuan; Tang, Zhenmin; Ahn, Andrew C

    2016-12-01

    Electrocardiogram (ECG) signal enhancement and QRS complex detection is a critical preprocessing step for further heart disease analysis and diagnosis. In this paper, we propose a sparse representation-based ECG signal enhancement and QRS complex detection algorithm. Unlike traditional Fourier or wavelet transform-based methods, which use fixed bases, the proposed algorithm models the ECG signal as the superposition of a few inner structures plus additive random noise, where these structures (referred to here as atoms) can be learned from the input signal or a training set. Using these atoms and their properties, we can accurately approximate the original ECG signal and remove the noise and other artifacts such as baseline wandering. Additionally, some of the atoms with larger kurtosis values can be modified and used as an indication function to detect and locate the QRS complexes in the enhanced ECG signals. To demonstrate the robustness and efficacy of the proposed algorithm, we compare it with several state-of-the-art ECG enhancement and QRS detection algorithms using both simulated and real-life ECG recordings.

  1. Image sequence denoising via sparse and redundant representations.

    PubMed

    Protter, Matan; Elad, Michael

    2009-01-01

    In this paper, we consider denoising of image sequences that are corrupted by zero-mean additive white Gaussian noise. Relative to single-image denoising techniques, denoising of sequences aims to also utilize the temporal dimension. This assists in getting both faster algorithms and better output quality. This paper focuses on utilizing sparse and redundant representations for image sequence denoising, extending previously reported work. In the single-image setting, the K-SVD algorithm is used to train a sparsifying dictionary for the corrupted image. This paper generalizes the above algorithm by offering several extensions: i) the atoms used are 3-D; ii) the dictionary is propagated from one frame to the next, reducing the number of required iterations; and iii) averaging is done on patches in both spatially and temporally neighboring locations. These modifications lead to substantial benefits in complexity and denoising performance, compared to simply running the single-image algorithm sequentially. The algorithm's performance is experimentally compared to several state-of-the-art algorithms, demonstrating comparable or favorable results.

  2. Extreme learning machine and adaptive sparse representation for image classification.

    PubMed

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency.
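
    The two-stage decision can be sketched roughly as follows, assuming NumPy and scikit-learn; the reliability test based on the margin between the two largest ELM outputs, the threshold tau, and the size of the adaptive sub-dictionary are illustrative assumptions rather than the paper's exact criterion.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)

      def train_elm(X, y, n_hidden=200):
          # Minimal ELM: random hidden layer, least-squares output weights.
          n_classes = int(y.max()) + 1
          W = rng.standard_normal((X.shape[1], n_hidden))
          b = rng.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)
          beta = np.linalg.pinv(H) @ np.eye(n_classes)[y]   # one-hot targets
          return W, b, beta

      def hybrid_predict(x, elm, X_train, y_train, tau=0.3, k_classes=3):
          W, b, beta = elm
          scores = np.tanh(x @ W + b) @ beta
          top = np.argsort(scores)[::-1]
          if scores[top[0]] - scores[top[1]] > tau:          # ELM output deemed reliable
              return top[0]
          # Otherwise sparse-code x over a sub-dictionary built from the
          # training samples of the k most plausible classes (adaptive SRC).
          keep = np.isin(y_train, top[:k_classes])
          D, labels = X_train[keep].T, y_train[keep]
          coef = Lasso(alpha=0.01, max_iter=5000).fit(D, x).coef_
          residuals = [np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                       for c in top[:k_classes]]
          return top[:k_classes][int(np.argmin(residuals))]

    Restricting the sub-dictionary to the classes ranked highest by the ELM is what reduces the cost of the sparse coding step relative to coding over the entire training set.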

  3. Sparse representation utilizing tight frame for phase retrieval

    NASA Astrophysics Data System (ADS)

    Shi, Baoshun; Lian, Qiusheng; Chen, Shuzhen

    2015-12-01

    We treat the phase retrieval (PR) problem of reconstructing the signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques have been used to address this problem by utilizing various priors such as non-negativity, support, and Fourier magnitude constraints. Recent methods exploiting sparsity have been developed to improve the reconstruction quality. However, previous algorithms utilizing a sparsity prior suffer either from low reconstruction quality at low oversampling factors or from sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate the sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the corresponding non-convex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that our proposed algorithm obtains a better reconstruction quality than the conventional alternating projection methods, and even outperforms recent sparsity-based algorithms in terms of reconstruction quality.

  4. Pavement crack characteristic detection based on sparse representation

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoming; Huang, Jianping; Liu, Wanyu; Xu, Mantao

    2012-12-01

    Pavement crack detection plays an important role in pavement maintenance and management. Three-dimensional (3D) laser-based pavement crack detection is a recent trend because it can discriminate dark areas that are not caused by pavement distress, such as tire marks, oil spills, and shadows. In 3D pavement crack detection, the key task is the accurate extraction of cracks from an individual pavement profile without distorting the profile itself. After analyzing the characteristics of the pavement profile signal and the variability of pavement cracks, a new method based on sparse representation is developed to decompose the pavement profile signal into a summation of the main pavement profile and the cracks. Based on the characteristics of the pavement profile signal and the cracks, a mixed dictionary is constructed from an over-complete exponential function and an over-complete trapezoidal membership function, and the signal is separated over this mixed dictionary with a matching pursuit algorithm. Experiments were conducted and promising results were obtained, showing that the method can detect pavement cracks efficiently and achieve a good separation of cracks from the pavement profile without distorting the profile.
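
    The decomposition step can be illustrated with a bare-bones matching pursuit over a mixed dictionary (NumPy only); the exponential and notch-shaped atoms below merely stand in for the over-complete exponential and trapezoidal membership functions described above, and all sizes are arbitrary.

      import numpy as np

      def matching_pursuit(signal, D, n_iter=20):
          # Greedy MP: pick the atom most correlated with the residual,
          # record its coefficient, subtract its contribution, repeat.
          residual = signal.astype(float).copy()
          coeffs = np.zeros(D.shape[1])
          for _ in range(n_iter):
              corr = D.T @ residual
              k = int(np.argmax(np.abs(corr)))
              coeffs[k] += corr[k]
              residual -= corr[k] * D[:, k]
          return coeffs, residual

      # Hypothetical mixed dictionary: smooth exponential atoms for the profile
      # plus narrow notch atoms standing in for crack shapes.
      n = 256
      x = np.arange(n)
      profile_atoms = [np.exp(-np.abs(x - c) / s) for c in range(0, n, 32) for s in (40.0, 80.0)]
      crack_atoms = [np.where(np.abs(x - c) < 3, -1.0, 0.0) for c in range(0, n, 8)]
      D = np.stack(profile_atoms + crack_atoms, axis=1)
      D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

      test_profile = np.sin(x / 40.0) - 0.8 * (np.abs(x - 120) < 3)   # profile + one crack
      coeffs, _ = matching_pursuit(test_profile, D)
      crack_part = D[:, len(profile_atoms):] @ coeffs[len(profile_atoms):]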

  5. Learning sparse discriminative representations for land cover classification in the Arctic

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; Gangodagamage, Chandana

    2012-10-01

    Neuroscience-inspired machine vision algorithms are of current interest in the areas of detection and monitoring of climate change impacts, and general Land Use/Land Cover classification using satellite image data. We describe an approach for automatic classification of land cover in multispectral satellite imagery of the Arctic using sparse representations over learned dictionaries. We demonstrate our method using DigitalGlobe Worldview-2 8-band visible/near infrared high spatial resolution imagery of the MacKenzie River basin. We use an on-line batch Hebbian learning rule to build spectral-textural dictionaries that are adapted to this multispectral data. We learn our dictionaries from millions of overlapping image patches and then use a pursuit search to generate sparse classification features. We explore unsupervised clustering in the sparse representation space to produce land-cover category labels. This approach combines spectral and spatial textural characteristics to detect geologic, vegetative, and hydrologic features. We compare our technique to standard remote sensing algorithms. Our results suggest that neuroscience-based models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.

  6. Robust Ear Recognition via Nonnegative Sparse Representation of Gabor Orientation Information

    PubMed Central

    Mu, Zhichun; Zeng, Hui; Luo, Shuang

    2014-01-01

    Orientation information is critical to the accuracy of ear recognition systems. In this paper, a new feature extraction approach is investigated for ear recognition by using orientation information of Gabor wavelets. The proposed Gabor orientation feature can not only avoid too much redundancy in conventional Gabor feature but also tend to extract more precise orientation information of the ear shape contours. Then, Gabor orientation feature based nonnegative sparse representation classification (Gabor orientation + NSRC) is proposed for ear recognition. Compared with SRC in which the sparse coding coefficients can be negative, the nonnegativity of NSRC conforms to the intuitive notion of combining parts to form a whole and therefore is more consistent with the biological modeling of visual data. Additionally, the use of Gabor orientation features increases the discriminative power of NSRC. Extensive experimental results show that the proposed Gabor orientation feature based nonnegative sparse representation classification paradigm achieves much better recognition performance and is found to be more robust to challenging problems such as pose changes, illumination variations, and ear partial occlusion in real-world applications. PMID:24723792

  7. Robust ear recognition via nonnegative sparse representation of Gabor orientation information.

    PubMed

    Zhang, Baoqing; Mu, Zhichun; Zeng, Hui; Luo, Shuang

    2014-01-01

    Orientation information is critical to the accuracy of ear recognition systems. In this paper, a new feature extraction approach is investigated for ear recognition by using orientation information of Gabor wavelets. The proposed Gabor orientation feature can not only avoid too much redundancy in conventional Gabor feature but also tend to extract more precise orientation information of the ear shape contours. Then, Gabor orientation feature based nonnegative sparse representation classification (Gabor orientation + NSRC) is proposed for ear recognition. Compared with SRC in which the sparse coding coefficients can be negative, the nonnegativity of NSRC conforms to the intuitive notion of combining parts to form a whole and therefore is more consistent with the biological modeling of visual data. Additionally, the use of Gabor orientation features increases the discriminative power of NSRC. Extensive experimental results show that the proposed Gabor orientation feature based nonnegative sparse representation classification paradigm achieves much better recognition performance and is found to be more robust to challenging problems such as pose changes, illumination variations, and ear partial occlusion in real-world applications.
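
    A rough sketch of the nonnegative sparse-coding-plus-residual classification idea is given below; scikit-learn's Lasso with positive=True stands in for the nonnegative sparse solver, the Gabor-orientation feature extraction itself is omitted, and the alpha value and toy data are arbitrary.

      import numpy as np
      from sklearn.linear_model import Lasso

      def nsrc_classify(x, D, labels, alpha=0.01):
          # Nonnegative sparse coding of the query over the training dictionary.
          a = Lasso(alpha=alpha, positive=True, max_iter=10000).fit(D, x).coef_
          classes = np.unique(labels)
          # Assign the class whose training columns best reconstruct the query.
          residuals = [np.linalg.norm(x - D[:, labels == c] @ a[labels == c])
                       for c in classes]
          return classes[int(np.argmin(residuals))]

      # Toy usage: columns of D are feature vectors, 5 per class for 6 classes.
      rng = np.random.default_rng(1)
      D = rng.random((120, 30))
      D /= np.linalg.norm(D, axis=0)
      labels = np.repeat(np.arange(6), 5)
      query = D[:, 7] + 0.05 * rng.standard_normal(120)   # noisy sample of class 1
      print(nsrc_classify(query, D, labels))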

  8. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    PubMed

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with a false detection rate of 0.23/h and an average latency of -5.14 s have been achieved with the proposed method.
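
    The coding-and-residual step might look roughly like the following sketch; scikit-learn's ElasticNet supplies the combined l1/l2 penalty, and the alpha and l1_ratio values, as well as the random stand-in dictionaries in the usage lines, are placeholders rather than the paper's learned dictionaries or tuned parameters.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      def classify_epoch(x, D_seizure, D_nonseizure, alpha=0.05, l1_ratio=0.7):
          # Sparse-code the epoch over the concatenated learned dictionaries
          # with an elastic-net penalty, then compare sub-dictionary residuals.
          D = np.hstack([D_seizure, D_nonseizure])
          a = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000).fit(D, x).coef_
          n_s = D_seizure.shape[1]
          r_seizure = np.linalg.norm(x - D_seizure @ a[:n_s])
          r_nonseizure = np.linalg.norm(x - D_nonseizure @ a[n_s:])
          return 'seizure' if r_seizure < r_nonseizure else 'nonseizure'

      # Toy usage with random stand-ins for the learned sub-dictionaries.
      rng = np.random.default_rng(2)
      D_s, D_n = rng.standard_normal((256, 40)), rng.standard_normal((256, 40))
      epoch = D_s[:, 3] + 0.1 * rng.standard_normal(256)
      print(classify_epoch(epoch, D_s, D_n))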

  9. A New Discriminative Sparse Representation Method for Robust Face Recognition via l₂ Regularization.

    PubMed

    Xu, Yong; Zhong, Zuofeng; Yang, Jian; You, Jane; Zhang, David

    2016-06-24

    Sparse representation has shown attractive performance in a number of applications. However, the available sparse representation methods still suffer from some problems, and it is necessary to design more efficient methods. In particular, designing a computationally inexpensive, easily solvable, and robust sparse representation method is a significant task. In this paper, we explore the issue of designing simple, robust, and highly efficient sparse representation methods for image classification. The contributions of this paper are as follows. First, a novel discriminative sparse representation method is proposed and its noticeable performance in image classification is demonstrated by the experimental results. More importantly, the proposed method outperforms the existing state-of-the-art sparse representation methods. Second, the proposed method is not only computationally efficient but also based on an intuitive and easily understandable idea. It exploits a simple algorithm to obtain a closed-form solution and a discriminative representation of the test sample. Third, the feasibility, computational efficiency, and remarkable classification accuracy of the proposed l₂ regularization-based representation are comprehensively shown by extensive experiments and analysis. The code of the proposed method is available at http://www.yongxu.org/lunwen.html.
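
    Since the abstract emphasizes a closed-form, l₂-regularized representation with class-wise residual comparison, a generic collaborative-representation-style sketch conveys the flavor of such methods; it is not claimed to be the authors' exact formulation, and lam and the toy data are arbitrary.

      import numpy as np

      def l2_representation_classify(x, D, labels, lam=0.5):
          # Closed-form ridge solution for the representation coefficients:
          #   a = (D^T D + lam*I)^{-1} D^T x
          a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
          classes = np.unique(labels)
          residuals = [np.linalg.norm(x - D[:, labels == c] @ a[labels == c])
                       for c in classes]
          return classes[int(np.argmin(residuals))]

      # Toy usage: 10 classes, 8 training columns each, 200-dimensional features.
      rng = np.random.default_rng(3)
      D = rng.standard_normal((200, 80))
      labels = np.repeat(np.arange(10), 8)
      test = D[:, 12] + 0.1 * rng.standard_normal(200)    # noisy sample of class 1
      print(l2_representation_classify(test, D, labels))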

  10. Class-wise Sparse and Collaborative Patch Representation for Face Recognition.

    PubMed

    Lai, Jian; Jiang, Xudong

    2016-03-22

    Sparse representation has shown its merits in solving some classification problems and has delivered impressive results in face recognition. However, the unsupervised optimization of the sparse representation may result in undesired classification outcomes if the variations of the data population are not well represented by the training samples. In this paper, a method of class-wise sparse representation (CSR) is proposed to tackle the problems of the conventional sample-wise sparse representation and is applied to face recognition. It seeks an optimum representation of the query image by minimizing the class-wise sparsity of the training data. To tackle the problem of uncontrolled training data, this paper further proposes a collaborative patch (CP) framework, together with the proposed CSR, named CSR-CP. Different from conventional patch-based methods that optimize each patch representation separately, the CSR-CP approach optimizes all patches together to seek a collaborative patch group-wise sparse representation by putting all patches of an image into a group. It alleviates the problem of losing discriminative information in the training data caused by the partition of the image into patches. Extensive experiments on several benchmark face databases demonstrate that the proposed CSR-CP significantly outperforms sparse representation related holistic and patch-based approaches.

  11. Temporal Super Resolution Enhancement of Echocardiographic Images Based on Sparse Representation.

    PubMed

    Gifani, Parisa; Behnam, Hamid; Haddadi, Farzan; Sani, Zahra Alizadeh; Shojaeifard, Maryam

    2016-01-01

    A challenging issue for echocardiographic image interpretation is the accurate analysis of small transient motions of myocardium and valves during real-time visualization. A higher frame rate video may reduce this difficulty, and temporal super resolution (TSR) is useful for illustrating the fast-moving structures. In this paper, we introduce a novel framework that optimizes TSR enhancement of echocardiographic images by utilizing temporal information and sparse representation. The goal of this method is to increase the frame rate of echocardiographic videos, and therefore enable more accurate analyses of moving structures. For the proposed method, we first derived temporal information by extracting intensity variation time curves (IVTCs) assessed for each pixel. We then designed both low-resolution and high-resolution overcomplete dictionaries based on prior knowledge of the temporal signals and a set of prespecified known functions. The IVTCs can then be described as linear combinations of a few prototype atoms in the low-resolution dictionary. We used the Bayesian compressive sensing (BCS) sparse recovery algorithm to find the sparse coefficients of the signals. We extracted the sparse coefficients and the corresponding active atoms in the low-resolution dictionary to construct new sparse coefficients corresponding to the high-resolution dictionary. Using the estimated atoms and the high-resolution dictionary, a new IVTC with more samples was constructed. Finally, by placing the new IVTC signals in the original IVTC positions, we were able to reconstruct the original echocardiography video with more frames. The proposed method does not require training of low-resolution and high-resolution dictionaries, nor does it require motion estimation; it does not blur fast-moving objects, and does not have blocking artifacts.

  12. Sparse representation based multi-threshold segmentation for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Feng, Wei-yi; Chen, Qian; Miao, Zhuang; He, Wei-ji; Gu, Guo-hua; Zhuang, Jia-yan

    2013-08-01

    A sparse representation based multi-threshold segmentation (SRMTS) algorithm for target detection in hyperspectral images is proposed. Benefiting from the sparse representation, the high-dimensional spectral data can be characterized as a series of sparse feature vectors which have only a few nonzero coefficients. By setting an appropriate threshold, the noise-removed sparse spectral vectors are divided into two subspaces in the sparse domain, consistent with the sample spectrum, to separate the target from the background. Then a correlation and a vector 1-norm are calculated respectively in the subspaces. The sparse characteristic of the target is used to extract the target with a multi-threshold method. Unlike the conventional hyperspectral dimensionality reduction methods used in target detection algorithms, such as Principal Components Analysis (PCA) and Maximum Noise Fraction (MNF), this algorithm maintains the spectral characteristics while removing the noise, thanks to the sparse representation. In the experiments, an orthogonal wavelet sparse basis is used to sparsify the spectral information, and the best contraction threshold is chosen to remove the hyperspectral image noise according to the noise estimation of the test images. Compared with common algorithms, such as the Adaptive Cosine Estimator (ACE), Constrained Energy Minimization (CEM) and the noise-removed MNF-CEM algorithm, the proposed algorithm demonstrates higher detection rates and robustness via the ROC curves.

  13. Weighted sparse representation for human ear recognition based on local descriptor

    NASA Astrophysics Data System (ADS)

    Mawloud, Guermoui; Djamel, Melaab

    2016-01-01

    A two-stage ear recognition framework is presented in which two local descriptors and a sparse representation algorithm are combined. In the first stage, the algorithm deduces a subset of the training neighbors closest to the test ear sample. The selection is based on the K-nearest neighbors classifier in the pattern of oriented edge magnitude feature space. In the second stage, the co-occurrence of adjacent local binary pattern features is extracted from the preselected subset and combined to form a dictionary. Afterward, a sparse representation classifier is employed on the developed dictionary in order to infer the closest element to the test sample. Thus, by splitting the ear image into a number of segments and applying the described recognition routine to each of them, the algorithm assigns a final class label based on majority voting over the individual labels produced by each segment. Experimental results demonstrate the effectiveness as well as the robustness of the proposed scheme over leading state-of-the-art methods. Especially when the ear image is occluded, the proposed algorithm exhibits great robustness and matches the recognition performance reported in the state of the art.

  14. A Max-Margin Perspective on Sparse Representation-Based Classification

    DTIC Science & Technology

    2013-11-30

    Sparse Representation-based Classification (SRC) is a powerful tool in distinguishing signal ... a reconstructive perspective, which neither offers any guarantee on its classification performance nor pro...

  15. Recovering key biological constituents through sparse representation of gene expression.

    PubMed

    Prat, Yosef; Fromer, Menachem; Linial, Nathan; Linial, Michal

    2011-03-01

    Large-scale RNA expression measurements are generating enormous quantities of data. During the last two decades, many methods were developed for extracting insights regarding the interrelationships between genes from such data. The mathematical and computational perspectives that underlie these methods are usually algebraic or probabilistic. Here, we introduce an unexplored geometric viewpoint where expression levels of genes in multiple experiments are interpreted as vectors in a high-dimensional space. Specifically, we find, for the expression profile of each particular gene, its approximation as a linear combination of profiles of a few other genes. This method is inspired by recent developments in the realm of compressed sensing in the machine learning domain. To demonstrate the power of our approach in extracting valuable information from the expression data, we independently applied it to large-scale experiments carried out on the yeast and malaria parasite whole transcriptomes. The parameters extracted from the sparse reconstruction of the expression profiles, when fed to a supervised learning platform, were used to successfully predict the relationships between genes throughout the Gene Ontology hierarchy and the protein-protein interaction map. Extensive assessment of the biological results shows high accuracy both in recovering known predictions and in yielding accurate predictions missing from the current databases. We suggest that the geometrical approach presented here is suitable for a broad range of high-dimensional experimental data.
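
    A toy version of the central step, expressing one gene's expression profile as a sparse linear combination of other genes' profiles, is sketched below with scikit-learn's Lasso; the alpha value and the synthetic expression matrix are illustrative only and do not reproduce the paper's pipeline.

      import numpy as np
      from sklearn.linear_model import Lasso

      def sparse_gene_profile(expr, gene_idx, alpha=0.1):
          # Rows of expr are experiments, columns are genes.
          y = expr[:, gene_idx]                        # target gene's profile
          X = np.delete(expr, gene_idx, axis=1)        # profiles of the other genes
          coef = Lasso(alpha=alpha, max_iter=10000).fit(X, y).coef_
          # Nonzero coefficients point at the few genes used in the reconstruction
          # (indices refer to the matrix with the target column removed).
          return coef, np.flatnonzero(coef)

      # Toy usage: gene 0 is built from genes 5 and 17 plus noise.
      rng = np.random.default_rng(4)
      expr = rng.standard_normal((80, 200))
      expr[:, 0] = 0.6 * expr[:, 5] - 0.4 * expr[:, 17] + 0.05 * rng.standard_normal(80)
      coef, support = sparse_gene_profile(expr, gene_idx=0)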

  16. Inpainting With Sparse Linear Combinations of Exemplars

    DTIC Science & Technology

    2010-05-01

    We introduce a new exemplar-based inpainting algorithm that represents the region to be inpainted as a sparse linear combination ... exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than ... other recent methods. Index Terms: image restoration, inpainting, exemplar ... Exemplar-based methods are becoming increasingly popular.

  17. Micro-Expression Recognition based on 2D Gabor Filter and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Zheng, Hao

    2017-01-01

    Micro-expression recognition is a challenging problem because the facial movements involved are very brief. This paper proposes a novel method, named 2D Gabor filter and Sparse Representation (2DGSR), for micro-expression recognition. In our method, the 2D Gabor filter is used to enhance robustness to variations by increasing the discrimination power, while sparse representation is applied to handle the subtlety of the expressions, casting recognition as a sparse approximation problem. We compare our method to other popular methods on three spontaneous micro-expression recognition databases. The results show that our method performs better than the other methods.

  18. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.

    PubMed

    Peng, Yong; Lu, Bao-Liang; Wang, Suhang

    2015-05-01

    Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among the existing graph-based learning models, low-rank representation (LRR) is a very competitive one, which has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated based on the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of the data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR explicitly takes the local manifold structure of the data into consideration, which can be identified by the geometric sparsity idea; specifically, the local tangent space of each data point is sought by solving a sparse representation objective. Therefore, the graph depicting the relationships among data points can be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines both the global information emphasized by the low-rank property and the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Low-Rank and Joint Sparse Representations for Multi-Modal Recognition.

    PubMed

    Zhang, Heng; Patel, Vishal M; Chellappa, Rama

    2017-10-01

    We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.

  20. Adaptive Nonlocal Sparse Representation for Dual-Camera Compressive Hyperspectral Imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Shi, Guangming; Wu, Feng; Zeng, Wenjun

    2016-10-25

    Leveraging compressive sensing (CS) theory, coded aperture snapshot spectral imaging (CASSI) provides an efficient solution for recovering 3D hyperspectral data from a 2D measurement. The dual-camera design of CASSI, by adding an uncoded panchromatic measurement, enhances the reconstruction fidelity while maintaining the snapshot advantage. In this paper, we propose an adaptive nonlocal sparse representation (ANSR) model to boost the performance of dual-camera compressive hyperspectral imaging (DCCHI). Specifically, the CS reconstruction problem is formulated as a 3D cube based sparse representation to make full use of the nonlocal similarity in both the spatial and spectral domains. Our key observation is that the panchromatic image, besides playing the role of a direct measurement, can be further exploited to help the nonlocal similarity estimation. Therefore, we design a joint similarity metric by adaptively combining the internal similarity within the reconstructed hyperspectral image and the external similarity within the panchromatic image. In this way, the fidelity of the CS reconstruction is greatly enhanced. Both simulation and hardware experimental results show significant improvement of the proposed method over the state-of-the-art.

  1. [Recognition of water-injected meat based on visible/near-infrared spectrum and sparse representation].

    PubMed

    Hao, Dong-mei; Zhou, Ya-nan; Wang, Yu; Zhang, Song; Yang, Yi-min; Lin, Ling; Li, Gang; Wang, Xiu-li

    2015-01-01

    This paper proposes a new nondestructive method based on the visible/near-infrared spectrum (Vis/NIRS) and sparse representation to rapidly and accurately discriminate between raw meat and water-injected meat. A water-injected meat model was built by injecting water into intact meat samples comprising pigskin, a fat layer and a muscle layer. Vis/NIRS data were collected with spectrometers from raw meat and from water-injected meat at six injection levels. To reduce the redundant information in the spectra and enhance the differences between samples, some preprocessing steps were performed on the spectral data, including light modulation and normalization. Effective spectral bands were extracted from the preprocessed spectral data. The meat samples were classified as raw meat or water-injected meat, and further, as water-injected meat with different water injection rates. All the training samples were used to compose an atom dictionary, and test samples were represented by the sparsest linear combinations of these atoms via l1-minimization. Projection errors of the test samples with respect to each category were calculated. A test sample was classified to the category with the minimum projection error, and leave-one-out cross-validation was conducted. The recognition performance of sparse representation was compared with that of a support vector machine (SVM). Experimental results showed that the overall recognition accuracy of sparse representation for raw meat and water-injected meat was more than 90%, which was higher than that of SVM. For water-injected meat samples with different water injection rates, the recognition accuracy was positively correlated with the difference in water injection rate. The sparse representation-based classifier eliminates the need for the training and feature extraction steps required by conventional pattern recognition models, and is suitable for processing data of high dimensionality and small sample size. Furthermore, it has a low

  2. Sparse Representation for Computer Vision and Pattern Recognition

    DTIC Science & Technology

    2009-05-01

    Duarte, M. Elad, F. Lecumberry, J. Mairal, J. Ponce, I. Ramirez, F. Rodriguez, and A. Szlam ... F. Lecumberry, and G. Sapiro. Sparse modeling with mixture priors and learned incoherent dictionaries. Pre-print, 2009. [56] S. Rao, R. Tron, R. Vidal

  3. Sparse Representations for Three-Dimensional Range Data Restoration (Preprint)

    DTIC Science & Technology

    2011-02-01

    Similar to images, scanned 3D range data can contain occlusions or missing information. We investigate methods for filling/inpainting the holes in 3D range data, assuming the locations of the holes are known. In [2], image inpainting was investigated using sparse models; based on this work, we address ...

  4. Deformable segmentation via sparse representation and dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy.

  5. Manifold Kernel Sparse Representation of Symmetric Positive-Definite Matrices and Its Applications.

    PubMed

    Wu, Yuwei; Jia, Yunde; Li, Peihua; Zhang, Jian; Yuan, Junsong

    2015-11-01

    The symmetric positive-definite (SPD) matrix, as a connected Riemannian manifold, has become increasingly popular for encoding image information. Most existing sparse models are still primarily developed in the Euclidean space. They do not consider the non-linear geometrical structure of the data space, and thus are not directly applicable to the Riemannian manifold. In this paper, we propose a novel sparse representation method of SPD matrices in the data-dependent manifold kernel space. The graph Laplacian is incorporated into the kernel space to better reflect the underlying geometry of SPD matrices. Under the proposed framework, we design two different positive definite kernel functions that can be readily transformed to the corresponding manifold kernels. The sparse representation obtained has more discriminating power. Extensive experimental results demonstrate good performance of manifold kernel sparse codes in image classification, face recognition, and visual tracking.

  6. Optimized sparse-particle aerosol representations for modeling cloud-aerosol interactions

    NASA Astrophysics Data System (ADS)

    Fierce, Laura; McGraw, Robert

    2016-04-01

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the method of moments. Given a set of moment constraints, we show how linear programming can be used to identify collections of sparse particles that approximately maximize distributional entropy. The collections of sparse particles derived from this approach reproduce CCN activity of the exact model aerosol distributions with high accuracy. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy moment-based approach is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a new aerosol simulation scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
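
    A stripped-down illustration of recovering a sparse particle collection from moment constraints with linear programming is given below (SciPy's linprog). The moment orders, diameter grid, and the arbitrary linear objective are assumptions made for the sketch; the entropy-maximizing selection described above is not reproduced, only the fact that a basic LP solution is sparse.

      import numpy as np
      from scipy.optimize import linprog

      def sparse_particles_from_moments(moments, diameters, orders=(0, 1, 2, 3)):
          # Nonnegative weights on candidate diameters that reproduce the moments.
          # A basic LP solution has at most as many nonzero weights as constraints,
          # i.e. it is a sparse particle collection.
          d = np.asarray(diameters)
          A_eq = np.vstack([d ** k for k in orders])
          res = linprog(c=d ** 4,                      # any linear objective yields a vertex
                        A_eq=A_eq, b_eq=np.asarray(moments),
                        bounds=(0, None), method='highs')
          return res.x if res.success else None

      # Toy usage: moments of a lognormal-shaped population defined on the grid itself,
      # so the constraint set is guaranteed to be feasible.
      grid = np.linspace(0.05, 2.0, 400)
      true_w = np.exp(-0.5 * ((np.log(grid) + 1.0) / 0.4) ** 2)
      moments = [np.sum(true_w * grid ** k) for k in (0, 1, 2, 3)]
      weights = sparse_particles_from_moments(moments, grid)
      print(np.count_nonzero(weights > 1e-9), 'particles carry the whole population')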

  7. Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification

    NASA Astrophysics Data System (ADS)

    Drees, L.; Roscher, R.

    2017-05-01

    This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed as a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained with a manually designed spectral library in terms of reconstruction error, mean absolute error of the fraction estimates, sum of fractions, and the number of elementary spectra used. We show that a collection of archetypes can be an adequate and efficient alternative to the spectral library with respect to the mentioned criteria.
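
    The per-pixel fraction estimation can be approximated with a nonnegative least-squares solve over the archetypal library, as sketched below; SciPy's nnls stands in for the sparse-representation solver used in the paper, and the library, class assignment, and spectra in the usage lines are synthetic.

      import numpy as np
      from scipy.optimize import nnls

      def unmix_pixel(spectrum, library, class_of_atom):
          # Nonnegative abundances of the library spectra for this pixel.
          w, _ = nnls(library, spectrum)
          classes = np.unique(class_of_atom)
          fractions = np.array([w[class_of_atom == c].sum() for c in classes])
          total = fractions.sum()
          if total > 0:
              fractions = fractions / total            # class fractions summing to one
          return classes, fractions

      # Toy usage: 200-band spectra, 12 archetypes drawn from 4 land-cover classes.
      rng = np.random.default_rng(5)
      library = np.abs(rng.standard_normal((200, 12)))
      class_of_atom = np.repeat(np.arange(4), 3)
      abundance = np.array([0.5, 0, 0, 0.3, 0, 0, 0, 0, 0.2, 0, 0, 0])
      pixel = library @ abundance
      classes, fractions = unmix_pixel(pixel, library, class_of_atom)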

  8. Low-dose computed tomography image denoising based on joint wavelet and sparse representation.

    PubMed

    Ghadrdan, Samira; Alirezaie, Javad; Dillenseger, Jean-Louis; Babyn, Paul

    2014-01-01

    Image denoising and signal enhancement are among the most challenging issues in low-dose computed tomography (CT) imaging. Sparse representation methods have shown initial promise for these applications. In this work we present a wavelet-based sparse representation denoising technique utilizing dictionary learning and clustering. By using wavelets we extract the most suitable features in the images to obtain accurate dictionary atoms for the denoising algorithm. To achieve improved results we also lower the number of clusters, which reduces computational complexity. In addition, a single-image noise level estimation is developed to update the cluster centers at higher PSNRs. Our results, along with the computational efficiency of the proposed algorithm, clearly demonstrate the improvement of the proposed algorithm over other clustering-based sparse representation (CSR) and K-SVD methods.

  9. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structural changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and that perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to the training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and delivers consistently good performance across different image quality databases.

  10. Sparse representation approaches for the classification of high-dimensional biological data

    PubMed Central

    2013-01-01

    Background High-throughput genomic and proteomic data have important applications in medicine, including prevention, diagnosis, treatment, and prognosis of diseases, and in molecular biology, for example pathway identification. Many such applications can be formulated as classification and dimension reduction problems in machine learning. Accurately classifying such data is computationally challenging due to, among other factors, high dimensionality, noise, and redundancy. The principle of sparse representation has been applied to the analysis of high-dimensional biological data within the frameworks of clustering, classification, and dimension reduction. However, the existing sparse representation methods are inefficient, and their kernel extensions are not well addressed. Moreover, sparse representation techniques have not yet been comprehensively studied in bioinformatics. Results In this paper, a Bayesian treatment of sparse representations is presented. Various sparse coding and dictionary learning models are discussed. We propose a fast parallel active-set optimization algorithm for each model. Kernel versions are devised based on their dimension-free property. These models are applied to classifying high-dimensional biological data. Conclusions In our experiments, we compared our models with other methods in terms of both accuracy and computing time. It is shown that our models achieve satisfactory accuracy and are computationally efficient. PMID:24565287

  11. Dynamic time warping and sparse representation classification for birdsong phrase classification using limited training data.

    PubMed

    Tan, Lee N; Alwan, Abeer; Kossan, George; Cody, Martin L; Taylor, Charles E

    2015-03-01

    Annotation of phrases in birdsongs can be helpful to behavioral and population studies. To reduce the need for manual annotation, an automated birdsong phrase classification algorithm for limited data is developed. Limited data occur because of limited recordings or the existence of rare phrases. In this paper, classification of up to 81 phrase classes of Cassin's Vireo is performed using one to five training samples per class. The algorithm involves dynamic time warping (DTW) and two passes of sparse representation (SR) classification. DTW improves the similarity between training and test phrases from the same class in the presence of individual bird differences and phrase segmentation inconsistencies. The SR classifier works by finding a sparse linear combination of training feature vectors from all classes that best approximates the test feature vector. When the class decisions from DTW and the first pass SR classification are different, SR classification is repeated using training samples from these two conflicting classes. Compared to DTW, support vector machines, and an SR classifier without DTW, the proposed classifier achieves the highest classification accuracies of 94% and 89% on manually segmented and automatically segmented phrases, respectively, from unseen Cassin's Vireo individuals, using five training samples per class.
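
    The alignment step that precedes the sparse-representation passes can be illustrated with a textbook dynamic-time-warping distance (NumPy only); the frame features and sequence lengths in the usage lines are synthetic stand-ins for the spectral features of actual phrases, and the SR classification passes are not reproduced here.

      import numpy as np

      def dtw_distance(a, b):
          # Classic DTW between two feature sequences (rows are frames).
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      # Toy usage: two phrases of different lengths over 12-dimensional frames.
      rng = np.random.default_rng(6)
      phrase_a = rng.standard_normal((40, 12))
      phrase_b = np.vstack([phrase_a[::2], rng.standard_normal((5, 12))])  # warped variant
      print(dtw_distance(phrase_a, phrase_b))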

  12. Robust visual tracking and vehicle classification via sparse representation.

    PubMed

    Mei, Xue; Ling, Haibin

    2011-11-01

    In this paper, we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, noise, and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target in a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an l1-regularized least-squares problem. Then, the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework. Two strategies are used to further improve the tracking performance. First, target templates are dynamically updated to capture appearance changes. Second, nonnegativity constraints are enforced to filter out clutter which negatively resembles tracking targets. We test the proposed approach on numerous sequences involving different types of challenges, including occlusion and variations in illumination, scale, and pose. The proposed approach demonstrates excellent performance in comparison with previously proposed trackers. We also extend the method for simultaneous tracking and recognition by introducing a static template set which stores target images from different classes. The recognition result at each frame is propagated to produce the final result for the whole video. The approach is validated on a vehicle tracking and classification task using outdoor infrared video sequences.
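
    The core scoring of a target candidate against the template sets can be sketched as below; scikit-learn's Lasso with positive=True plays the role of the nonnegative l1-regularized least-squares solver, and the patch size, template count, and alpha are illustrative assumptions rather than the tracker's actual settings.

      import numpy as np
      from sklearn.linear_model import Lasso

      def candidate_score(y, target_templates, alpha=0.01):
          # Represent the candidate over [target templates | trivial templates];
          # trivial (identity) templates absorb occluded or corrupted pixels.
          d = y.size
          T = np.hstack([target_templates, np.eye(d)])
          c = Lasso(alpha=alpha, positive=True, max_iter=10000).fit(T, y).coef_
          n_t = target_templates.shape[1]
          # Smaller reconstruction error from the target part => better candidate.
          return np.linalg.norm(y - target_templates @ c[:n_t])

      # Toy usage: 12x12 patches flattened to 144-dimensional vectors, 10 templates.
      rng = np.random.default_rng(7)
      templates = np.abs(rng.standard_normal((144, 10)))
      candidate = templates[:, 2] + 0.05 * rng.standard_normal(144)
      print(candidate_score(candidate, templates))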

  13. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model

    PubMed Central

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in olfactory bulb and its underlying network mechanism are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated formation of the network, but it also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that the degree of prior odor experience facilitates degrees of sparse representations of new odors by the mitral cell network through experience-enhanced inhibition mechanism. PMID:26903819

  14. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by localized faults are important measurement information for bearing fault diagnosis. It is therefore crucial to extract the transients from bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed from the TF distribution (TFD) of the Morlet wavelet bases, and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are correlated with the TFD of the analyzed signal along the IF ridge tube to identify the optimum parameters of the transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of preserving the native pulse waveform structure of transients. The effectiveness of the proposed method is verified on practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  15. Compressive Fresnel digital holography using Fresnelet based sparse representation

    NASA Astrophysics Data System (ADS)

    Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith

    2015-04-01

    Compressive sensing (CS) in digital holography requires only a small number of pixel-level detections in the hologram plane for accurate image reconstruction, and this is achieved by exploiting the sparsity of the object wave. When the input object fields are non-sparse in the spatial domain, CS demands a suitable sparsification method such as wavelet decomposition. The Fresnelet, a wavelet basis suited to processing Fresnel digital holograms, is an efficient sparsifier for the complex Fresnel field obtained by the Fresnel transform of the object field and minimizes the mutual coherence between the sensing and sparsifying matrices involved in CS. The paper demonstrates the merits of Fresnelet-based sparsification in compressive digital Fresnel holography over the conventional method of sparsifying the input object field. Phase-shifting digital Fresnel holography (PSDH) is used to retrieve the complex Fresnel field for the chosen problem. Results from a numerical experiment are presented as a proof of concept.

  16. Spectral Super-Resolution for Hyperspectral Images via Sparse Representations

    NASA Astrophysics Data System (ADS)

    Fotiadou, Konstantina; Tsagkatakis, Grigorios; Tsakalides, Panagiotis

    2016-08-01

    The spectral dimension of hyperspectral imaging (HSI) systems plays a fundamental role in numerous terrestrial and earth observation applications, including spectral unmixing, target detection, and classification, among others. However, in several cases the spectral resolution of HSI systems is sacrificed for the sake of spatial resolution, as in the case of snapshot spectral imaging systems that simultaneously acquire the 3D data-cube. We address these limitations by introducing an efficient post-acquisition spectral resolution enhancement scheme that synthesizes the full spectrum from only a few acquired spectral bands. To achieve this goal we utilize a regularized sparse-based learning procedure where the relations between high- and low-spectral-resolution hyper-pixels are efficiently encoded via a coupled dictionary learning scheme. Experimental results and quantitative validation on data acquired by the Hyperion sensor of NASA's EO-1 mission demonstrate the potential of the proposed approach for accurate spectral resolution enhancement of hyperspectral imaging systems.

  17. Automated identification of crystallographic ligands using sparse-density representations

    PubMed Central

    Carolan, C. G.; Lamzin, V. S.

    2014-01-01

    A novel procedure for the automatic identification of ligands in macromolecular crystallographic electron-density maps is introduced. It is based on the sparse parameterization of density clusters and the matching of the pseudo-atomic grids thus created to conformationally variant ligands using mathematical descriptors of molecular shape, size and topology. In large-scale tests on experimental data derived from the Protein Data Bank, the procedure could quickly identify the deposited ligand within the top-ranked compounds from a database of candidates. This indicates the suitability of the method for the identification of binding entities in fragment-based drug screening and in model completion in macromolecular structure determination. PMID:25004962

  18. Classification via Sparse Representation of Steerable Wavelet Frames on Grassmann Manifold: Application to Target Recognition in SAR Image.

    PubMed

    Dong, Ganggang; Kuang, Gangyao; Wang, Na; Wang, Wei

    2017-04-07

    Automatic target recognition has been studied widely over the years, yet it is still an open problem. The main obstacle lies in extended operating conditions, e.g., depression angle change, configuration variation, articulation, and occlusion. To deal with them, this paper proposes a new classification strategy. We develop a new representation model via steerable wavelet frames. The proposed representation model is viewed entirely as an element on Grassmann manifolds. To achieve target classification, we embed the Grassmann manifolds into an implicit Reproducing Kernel Hilbert Space (RKHS), where kernel sparse learning can be applied. Specifically, the mappings of the training samples in the RKHS are concatenated to form an over-complete dictionary. It is then used to encode the counterpart of the query as a linear combination of its atoms. With the designed Grassmann kernel function, the sparse representation can be obtained, from which the inference is reached. The novelty of this paper comes from (i) the development of the representation model from the set of directional components of the Riesz transform; (ii) the quantitative measure of similarity for the proposed representation model by the Grassmann metric; and (iii) the generation of a global kernel function by the Grassmann kernel. Extensive comparative studies are performed to demonstrate the advantages of the proposed strategy.

  19. Single-Trial Sparse Representation-Based Approach for VEP Extraction.

    PubMed

    Yu, Nannan; Hu, Funian; Zou, Dexuan; Ding, Qisheng; Lu, Hanbing

    2016-01-01

    Sparse representation is a powerful tool in signal denoising, and visual evoked potentials (VEPs) have been proven to have strong sparsity over an appropriate dictionary. Inspired by this idea, we present in this paper a novel sparse representation-based approach to solving the VEP extraction problem. The extraction process is performed in three stages. First, instead of using the mixed signals containing the electroencephalogram (EEG) and VEPs, we utilise an EEG from a previous trial, which did not contain VEPs, to identify the parameters of the EEG autoregressive (AR) model. Second, instead of the moving average (MA) model, sparse representation is used to model the VEPs in the autoregressive-moving average (ARMA) model. Finally, we calculate the sparse coefficients and derive VEPs by using the AR model. Next, we tested the performance of the proposed algorithm with synthetic and real data, after which we compared the results with that of an AR model with exogenous input modelling and a mixed overcomplete dictionary-based sparse component decomposition method. Utilising the synthetic data, the algorithms are then employed to estimate the latencies of P100 of the VEPs corrupted by added simulated EEG at different signal-to-noise ratio (SNR) values. The validations demonstrate that our method can well preserve the details of the VEPs for latency estimation, even in low SNR environments.

  20. Learning local appearances with sparse representation for robust and fast visual tracking.

    PubMed

    Bai, Tianxiang; Li, You-Fu; Zhou, Xiaolong

    2015-04-01

    In this paper, we present a novel appearance model using sparse representation and online dictionary learning techniques for visual tracking. In our approach, the visual appearance is represented by sparse representation, and the online dictionary learning strategy is used to adapt the appearance variations during tracking. We unify the sparse representation and online dictionary learning by defining a sparsity consistency constraint that facilitates the generative and discriminative capabilities of the appearance model. An elastic-net constraint is enforced during the dictionary learning stage to capture the characteristics of the local appearances that are insensitive to partial occlusions. Hence, the target appearance is effectively recovered from the corruptions using the sparse coefficients with respect to the learned sparse bases containing local appearances. In the proposed method, the dictionary is undercomplete and can thus be efficiently implemented for tracking. Moreover, we employ a median absolute deviation based robust similarity metric to eliminate the outliers and evaluate the likelihood between the observations and the model. Finally, we integrate the proposed appearance model with the particle filter framework to form a robust visual tracking algorithm. Experiments on benchmark video sequences show that the proposed appearance model outperforms the other state-of-the-art approaches in tracking performance.

  1. Sparse representation of whole-brain fMRI signals for identification of functional networks.

    PubMed

    Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming

    2015-02-01

    There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
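
    The core computation described above, aggregating all voxel time series into one matrix and factorizing it into an over-complete temporal dictionary and a sparse coefficient matrix with online dictionary learning, can be sketched as follows. The matrix sizes are stand-ins, and the scikit-learn learner is a substitute for the authors' specific online algorithm.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in for the whole-brain signal matrix: rows = fMRI time points,
# columns = voxels (the real matrix is ~hundreds of time points x ~10^5 voxels).
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 5000))

# Factorise S ≈ D @ A: D (time x atoms) is the temporal dictionary,
# A (atoms x voxels) holds the sparse loadings used as network maps.
learner = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                      batch_size=256, random_state=0)
A = learner.fit(S.T).transform(S.T).T        # sparse codes, one column per voxel
D = learner.components_.T                    # temporal dictionary, shape (200, 50)

# Each row of A, reshaped back to the brain volume, is a candidate functional network.
print(D.shape, A.shape)
```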

  2. Single-Trial Sparse Representation-Based Approach for VEP Extraction

    PubMed Central

    Yu, Nannan; Hu, Funian; Zou, Dexuan; Ding, Qisheng

    2016-01-01

    Sparse representation is a powerful tool in signal denoising, and visual evoked potentials (VEPs) have been proven to have strong sparsity over an appropriate dictionary. Inspired by this idea, we present in this paper a novel sparse representation-based approach to solving the VEP extraction problem. The extraction process is performed in three stages. First, instead of using the mixed signals containing the electroencephalogram (EEG) and VEPs, we utilise an EEG from a previous trial, which did not contain VEPs, to identify the parameters of the EEG autoregressive (AR) model. Second, instead of the moving average (MA) model, sparse representation is used to model the VEPs in the autoregressive-moving average (ARMA) model. Finally, we calculate the sparse coefficients and derive the VEPs by using the AR model. Next, we tested the performance of the proposed algorithm with synthetic and real data, after which we compared the results with those of an AR model with exogenous input modelling and a mixed overcomplete dictionary-based sparse component decomposition method. Using the synthetic data, the algorithms were employed to estimate the latencies of the P100 of VEPs corrupted by added simulated EEG at different signal-to-noise ratio (SNR) values. The validations demonstrate that our method preserves the details of the VEPs well for latency estimation, even in low SNR environments. PMID:27807541

  3. A joint sparse representation-based method for double-trial evoked potentials estimation.

    PubMed

    Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing

    2013-12-01

    In this paper, we present a novel approach to the evoked potential estimation problem. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potential estimation method that takes full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of the evoked potentials and the randomness of the spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered as superpositions of a common component and unique components; second, making use of their characteristics, two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method to extract the common component of the double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method.

  4. Sparse Representation and Dictionary Learning as Feature Extraction in Vessel Imagery

    DTIC Science & Technology

    2014-12-01

    TECHNICAL REPORT 2070, December 2014: Sparse Representation and Dictionary Learning as Feature Extraction in Vessel Imagery. The descriptors are then clustered and pooled with respect to a dictionary of vocabulary features obtained from training imagery. The image is

  5. Robust infrared small target detection via non-negativity constraint-based sparse representation.

    PubMed

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian

    2016-09-20

    Infrared (IR) small target detection is one of the vital techniques in many military applications, including IR remote sensing, early warning, and IR precise guidance. Over-complete dictionary-based sparse representation is an effective image representation method that can capture the geometrical features of IR small targets through the redundancy of the dictionary. In this paper, we concentrate on solving the problem of robust infrared small target detection under various scenes via sparse representation theory. First, a frequency saliency detection-based preprocessing step is developed to extract suspected regions that may contain the target, so that the subsequent computing load is reduced. Second, a target over-complete dictionary is constructed by a varietal two-dimensional Gaussian model with an extent feature constraint and a background term. Third, a sparse representation model with a non-negativity constraint is proposed for the suspected regions to calculate the corresponding coefficient vectors. Fourth, the detection problem is converted to an l1-regularized optimization solved through an accelerated proximal gradient (APG) method. Finally, based on the distinct sparsity difference, an evaluation index called the sparse rate (SR) is presented to extract the real target directly by an adaptive segmentation. Extensive experiments demonstrate both the effectiveness and robustness of this method.
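
    Step three above, coding each suspected region over a Gaussian target dictionary under a non-negativity constraint, can be illustrated with an l1-regularized non-negative least-squares solver. The sketch below uses scikit-learn's Lasso with positive=True as a stand-in for the accelerated proximal gradient solver, the toy Gaussian dictionary omits the extent-feature constraint and the background term, and the sparse-rate index shown is only an illustrative definition.

```python
import numpy as np
from sklearn.linear_model import Lasso

def gaussian_atom(size, sigma_x, sigma_y, cx, cy):
    """One atom of a toy target dictionary: a normalised 2-D Gaussian blob."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-(((x - cx) / sigma_x) ** 2 + ((y - cy) / sigma_y) ** 2) / 2.0)
    return (g / np.linalg.norm(g)).ravel()

size = 11
atoms = [gaussian_atom(size, sx, sy, cx, cy)
         for sx in (1.0, 1.5) for sy in (1.0, 1.5)
         for cx in range(3, 8) for cy in range(3, 8)]
D = np.stack(atoms, axis=1)                     # dictionary, shape (size*size, n_atoms)

# non-negative l1-regularised coding of a suspected region
rng = np.random.default_rng(0)
region = gaussian_atom(size, 1.2, 1.2, 5, 5) + 0.05 * rng.standard_normal(size * size)
coder = Lasso(alpha=0.01, positive=True, fit_intercept=False,
              max_iter=10000).fit(D, region)

# illustrative sparse-rate-style index: fraction of active atoms
sparse_rate = np.count_nonzero(coder.coef_) / coder.coef_.size
print(sparse_rate)
```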

  6. Locality Constrained Joint Dynamic Sparse Representation for Local Matching Based Face Recognition

    PubMed Central

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  7. Robust multi-atlas label propagation by deep sparse representation

    PubMed Central

    Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong

    2016-01-01

    Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over minority patterns. The violation of these basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also by using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared

  8. Robust multi-atlas label propagation by deep sparse representation.

    PubMed

    Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong

    2017-03-01

    Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over minority patterns. The violation of these basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also by using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared

  9. Noise reduction by sparse representation in learned dictionaries for application to blind tip reconstruction problem

    NASA Astrophysics Data System (ADS)

    Jóźwiak, Grzegorz

    2017-03-01

    Scanning probe microscopy (SPM) is a well known tool used for the investigation of phenomena in objects in the nanometer size range. However, quantitative results are limited by the size and the shape of the nanoprobe used in experiments. Blind tip reconstruction (BTR) is a very popular method used to reconstruct the upper boundary on the shape of the probe. This method is known to be very sensitive to all kinds of interference in the atomic force microscopy (AFM) image. Due to mathematical morphology calculus, the interference makes the BTR results biased rather than randomly disrupted. For this reason, the careful choice of methods used for image enhancement and denoising, as well as the shape of a calibration sample are very important. In the paper, the results of thorough investigations on the shape of a calibration standard are shown. A novel shape is proposed and a tool for the simulation of AFM images of this calibration standard was designed. It was shown that careful choice of the initial tip allows us to use images of hole structures to blindly reconstruct the shape of a probe. The simulator was used to test the impact of modern filtration algorithms on the BTR process. These techniques are based on sparse approximation with function dictionaries learned on the basis of an image itself. Various learning algorithms and parameters were tested to determine the optimal combination for sparse representation. It was observed that the strong reduction of noise does not guarantee strong reduction in reconstruction errors. It seems that further improvements will be possible by the combination of BTR and a noise reduction procedure.

  10. Low-rank and eigenface based sparse representation for face recognition.

    PubMed

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation-based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank approximation images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method.
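
    The pipeline reads as: per-class eigenfaces form a compact dictionary, and classification follows the usual SRC rule of minimum class-wise reconstruction residual. A minimal sketch is given below, with ordinary SVD standing in for the Robust PCA low-rank recovery step and all parameters (number of eigenfaces, Lasso penalty) chosen for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def eigenface_dictionary(train_by_class, n_eig=10):
    """Per-class eigenfaces via SVD (Robust PCA low-rank recovery is omitted;
    ordinary SVD on the raw images stands in for it)."""
    blocks, labels = [], []
    for label, X in train_by_class.items():      # X: (n_pixels, n_images)
        U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                                full_matrices=False)
        blocks.append(U[:, :n_eig])
        labels.extend([label] * n_eig)
    return np.hstack(blocks), np.asarray(labels)

def src_classify(y, D, labels, alpha=0.01):
    """SRC rule: code y over the whole dictionary, then assign the class
    whose atoms give the smallest reconstruction residual."""
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, y).coef_
    best, best_res = None, np.inf
    for label in np.unique(labels):
        mask = labels == label
        res = np.linalg.norm(y - D[:, mask] @ coef[mask])
        if res < best_res:
            best, best_res = label, res
    return best

# toy usage with random "face" data
rng = np.random.default_rng(0)
train = {c: rng.standard_normal((1024, 20)) for c in ("A", "B")}
D, labels = eigenface_dictionary(train)
print(src_classify(rng.standard_normal(1024), D, labels))
```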

  11. Low-Rank and Eigenface Based Sparse Representation for Face Recognition

    PubMed Central

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation-based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank approximation images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method. PMID:25334027

  12. Seismic detection method for small-scale discontinuities based on dictionary learning and sparse representation

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei

    2017-02-01

    Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy and are easily contaminated by noise, so effectively extracting them from seismic data is a challenging problem. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can extract high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model, comprising a two-stage iterative procedure of sparse coding and dictionary updating, is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, that method handles only one dictionary atom at a time. In order to improve computational efficiency, a regularized version of the OMP algorithm is presented that simultaneously updates a number of atoms at a time. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. A field example from carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
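
    For reference, the K-SVD alternation described above (OMP sparse coding followed by per-atom rank-one SVD updates) can be written compactly as below. This is a simplified sketch: it uses scikit-learn's standard OMP rather than the regularized OMP variant proposed in the paper, and patch extraction from seismic sections is omitted.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Simplified K-SVD: alternate OMP sparse coding with per-atom SVD updates.
    Y: (n_features, n_samples) matrix of vectorised patches."""
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)       # sparse coding
        for k in range(n_atoms):                                # dictionary update
            users = np.nonzero(X[k])[0]                         # samples using atom k
            if users.size == 0:
                continue
            # error matrix excluding atom k's contribution, restricted to its users
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = s[0] * Vt[0]
    return D, X

# toy usage: 8x8 vectorised patches with random content
rng = np.random.default_rng(1)
Y = rng.standard_normal((64, 400))
D, X = ksvd(Y, n_atoms=96, sparsity=5)
```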

  13. Signal denoising and ultrasonic flaw detection via overcomplete and sparse representations.

    PubMed

    Zhang, Guang-Ming; Harvey, David M; Braden, Derek R

    2008-11-01

    Sparse signal representation over overcomplete dictionaries is a relatively recent technique in the signal processing community, with applications extending into many fields. In this paper, the technique is utilized to address the ultrasonic flaw detection and noise suppression problem. In particular, a noisy ultrasonic signal is decomposed into sparse representations using a sparse Bayesian learning algorithm and an overcomplete dictionary customized from a Gabor dictionary by incorporating a priori information about the transducer used. Nonlinear postprocessing, including thresholding and pruning, is then applied to the decomposed coefficients to reduce the noise contribution and extract the flaw information. Because of the highly compact nature of sparse representations, flaw echoes are packed into a few significant coefficients, while noise energy is scattered over the dictionary atoms, generating insignificant coefficients. This property greatly increases the efficiency of the pruning and thresholding operations and is extremely useful for detecting flaw echoes embedded in background noise. The performance of the proposed approach is verified experimentally and compared with a wavelet transform signal processor. Experimental results for detecting ultrasonic flaw echoes contaminated by additive white Gaussian noise or correlated noise are presented.

  14. Detection of dual-band infrared small target based on joint dynamic sparse representation

    NASA Astrophysics Data System (ADS)

    Zhou, Jinwei; Li, Jicheng; Shi, Zhiguang; Lu, Xiaowei; Ren, Dongwei

    2015-10-01

    Infrared small target detection is a crucial yet still difficult issue in aeronautic and astronautic applications. Sparse representation is an important mathematical tool and has been used extensively in image processing in recent years. In this paper, joint sparse representation is applied to dual-band infrared dim target detection. Firstly, according to the characteristics of dim targets in dual-band infrared images, a two-dimensional Gaussian intensity model is used to construct the target dictionary, which is then classified into different sub-classes according to the different positions of the Gaussian function's center point in the image block. Exploiting the fact that dual-band small target detection can use the same dictionary and that the sparsity lies not at the atom level but at the sub-class level, the detection of targets in dual-band infrared images is converted into a joint dynamic sparse representation problem, and dynamic active sets are used to describe the sparsity constraint on the coefficients. Two modified sparsity concentration index (SCI) criteria are proposed to evaluate whether targets exist in the images. Experiments show that the proposed algorithm achieves better detection performance and that dual-band detection is much more robust to noise than single-band detection. Moreover, the proposed method can be extended to multi-spectrum small target detection.

  15. Segmentation of Hyperacute Cerebral Infarcts Based on Sparse Representation of Diffusion Weighted Imaging

    PubMed Central

    Zhang, Xiaodong; Jing, Shasha; Gao, Peiyi; Xue, Jing; Su, Lu; Li, Weiping; Ren, Lijie

    2016-01-01

    Segmentation of infarcts at hyperacute stage is challenging as they exhibit substantial variability which may even be hard for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items including three volumes of diffusion weighted imaging and a computed asymmetry map are employed to extract patch features which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation to stabilize sparse code. To decrease computation cost and to reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. It is shown that the proposed method could handle well infarcts with intensity variability and ill-defined edges to yield significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a potential tool to quantify infarcts from diffusion weighted imaging at hyperacute stage with accuracy and speed to assist the decision making especially for thrombolytic therapy. PMID:27746825

  16. Sparse representations of gravitational waves from precessing compact binaries.

    PubMed

    Blackman, Jonathan; Szilagyi, Bela; Galley, Chad R; Tiglio, Manuel

    2014-07-11

    Many relevant applications in gravitational wave physics share a significant common problem: the seven-dimensional parameter space of gravitational waveforms from precessing compact binary inspirals and coalescences is large enough to prohibit covering the space of waveforms with sufficient density. We find that by using the reduced basis method together with a parametrization of waveforms based on their phase and precession, we can construct ultracompact yet high-accuracy representations of this large space. As a demonstration, we show that less than 100 judiciously chosen precessing inspiral waveforms are needed for 200 cycles, mass ratios from 1 to 10, and spin magnitudes ≤0.9. In fact, using only the first 10 reduced basis waveforms yields a maximum mismatch of 0.016 over the whole range of considered parameters. We test whether the parameters selected from the inspiral regime result in an accurate reduced basis when including merger and ringdown; we find that this is indeed the case in the context of a nonprecessing effective-one-body model. This evidence suggests that as few as ∼100 numerical simulations of binary black hole coalescences may accurately represent the seven-dimensional parameter space of precession waveforms for the considered ranges.

  17. Joint sparse representation of brain activity patterns in multi-task fMRI data.

    PubMed

    Ramezani, M; Marble, K; Trang, H; Johnsrude, I S; Abolmaesumi, P

    2015-01-01

    A single-task functional magnetic resonance imaging (fMRI) experiment may only partially highlight alterations to functional brain networks affected by a particular disorder. Multivariate analysis across multiple fMRI tasks may increase the sensitivity of fMRI-based diagnosis. Prior research using multi-task analysis in fMRI, such as those that use joint independent component analysis (jICA), has mainly assumed that brain activity patterns evoked by different tasks are independent. This may not be valid in practice. Here, we use sparsity, which is a natural characteristic of fMRI data in the spatial domain, and propose a joint sparse representation analysis (jSRA) method to identify common information across different functional subtraction (contrast) images in data from a multi-task fMRI experiment. Sparse representation methods do not require independence, or that the brain activity patterns be nonoverlapping. We use functional subtraction images within the joint sparse representation analysis to generate joint activation sources and their corresponding sparse modulation profiles. We evaluate the use of sparse representation analysis to capture individual differences with simulated fMRI data and with experimental fMRI data. The experimental fMRI data was acquired from 16 young (age: 19-26) and 16 older (age: 57-73) adults obtained from multiple speech comprehension tasks within subjects, where an independent measure (namely, age in years) can be used to differentiate between groups. Simulation results show that this method yields greater sensitivity, precision, and higher Jaccard indexes (which measures similarity and diversity of the true and estimated brain activation sources) than does the jICA method. Moreover, superiority of the jSRA method in capturing individual differences was successfully demonstrated using experimental fMRI data.

  18. Gyrator transform based double random phase encoding with sparse representation for information authentication

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-07-01

    Optical information security systems have drawn long-standing attention. In this paper, an optical information authentication approach using gyrator transform based double random phase encoding with sparse representation is proposed. Different from traditional optical encryption schemes, only a sparse version of the ciphertext is preserved, and hence the decrypted result is completely unrecognizable and shows no similarity to the plaintext. However, we demonstrate that the noise-like decipher result can be effectively authenticated by means of an optical correlation approach. Simulations show that the proposed method is feasible and effective, and can provide additional protection for optical security systems.

  19. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.

  20. A dedicated greedy pursuit algorithm for sparse spectral representation of music sound.

    PubMed

    Rebollo-Neira, Laura; Aggarwal, Gagan

    2016-10-01

    A dedicated algorithm for sparse spectral representation of music sound is presented. The goal is to enable the representation of a piece of music signal as a linear superposition of as few spectral components as possible, without affecting the quality of the reproduction. A representation of this nature is said to be sparse. In the present context sparsity is accomplished by greedy selection of the spectral components, from an overcomplete set called a dictionary. The proposed algorithm is tailored to be applied with trigonometric dictionaries. Its distinctive feature being that it avoids the need for the actual construction of the whole dictionary, by implementing the required operations via the fast Fourier transform. The achieved sparsity is theoretically equivalent to that rendered by the orthogonal matching pursuit (OMP) method. The contribution of the proposed dedicated implementation is to extend the applicability of the standard OMP algorithm, by reducing its storage and computational demands. The suitability of the approach for producing sparse spectral representation is illustrated by comparison with the traditional method, in the line of the short time Fourier transform, involving only the corresponding orthonormal trigonometric basis.
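
    The following sketch conveys the idea of obtaining all dictionary correlations with a fast transform instead of an explicit dictionary, but only for the degenerate case of an orthonormal DCT-II basis, where greedy selection with re-projection reduces to keeping the largest-magnitude coefficients. The paper's algorithm works with a redundant trigonometric dictionary and a genuine OMP recursion, which this sketch does not reproduce.

```python
import numpy as np
from scipy.fft import dct, idct

def sparse_spectral_approx(signal, n_components):
    """Greedy spectral sparse approximation in an orthonormal DCT-II basis.
    All inner products D^T r are obtained with a fast transform; because the
    basis is orthonormal, the greedy OMP-style selection collapses to keeping
    the largest-magnitude DCT coefficients."""
    coeffs = dct(signal, norm="ortho")
    keep = np.argsort(np.abs(coeffs))[-n_components:]
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return idct(sparse, norm="ortho"), keep

# toy usage: a chord-like signal approximated by 20 spectral components
t = np.arange(8000) / 8000.0
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
x_hat, support = sparse_spectral_approx(x, 20)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative approximation error
```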

  1. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

    Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.

  2. Texton and Sparse Representation Based Texture Classification of Lung Parenchyma in CT Images

    PubMed Central

    Yang, Jie; Feng, Xinyang; Angelini, Elsa D.; Laine, Andrew F.

    2017-01-01

    Automated texture analysis of lung computed tomography (CT) images is a critical tool in subtyping pulmonary emphysema and diagnosing chronic obstructive pulmonary disease (COPD). Texton-based methods encode lung textures with nearest-texton frequency histograms, and have achieved high performance for supervised classification of emphysema subtypes from annotated lung CT images. In this work, we first explore characterizing lung textures with sparse decomposition from texton dictionaries, using different regularization strategies, and then extend the sparsity-inducing constraint to the construction of the dictionaries. The methods were evaluated on a publicly available lung CT database of annotated emphysema subtypes. We show that enforcing sparse decompositions from texton dictionaries and unsupervised dictionary learning can achieve high classification accuracy (>90%). The flexibility of sparsity-inducing models, embedded either in the representation stage or in the dictionary learning stage, has potential in providing consistent classification performance on heterogeneous lung CT datasets with further parameter tuning. PMID:28268558

  3. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of the palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.

  4. Enhancement of snow cover change detection with sparse representation and dictionary learning

    NASA Astrophysics Data System (ADS)

    Varade, D.; Dikshit, O.

    2014-11-01

    Sparse representation and decoding are often used for denoising images and compressing images with respect to their inherent features. In this paper, we adopt a methodology incorporating sparse representation of a snow cover change map using a K-SVD trained dictionary and sparse decoding to enhance the change map. Pixels falsely characterized as "changes" are eliminated using this approach. The preliminary change map was generated using differenced NDSI or S3 maps in the case of Resourcesat-2 and Landsat 8 OLI imagery, respectively. These maps are extracted into patches for compressed sensing using the Discrete Cosine Transform (DCT) to generate an initial dictionary, which is trained by the K-SVD approach. The trained dictionary is used for sparse coding of the change map using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map incorporates a greater degree of smoothing and represents the features (snow cover changes) with better accuracy. The enhanced change map is segmented using k-means to discriminate between changed and non-changed pixels. The segmented enhanced change map is compared, firstly with the difference of Support Vector Machine (SVM) classified NDSI maps, and secondly with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8. The k-hat statistic is computed to determine the accuracy of the proposed approach.

  5. High Capacity Reversible Data Hiding in Encrypted Images by Patch-Level Sparse Representation.

    PubMed

    Cao, Xiaochun; Du, Ling; Wei, Xingxing; Meng, Dan; Guo, Xiaojie

    2016-05-01

    Reversible data hiding in encrypted images has attracted considerable attention from the communities of privacy security and protection. The success of the previous methods in this area has shown that a superior performance can be achieved by exploiting the redundancy within the image. Specifically, because the pixels in the local structures (like patches or regions) have a strong similarity, they can be heavily compressed, thus resulting in a large hiding room. In this paper, to better explore the correlation between neighbor pixels, we propose to consider the patch-level sparse representation when hiding the secret data. The widely used sparse coding technique has demonstrated that a patch can be linearly represented by some atoms in an over-complete dictionary. As the sparse coding is an approximation solution, the leading residual errors are encoded and self-embedded within the cover image. Furthermore, the learned dictionary is also embedded into the encrypted image. Thanks to the powerful representation of sparse coding, a large vacated room can be achieved, and thus the data hider can embed more secret messages in the encrypted image. Extensive experiments demonstrate that the proposed method significantly outperforms the state-of-the-art methods in terms of the embedding rate and the image quality.

  6. High resolution OCT image generation using super resolution via sparse representation

    NASA Astrophysics Data System (ADS)

    Asif, Muhammad; Akram, Muhammad Usman; Hassan, Taimur; Shaukat, Arslan; Waqar, Razi

    2017-02-01

    In this paper we propose a technique for obtaining a high resolution (HR) image from a single low resolution (LR) image, using a jointly learned dictionary, on the basis of image statistics research. It suggests that, with an appropriate choice of an over-complete dictionary, image patches can be well represented as a sparse linear combination of its atoms. Medical imaging for clinical analysis and medical intervention is used to create visual representations of the interior of the body, as well as visual representations of the function of some organs or tissues (physiology). A number of medical imaging techniques are in use, such as MRI, CT scan, X-rays and Optical Coherence Tomography (OCT). OCT is one of the newer technologies in medical imaging, and one of its uses is in ophthalmology, where it is employed for analysis of choroidal thickness in healthy and diseased eyes, in conditions such as age-related macular degeneration, central serous chorioretinopathy, diabetic retinopathy and inherited retinal dystrophies. We propose a technique for enhancing OCT images that can be used for clearly identifying and analyzing particular diseases. Our method uses dictionary learning to generate a high resolution image from a single input LR image. We train two joint dictionaries, one with OCT images and the second with multiple different natural images, and compare the results with a previous SR technique. The proposed method with both dictionaries produces HR images of superior quality compared with the other SR method. The proposed technique is very effective for noisy OCT images and produces up-sampled and enhanced OCT images.
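
    The coupled-dictionary idea, code the LR patch over an LR dictionary and reuse the same sparse coefficients with the paired HR dictionary, can be sketched as below. The joint training step is omitted, the dictionaries here are random stand-ins, and the Lasso coder and patch sizes are illustrative rather than the authors' settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

def super_resolve_patch(lr_patch, D_lr, D_hr, alpha=0.05):
    """Coupled-dictionary SR: code the low-resolution patch over the LR
    dictionary and reuse the same sparse coefficients with the HR dictionary
    (both dictionaries are assumed to have been trained jointly)."""
    coef = Lasso(alpha=alpha, fit_intercept=False,
                 max_iter=10000).fit(D_lr, lr_patch).coef_
    return D_hr @ coef

# toy shapes: 6x6 LR patches (36-d) mapped to 12x12 HR patches (144-d),
# with a jointly indexed pair of 256-atom dictionaries
rng = np.random.default_rng(0)
D_lr = rng.standard_normal((36, 256));  D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((144, 256)); D_hr /= np.linalg.norm(D_hr, axis=0)
hr_patch = super_resolve_patch(rng.standard_normal(36), D_lr, D_hr)
print(hr_patch.shape)
```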

  7. Multimodal image data fusion for Alzheimer's Disease diagnosis by sparse representation.

    PubMed

    Ortiz, Andrés; Fajardo, Daniel; Górriz, Juan M; Ramírez, Javier; Martínez-Murcia, Francisco J

    2014-01-01

    Alzheimer's Disease (AD) diagnosis can be carried out by analysing functional or structural changes in the brain. Functional changes associated with neurological disorders can be identified by positron emission tomography (PET), as it allows the activation of certain areas of the brain during specific task performance to be studied. On the other hand, neurological disorders can also be discovered by analysing structural changes in the brain, which are usually assessed by Magnetic Resonance Imaging (MRI). In fact, computer-aided diagnosis (CAD) tools that have recently been devised for the diagnosis of neurological disorders use functional or structural data. However, functional and structural data can be fused in order to improve the accuracy and to diminish the false positive rate of CAD tools. In this paper we present a method for the diagnosis of AD which fuses multimodal image (PET and MRI) data by combining Sparse Representation Classifiers (SRC). The method presented in this work shows accuracy values of up to 95% and clearly outperforms the classification outcomes obtained using single-modality images.

  8. Super-resolution of hyperspectral images using sparse representation and Gabor prior

    NASA Astrophysics Data System (ADS)

    Patel, Rakesh C.; Joshi, Manjunath V.

    2016-04-01

    Super-resolution (SR) as a postprocessing technique is quite useful in enhancing the spatial resolution of hyperspectral (HS) images without affecting their spectral resolution. We present an approach to increase the spatial resolution of HS images by making use of sparse representation and a Gabor prior. The low-resolution HS observations, consisting of a large number of bands, are represented as a linear combination of a small number of basis images using principal component analysis (PCA), and the significant components are used in our work. We first obtain initial SR estimates on this reduced dimension by using a compressive sensing-based method. Since SR is an ill-posed problem, the final solution is obtained by using a regularization framework. The novelty of our approach lies in: (1) estimation of the optimal point spread function in the form of a decimation matrix, and (2) use of a new prior called the "Gabor prior" to super-resolve the significant PCA components. Experiments are conducted on two different HS datasets, namely a 31-band natural HS image set collected under a controlled laboratory environment and a set of 224-band real HS images collected by an airborne visible/infrared imaging spectrometer remote sensing sensor. Visual inspection and quantitative comparison confirm that our method enhances spatial information without introducing significant spectral distortion. Our conclusions are: (1) incorporate the sensor characteristics in the form of the estimated decimation matrix for SR, and (2) preserve the various frequencies in the super-resolved image by making use of the Gabor prior.

  9. Noninvasive diabetes mellitus detection using facial block color with a sparse representation classifier.

    PubMed

    Zhang, Bob; Vijaya kumar, B V K; Zhang, David

    2014-04-01

    Diabetes mellitus (DM) is gradually becoming an epidemic, affecting almost every single country. This has placed a tremendous burden on governments and healthcare officials. In this paper, we propose a new noninvasive method to detect DM based on facial block color features with a sparse representation classifier (SRC). A noninvasive capture device with image correction is initially used to capture a facial image consisting of four facial blocks strategically placed around the face. Six centroids from a facial color gamut are applied to calculate the facial color features of each block. This means that a given facial block can be represented by its facial color features. For SRC, two subdictionaries, a Healthy facial color feature subdictionary and a DM facial color feature subdictionary, are employed in the classification process. Experimental results are shown for a dataset consisting of 142 Healthy and 284 DM samples. Using a combination of the facial blocks, the SRC can distinguish the Healthy and DM classes with an average accuracy of 97.54%.

  10. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback. KSRC requires the use of a predetermined kernel function, and the selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with state-of-the-art methods.

  11. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.

  12. A dedicated greedy pursuit algorithm for sparse spectral representation of music sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Aggarwal, Gagan

    2016-10-01

    A dedicated algorithm for sparse spectral representation of music sound is presented. The goal is to enable the representation of a piece of music signal, as a linear superposition of as few spectral components as possible. A representation of this nature is said to be sparse. In the present context sparsity is accomplished by greedy selection of the spectral components, from an overcomplete set called a dictionary. The proposed algorithm is tailored to be applied with trigonometric dictionaries. Its distinctive feature being that it avoids the need for the actual construction of the whole dictionary, by implementing the required operations via the Fast Fourier Transform. The achieved sparsity is theoretically equivalent to that rendered by the Orthogonal Matching Pursuit method. The contribution of the proposed dedicated implementation is to extend the applicability of the standard Orthogonal Matching Pursuit algorithm, by reducing its storage and computational demands. The suitability of the approach for producing sparse spectral models is illustrated by comparison with the traditional method, in the line of the Short Time Fourier Transform, involving only the corresponding orthonormal trigonometric basis.

  13. Distant failure prediction for early stage NSCLC by analyzing PET with sparse representation

    NASA Astrophysics Data System (ADS)

    Hao, Hongxia; Zhou, Zhiguo; Wang, Jing

    2017-03-01

    Positron emission tomography (PET) imaging has been widely explored for treatment outcome prediction. Radiomics-driven methods provide new insight for quantitatively exploring underlying information in PET images. However, it is still a challenging problem to automatically extract clinically meaningful features for prognosis. In this work, we develop a PET-guided distant failure predictive model for early stage non-small cell lung cancer (NSCLC) patients after stereotactic ablative radiotherapy (SABR) by using sparse representation. The proposed method does not need precalculated features and can learn intrinsically distinctive features contributing to the classification of patients with distant failure. The proposed framework includes two main parts: 1) intra-tumor heterogeneity description; and 2) dictionary pair learning based sparse representation. Tumor heterogeneity is initially captured through an anisotropic kernel and represented as a set of concatenated vectors, which form the sample gallery. Then, given a test tumor image, its identity (i.e., distant failure or not) is classified by applying the dictionary pair learning based sparse representation. We evaluate the proposed approach on 48 NSCLC patients treated by SABR at our institute. Experimental results show that the proposed approach can achieve an area under the receiver operating characteristic curve (AUC) of 0.70 with a sensitivity of 69.87% and a specificity of 69.51% using five-fold cross validation.

  14. Infrared small target detection in heavy sky scene clutter based on sparse representation

    NASA Astrophysics Data System (ADS)

    Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei

    2017-09-01

    A novel infrared small target detection method based on sparse representation of sky clutter and targets is proposed in this paper to cope with the uncertainty in representing clutter and targets. The sky scene background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation over a fractal background over-complete dictionary (FBOD). The infrared small target signal is simulated by a generalized Gaussian intensity model and is expressed by a generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed over the union of the FBOD and the GGTOD, and the sparse representation energies of the target signal and the background clutter on the GGTOD differ so distinctly that this difference is adopted to distinguish targets from clutter. Experiments are conducted, and the results show that the proposed approach improves small target detection performance, especially under heavy clutter, since the background clutter can be efficiently perceived and suppressed by the FBOD and the changing target can be represented accurately by the GGTOD.

  15. Robust brain parcellation using sparse representation on resting-state fMRI.

    PubMed

    Zhang, Yu; Caspers, Svenja; Fan, Lingzhong; Fan, Yong; Song, Ming; Liu, Cirong; Mo, Yin; Roski, Christian; Eickhoff, Simon; Amunts, Katrin; Jiang, Tianzi

    2015-11-01

    Resting-state fMRI (rs-fMRI) has been widely used to segregate the brain into individual modules based on the presence of distinct connectivity patterns. Many methods have been proposed for brain parcellation using rs-fMRI, but their results have been somewhat inconsistent, potentially due to various types of noise. In this study, we provide a robust method for rs-fMRI-based brain parcellation, which constructs a sparse similarity graph based on the sparse representation coefficients of each seed voxel and then uses spectral clustering to identify distinct modules. Both the local time-varying BOLD signals and whole-brain connectivity patterns may be used as features and yield similar parcellation results. The robustness of our method was tested on both simulated and real rs-fMRI datasets. In particular, on simulated rs-fMRI data, sparse representation achieved good performance across different noise levels, with high parcellation accuracy and high robustness to noise. On real rs-fMRI data, stable parcellations of the medial frontal cortex (MFC) and parietal operculum (OP) were achieved on three different datasets, with high reproducibility within each dataset and high consistency across these results. In addition, the parcellation of the MFC was little influenced by the degree of spatial smoothing. Furthermore, the consistent parcellation of the OP corresponded well to cytoarchitectonic subdivisions and known somatotopic organizations. Our results demonstrate a promising new approach to robust brain parcellation using resting-state fMRI and sparse representation.
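
    The two-stage construction, sparse self-representation of each seed voxel against the remaining voxels followed by spectral clustering of the resulting similarity graph, can be sketched with scikit-learn. The feature matrix, penalty and cluster count below are stand-ins; the paper's choices (BOLD signals or whole-brain connectivity features, and its specific graph construction) may differ.

```python
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.cluster import SpectralClustering

# Stand-in for seed-voxel features: rows = voxels, columns = time points
rng = np.random.default_rng(0)
V = rng.standard_normal((300, 120))

# Sparse self-representation: code each voxel against all the others,
# then symmetrise the coefficient magnitudes into a similarity graph.
W = np.zeros((len(V), len(V)))
for i in range(len(V)):
    dictionary = np.delete(V, i, axis=0)                  # atoms = the other voxels
    code = sparse_encode(V[i:i + 1], dictionary,
                         algorithm="lasso_lars", alpha=0.1)
    W[i, np.arange(len(V)) != i] = np.abs(code[0])
W = 0.5 * (W + W.T)

# Spectral clustering of the sparse similarity graph into candidate modules
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(np.bincount(labels))
```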

  16. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online

    PubMed Central

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    It is difficult for structural over-complete dictionaries such as the Gabor function, and for discriminative over-complete dictionaries that are learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, by the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary can not only capture significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, yet cannot be sparsely decomposed over the opposite over-complete dictionary, so their residuals after reconstruction with a prescribed number of target and background atoms differ very visibly. Experiments are included, and the results show that the proposed approach not only improves sparsity more efficiently but also enhances small target detection performance more effectively. PMID:24871988

  17. Sparse representation for infrared dim target detection via a discriminative over-complete dictionary learned online.

    PubMed

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-05-27

    It is difficult for structural over-complete dictionaries such as the Gabor function, and for discriminative over-complete dictionaries that are learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, using the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, but not over the opposite over-complete dictionary, so their residuals after reconstruction by a prescribed number of target and background atoms differ markedly. Experimental results show that the proposed approach not only improves sparsity more efficiently but also enhances the performance of small target detection more effectively.

  18. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such an operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise; specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
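
    For a concrete picture of threshold-based sparse coding dynamics, the sketch below implements the continuous-valued locally competitive (LCA-style) relative of such networks, not the spiking HDA itself; the dictionary, threshold, and step sizes are illustrative assumptions.

```python
# Illustrative sketch: a locally competitive (LCA-style) network that converges to a
# sparse code via leaky integration and soft thresholding.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 128, 5                        # signal dim, number of neurons/atoms, true sparsity
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary columns
a_true = np.zeros(n)
a_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = Phi @ a_true                            # observed signal

lam, tau, dt, steps = 0.1, 10.0, 1.0, 400
b = Phi.T @ x                               # feedforward drive
G = Phi.T @ Phi - np.eye(n)                 # lateral inhibition weights
u = np.zeros(n)                             # membrane potentials
for _ in range(steps):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft-threshold activations
    u += (dt / tau) * (b - u - G @ a)                    # leaky integration dynamics

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
print('reconstruction error:', np.linalg.norm(x - Phi @ a))
print('active neurons:', np.count_nonzero(a))
```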

  19. Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints.

    PubMed

    Hosseini-Asl, Ehsan; Zurada, Jacek M; Nasraoui, Olfa

    2016-12-01

    We demonstrate a new deep learning autoencoder network, trained by a nonnegativity constraint algorithm (nonnegativity-constrained autoencoder), that learns features that show part-based representation of data. The learning algorithm is based on constraining negative weights. The performance of the algorithm is assessed based on decomposing data into parts and its prediction performance is tested on three standard image data sets and one text data set. The results indicate that the nonnegativity constraint forces the autoencoder to learn features that amount to a part-based representation of data, while improving sparsity and reconstruction quality in comparison with the traditional sparse autoencoder and nonnegative matrix factorization. It is also shown that this newly acquired representation improves the prediction performance of a deep neural network.
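
    A minimal sketch of the central idea, penalizing only the negative parts of the weights, is given below using PyTorch on toy data; the architecture, penalty weight, and training schedule are illustrative assumptions rather than the authors' configuration.

```python
# Illustrative sketch: an autoencoder whose loss penalizes only negative weight entries,
# a simple way to encourage nonnegative, part-based decompositions.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 64)                       # toy nonnegative data (e.g., image patches)

enc = nn.Linear(64, 32)
dec = nn.Linear(32, 64)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
alpha = 1e-2                                  # weight of the nonnegativity penalty

for epoch in range(200):
    h = torch.sigmoid(enc(X))                 # hidden code in [0, 1]
    recon = dec(h)
    mse = ((recon - X) ** 2).mean()
    # Penalize only negative weights: sum of squared negative parts of both weight matrices.
    neg_penalty = (torch.clamp(enc.weight, max=0.0) ** 2).sum() + \
                  (torch.clamp(dec.weight, max=0.0) ** 2).sum()
    loss = mse + alpha * neg_penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

print('fraction of negative encoder weights:',
      (enc.weight < 0).float().mean().item())
```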

  20. A Sparse Hierarchical Map Representation for Mars Science Laboratory Science Operations

    NASA Astrophysics Data System (ADS)

    Nefian, A. V.; Edwards, L. J.; Keely, L.; Lees, D. S.; Fluckinger, L.; Malin, M. C.; Parker, T. J.

    2015-12-01

    We describe a solution for multi-scale Mars terrain modeling and mapping with Digital Elevation Models (DEMs) and co-registered orthogonally projected imagery (ortho-images). High resolution DEMs and ortho-images derived from Mars Science Laboratory (MSL) rover science and navigation cameras are represented in context with lower resolution, wide coverage DEMs and ortho-images derived from Mars Reconnaissance Orbiter (MRO) HiRISE and CTX camera images and Mars Express (MEX) mission HRSC images. Merging MSL rover image derived terrain models with those from orbital images at a uniform high resolution would require super-sampling of the orbital data across a large area to maintain significant context. This solution is not practical, and would result in a mapping product of enormous size. Instead, we choose a sparse hierarchical map representation. Each level in this hierarchical representation is a map described by a set of tiles with fixed number of samples and fixed resolution. The number of samples in a tile is fixed for all levels and each level is associated with a specific resolution. In this work, the resolution ratio between two adjacent levels is set to two. The map at each level is sparse and it contains only the tiles for which data is available at the resolution of the given level. For example, at the highest resolution level only MSL science camera models are available and only a small set of tiles are generated in a sparse map. At the lowest resolution, the map contains the complete set of tiles. The reference level of the representation is chosen to be the HiRISE terrain model and CTX, HRSC and MSL data are projected onto this model before being mapped. While our terrain representation was developed for use in "Antares", a visual planning and sequencing tool for MSL science cameras developed at NASA Ames Research Center, it is general purpose and has a number of potential geo-science visualization applications.
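
    The sketch below shows one hypothetical way to organize such a sparse tile pyramid in code (fixed tile size, a factor-of-two resolution ratio between levels, and fallback to coarser levels where fine tiles are absent); it is not the Antares implementation, and the class and payload names are invented for illustration.

```python
# Illustrative sketch of a sparse multi-resolution tile pyramid: each level halves the
# resolution, and only tiles containing data are stored.
from dataclasses import dataclass, field
from typing import Dict, Tuple

TILE = 256                                    # samples per tile edge, fixed for all levels

@dataclass
class SparseTilePyramid:
    num_levels: int
    # levels[level][(row, col)] -> tile payload (e.g., a DEM/ortho-image block)
    levels: Dict[int, Dict[Tuple[int, int], object]] = field(default_factory=dict)

    def put(self, level: int, row: int, col: int, tile: object) -> None:
        """Store a tile only where data exists at this level's resolution."""
        self.levels.setdefault(level, {})[(row, col)] = tile

    def get(self, level: int, row: int, col: int):
        """Return the tile at (level, row, col), falling back to coarser levels."""
        for lv in range(level, self.num_levels):
            step = 2 ** (lv - level)          # resolution ratio of two between levels
            key = (row // step, col // step)
            if key in self.levels.get(lv, {}):
                return lv, self.levels[lv][key]
        return None

# Level 0 (finest): a single rover-scale tile; level 2 (coarsest here): full orbital coverage.
pyramid = SparseTilePyramid(num_levels=3)
pyramid.put(0, 10, 12, "MSL science-camera DEM tile")
pyramid.put(2, 2, 3, "HiRISE/CTX DEM tile")
print(pyramid.get(0, 10, 12))                 # finest tile available
print(pyramid.get(0, 11, 13))                 # falls back to the coarser orbital tile
```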

  1. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous

  2. Spectrum recovery method based on sparse representation for segmented multi-Gaussian model

    NASA Astrophysics Data System (ADS)

    Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan

    2016-09-01

    Hyperspectral images (HSIs) offer excellent feature discriminability by supplying diagnostic characteristics at high spectral resolution. However, various degradations, including water absorption and band-continuous noise, can negatively affect the spectral information. In addition, the huge data volume and strong redundancy among spectra create an intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstructing spectral diagnostic characteristics is therefore essential for subsequent applications of HSIs. This paper introduces a spectrum restoration method for HSIs based on a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to describe the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we derive a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the uncorrupted or key bands. Finally, a low-rank matrix recovery (LRMR) algorithm is used for post-processing to restore spatial details. The proposed method was tested on spectral data captured on the ground under artificial water-absorption conditions and on an AVIRIS HSI data set. Qualitative and quantitative assessments demonstrate the effectiveness of the method in recovering spectral information from both degradation and lossy compression, while the spectral diagnostic characteristics and spatial geometry are well preserved.

  3. SROT: Sparse representation-based over-sampling technique for classification of imbalanced dataset

    NASA Astrophysics Data System (ADS)

    Zou, Xionggao; Feng, Yueping; Li, Huiying; Jiang, Shuyu

    2017-08-01

    As one of the most popular research fields in machine learning, learning from imbalanced datasets has received increasing attention in recent years. The imbalance problem usually occurs when minority classes have far fewer samples than the others. Traditional classification algorithms do not take the distribution of the dataset into consideration, so they fail to deal with class-imbalanced learning and their classification performance tends to be dominated by the majority class. SMOTE is one of the most effective over-sampling methods for this problem; it changes the distribution of the training set by increasing the size of the minority class. However, SMOTE can easily lead to over-fitting because it generates many near-duplicate samples. To address this issue, this paper proposes an improved method based on sparse representation theory and over-sampling, named SROT (Sparse Representation-based Over-sampling Technique). SROT uses a sparse dictionary to create synthetic samples directly for solving the imbalance problem. Experiments are performed on 10 UCI datasets using C4.5 as the learning algorithm. The experimental results show that, compared with random over-sampling, SMOTE, and other methods, SROT achieves better performance in terms of AUC.
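
    The general idea, synthesizing minority samples from a learned sparse dictionary rather than by interpolating neighbors as SMOTE does, can be sketched as below; this is not the SROT algorithm itself, and the dictionary size, sparsity level, and perturbation scale are illustrative assumptions.

```python
# Illustrative sketch: oversample a minority class by perturbing sparse codes of its
# samples over a dictionary learned from that class.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
minority = rng.standard_normal((40, 10)) @ rng.standard_normal((10, 20))  # 40 samples, 20 features

dl = DictionaryLearning(n_components=15, alpha=0.5, random_state=0).fit(minority)
D = dl.components_                                        # (15 atoms, 20 features)
codes = sparse_encode(minority, D, algorithm='omp', n_nonzero_coefs=5)

n_new = 60
idx = rng.integers(0, len(minority), size=n_new)          # pick real minority samples to perturb
noisy_codes = codes[idx] * (1.0 + 0.1 * rng.standard_normal((n_new, codes.shape[1])))
synthetic = noisy_codes @ D                               # new minority-class samples
print(synthetic.shape)                                    # (60, 20)
```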

  4. Sparse representation of higher-order functional interaction patterns in task-based FMRI data.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Guo, Lei; Liu, Tianming

    2013-01-01

    Traditional task-based fMRI activation detection methods, e.g., the widely used general linear model (GLM), assume that the brain's hemodynamic responses follow the block-based or event-related stimulus paradigm. Typically, these activation detections are performed independently for each voxel and are usually followed by statistical corrections. Despite remarkable successes and wide adoption of these methods, it remains largely unknown how functional brain regions interact with each other within specific networks during task performance blocks and in the baseline. In this paper, we present a novel algorithmic pipeline to statistically infer and sparsely represent higher-order functional interaction patterns within the working memory network during task performance and in the baseline. Specifically, a collection of higher-order interactions is inferred via the greedy equivalence search (GES) algorithm for both task and baseline blocks. In the next stage, an effective online dictionary learning algorithm is utilized for sparse representation of the inferred higher-order interaction patterns. Application of this framework to working memory task-based fMRI data reveals interesting and meaningful distributions of the learned sparse dictionary atoms in task and baseline blocks. In comparison with traditional voxel-wise activation detection and recent pair-wise functional connectivity analysis, our framework offers a new methodology for representation and exploration of higher-order functional activities in the brain.

  5. Infrared moving small target detection based on saliency extraction and image sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie

    2016-10-01

    Moving small target detection in infrared images is a crucial technique in infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit the Fourier magnitude spectrum of the image to roughly extract saliency regions and use threshold segmentation to separate regions that appear salient from the background, which yields a binary image. Second, a patch-image model and an over-complete dictionary are introduced into the detection system, and infrared small target detection is converted into an optimization problem of patch-image reconstruction based on sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch-image, which contains the salient-region information, and then use the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets; the coefficients of a target image patch are sparse. Finally, for image sequences, the Euclidean distance between target positions in consecutive frames is used to reduce the false alarm rate and increase the detection accuracy of moving small targets, exploiting the correlation of target positions between frames.

  6. Mass type-specific sparse representation for mass classification in computer-aided detection on mammograms

    PubMed Central

    2013-01-01

    Background Breast cancer is the leading cancer in both incidence and mortality in the female population. For this reason, much research effort has been devoted to developing Computer-Aided Detection (CAD) systems for early detection of breast cancers on mammograms. In this paper, we propose a novel dictionary configuration underpinning sparse representation based classification (SRC). The key idea of the proposed algorithm is to improve the sparsity in terms of mass margins for the purpose of improving classification performance in CAD systems. Methods The aim of the proposed SRC framework is to construct separate dictionaries according to the types of mass margins. The underlying idea behind our method is that the separated dictionaries can enhance the sparsity of the mass class (true-positive), leading to an improved performance in differentiating mammographic masses from normal tissues (false-positive). When a mass sample is given for classification, the sparse solutions based on the corresponding dictionaries are solved separately and combined at the score level. Experiments have been performed on both the Digital Database for Screening Mammography (DDSM) and a clinical Full Field Digital Mammogram (FFDM) database (DB). In our experiments, sparsity concentration in the true class (SCTC) and area under the receiver operating characteristic (ROC) curve (AUC) were measured to compare the proposed method with a conventional single-dictionary approach. In addition, a support vector machine (SVM) was used to compare our method with a state-of-the-art classifier extensively used for mass classification. Results Compared with the conventional single-dictionary configuration, the proposed approach improves SCTC by up to 13.9% and 23.6% on the DDSM and FFDM DBs, respectively. Moreover, the proposed method improves AUC by 8.2% and 22.1% on the DDSM and FFDM DBs, respectively. Compared to the SVM classifier, the proposed method improves

  7. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in local neighborhoods, and LatLRR fails to capture local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use a local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt a subspace segmentation method to improve segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  8. Sparse representation-based volumetric super-resolution algorithm for 3D CT images of reservoir rocks

    NASA Astrophysics Data System (ADS)

    Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong

    2017-09-01

    The parameter evaluation of reservoir rocks can help us to identify components and calculate permeability and other parameters, and it plays an important role in the petroleum industry. Until now, computed tomography (CT) has remained an irreplaceable way to acquire the microstructure of reservoir rocks. During evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view results in low-resolution images, and high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cubes of voxel pairs, and these cube pairs are used to calculate two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high-frequency component. Furthermore, to improve the result, a new feature extraction method combining BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method and used the PSNR and FSIM to evaluate it quantitatively.
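
    A hedged 1-D analogue of the coupled-dictionary idea is sketched below: dictionaries are learned on concatenated low/high-resolution patch pairs, and the high-resolution part of a test patch is predicted from the sparse code of its low-resolution part. scikit-learn routines stand in for the authors' 3-D pipeline, and the BM4D/Laplacian feature-extraction step is omitted; all sizes and parameters are illustrative.

```python
# Illustrative sketch of coupled-dictionary super-resolution on 1-D toy "patches".
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_pairs, hr_len = 500, 16
hr = np.cumsum(rng.standard_normal((n_pairs, hr_len)), axis=1)   # smooth HR signals
lr = hr[:, ::2]                                                  # downsampled LR version

pairs = np.hstack([lr, hr])                                      # concatenated LR|HR training pairs
D = MiniBatchDictionaryLearning(n_components=48, alpha=1.0,
                                random_state=0).fit(pairs).components_
D_lr, D_hr = D[:, :lr.shape[1]], D[:, lr.shape[1]:]              # coupled sub-dictionaries

test_hr = np.cumsum(rng.standard_normal(hr_len))
test_lr = test_hr[::2]

# Encode the LR patch over a row-normalized LR sub-dictionary, then rescale the code.
norms = np.linalg.norm(D_lr, axis=1)
norms[norms == 0] = 1.0
code = sparse_encode(test_lr[None, :], D_lr / norms[:, None],
                     algorithm='omp', n_nonzero_coefs=6)[0] / norms
est_hr = code @ D_hr                                             # HR estimate from the LR code
print('relative error:', np.linalg.norm(est_hr - test_hr) / np.linalg.norm(test_hr))
```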

  9. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation

    PubMed Central

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-01-01

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver’s EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is firstly introduced in PSD to estimate the driver’s vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278
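
    The vigilance-detection step can be pictured as power-spectral-density extraction followed by minimum-residual sparse classification. The sketch below does this on simulated signals, using Welch's method and raw labeled epochs as class dictionaries instead of the K-SVD dictionaries of the paper; signal models and parameters are all illustrative assumptions.

```python
# Illustrative sketch (simulated signals, not the authors' BCI pipeline): PSD features
# plus minimum sparse-reconstruction-residual classification.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
fs = 128

def eeg_like(alpha_power):
    """Simulated 4 s EEG trace: broadband noise plus a 10 Hz ('alpha') component."""
    t = np.arange(4 * fs) / fs
    return rng.standard_normal(t.size) + alpha_power * np.sin(2 * np.pi * 10 * t)

def psd_feature(sig):
    _, p = welch(sig, fs=fs, nperseg=256)
    return p / np.linalg.norm(p)

# Class dictionaries built from labeled training epochs (alert = weak alpha, drowsy = strong alpha).
D_alert = np.array([psd_feature(eeg_like(0.5)) for _ in range(40)])
D_drowsy = np.array([psd_feature(eeg_like(2.0)) for _ in range(40)])

def residual(x, D):
    c = sparse_encode(x[None, :], D, algorithm='omp', n_nonzero_coefs=5)[0]
    return np.linalg.norm(x - c @ D)

test = psd_feature(eeg_like(2.0))
print('drowsy' if residual(test, D_drowsy) < residual(test, D_alert) else 'alert')
```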

  10. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation.

    PubMed

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-02-19

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is firstly introduced in PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.

  11. Ballistic targets micro-motion and geometrical shape parameters estimation from sparse decomposition representation of infrared signatures.

    PubMed

    Liu, Junliang; Chen, Shangfeng; Lu, Huanzhang; Zhao, Bendong

    2017-02-01

    Micro-motion dynamics and geometrical shape are considered to be essential evidence for infrared (IR) ballistic target recognition. However, it is usually hard or even impossible to describe the geometrical shape of an unknown target with a finite number of parameters, which makes it very difficult to estimate target micro-motion parameters from IR signals. Considering that the shapes of ballistic targets are relatively simple, this paper explores a joint optimization technique to estimate micro-motion and dominant geometrical shape parameters from a sparse decomposition representation of IR irradiance intensity signatures. By dividing an observed target surface into a number of segmented patches, an IR signature of the target can be approximately modeled as a linear combination of the observed IR signatures of the dominant segmented patches. Given this, a sparse decomposition representation of the IR signature is established with the dictionary elements defined as each segmented patch's IR signature. Then, an iterative optimization method, based on the batch second-order gradient descent algorithm, is proposed to jointly estimate target micro-motion and geometrical shape parameters. Experimental results demonstrate that the micro-motion and geometrical shape parameters can be effectively estimated using the proposed method when the noise of the IR signature is at an acceptable level, for example, SNR > 0 dB.

  12. Sparse representation of HCP grayordinate data reveals novel functional architecture of cerebral cortex.

    PubMed

    Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Tuo; Zhang, Shu; Guo, Lei; Liu, Tianming

    2015-12-01

    The recently publicly released Human Connectome Project (HCP) grayordinate-based fMRI data not only has high spatial and temporal resolution, but also offers group-corresponding fMRI signals across a large population for the first time in the brain imaging field, thus significantly facilitating mapping the functional brain architecture with much higher resolution and in a group-wise fashion. In this article, we adopt the HCP grayordinate task-based fMRI (tfMRI) data to systematically identify and characterize task-based heterogeneous functional regions (THFRs) on the cortical surface, i.e., the regions that are activated under multiple task conditions and contribute to multiple task-evoked systems during a specific task performance, and to assess the spatial patterns of identified THFRs on cortical gyri and sulci by applying a computational framework of sparse representations of grayordinate brain tfMRI signals. Experimental results demonstrate that both consistent task-evoked networks and intrinsic connectivity networks across all subjects and tasks in HCP grayordinate data are effectively and robustly reconstructed via the proposed sparse representation framework. Moreover, it is found that there are relatively consistent THFRs located in the bilateral parietal lobes, frontal lobes, and visual association cortices across all subjects and tasks. In particular, the identified THFRs are located significantly more on gyral regions than on sulcal regions. These results based on sparse representation of HCP grayordinate data reveal novel functional architecture of cortical gyri and sulci, and might provide a foundation to better understand functional mechanisms of the human cerebral cortex in the future.

  13. Sparse Representation of HCP Grayordinate Data Reveals Novel Functional Architecture of Cerebral Cortex

    PubMed Central

    Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Tuo; Zhang, Shu; Guo, Lei; Liu, Tianming

    2015-01-01

    The recently publicly released Human Connectome Project (HCP) grayordinate-based fMRI data not only has high spatial and temporal resolution, but also offers group-corresponding fMRI signals across a large population for the first time in the brain imaging field, thus significantly facilitating mapping the functional brain architecture with much higher resolution and in a group-wise fashion. In this paper, we adopt the HCP grayordinate task-based fMRI (tfMRI) data to systematically identify and characterize task-based heterogeneous functional regions (THFRs) on the cortical surface, i.e., the regions that are activated under multiple task conditions and contribute to multiple task-evoked systems during a specific task performance, and to assess the spatial patterns of identified THFRs on cortical gyri and sulci by applying a computational framework of sparse representations of grayordinate brain tfMRI signals. Experimental results demonstrate that both consistent task-evoked networks and intrinsic connectivity networks across all subjects and tasks in HCP grayordinate data are effectively and robustly reconstructed via the proposed sparse representation framework. Moreover, it is found that there are relatively consistent THFRs located in the bilateral parietal lobes, frontal lobes, and visual association cortices across all subjects and tasks. In particular, the identified THFRs are located significantly more on gyral regions than on sulcal regions. These results based on sparse representation of HCP grayordinate data reveal novel functional architecture of cortical gyri and sulci, and might provide a foundation to better understand functional mechanisms of the human cerebral cortex in the future. PMID:26466353

  14. NoGOA: predicting noisy GO annotations using evidences and sparse representation.

    PubMed

    Yu, Guoxian; Lu, Chang; Wang, Jun

    2017-07-21

    Gene Ontology (GO) is a community effort to represent functional features of gene products. GO annotations (GOA) provide functional associations between GO terms and gene products. Due to resource limitations, only a small portion of annotations is manually checked by curators; the others are electronically inferred. Although quality control techniques have been applied to ensure the quality of annotations, the community consistently reports that there are still considerable noisy (or incorrect) annotations. Given the wide application of annotations, however, how to identify noisy annotations is an important but seldom-studied open problem. We introduce a novel approach called NoGOA to predict noisy annotations. NoGOA applies sparse representation on the gene-term association matrix to reduce the impact of noisy annotations, and takes advantage of sparse representation coefficients to measure the semantic similarity between genes. Second, it preliminarily predicts noisy annotations of a gene based on aggregated votes from semantic neighborhood genes of that gene. Next, NoGOA estimates the ratio of noisy annotations for each evidence code based on direct annotations in GOA files archived in different periods, and then weights entries of the association matrix via the estimated ratios and propagates weights to ancestors of direct annotations using the GO hierarchy. Finally, it integrates the evidence-weighted association matrix and aggregated votes to predict noisy annotations. Experiments on archived GOA files of six model species (H. sapiens, A. thaliana, S. cerevisiae, G. gallus, B. taurus and M. musculus) demonstrate that NoGOA achieves significantly better results than other related methods and that removing noisy annotations improves the performance of gene function prediction. The comparative study justifies the effectiveness of integrating evidence codes with sparse representation for predicting noisy GO annotations. Codes and datasets are available at http://mlda.swu.edu.cn/codes.php?name=NoGOA .

  15. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
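
    A rough sketch of the two ingredients is shown below on synthetic features: an l1-regularized logistic regression selects a sparse subset of randomness statistics, and a mixture model is then fitted on the selected features. scikit-learn's standard GaussianMixture stands in for the fuzzy GMM of the paper, and the feature set, class model, and regularization strength are assumptions.

```python
# Illustrative sketch: sparse feature selection via l1 logistic regression, followed by
# a Gaussian mixture fitted on the selected "randomness" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# 400 flows x 16 randomness statistics; only the first 3 actually separate the classes.
X = rng.standard_normal((400, 16))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(400) > 0).astype(int)

lr = LogisticRegression(penalty='l1', solver='liblinear', C=0.5).fit(X, y)
selected = np.flatnonzero(lr.coef_[0])              # sparse set of informative features
print('selected features:', selected)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X[:, selected])
pred = gmm.predict(X[:, selected])                  # cluster assignment per flow
print('cluster sizes:', np.bincount(pred))
```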

  16. Multiple-image encryption and authentication with sparse representation by space multiplexing.

    PubMed

    Gong, Qiong; Liu, Xuyan; Li, Genquan; Qin, Yi

    2013-11-01

    A multiple-image encryption and authentication approach by space multiplexing has been proposed. The redundant spaces in the previous security systems employing sparse representation strategy are optimized. With the proposal the information of multiple images can be integrated into a synthesized ciphertext that is convenient for storage and transmission. Only when all the keys are correct can the information of the primary images be authenticated. Computer simulation results have demonstrated that the proposed method is feasible and effective. Moreover, the proposal is also proved to be robust against occlusion and noise attacks.

  17. SPARSE REPRESENTATIONS WITH DATA FIDELITY TERM VIA AN ITERATIVELY REWEIGHTED LEAST SQUARES ALGORITHM

    SciTech Connect

    WOHLBERG, BRENDT; RODRIGUEZ, PAUL

    2007-01-08

    Basis Pursuit and Basis Pursuit Denoising, well-established techniques for computing sparse representations, minimize an ℓ2 data fidelity term subject to an ℓ1 sparsity constraint or regularization term on the solution by mapping the problem to a linear or quadratic program. Basis Pursuit Denoising with an ℓ1 data fidelity term has recently been proposed, also implemented via a mapping to a linear program. The authors introduce an alternative approach via an Iteratively Reweighted Least Squares algorithm, providing greater flexibility in the choice of data fidelity term norm, and computational advantages in certain circumstances.
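
    For the common ℓ2-data/ℓ1-regularization case, a minimal IRLS iteration looks like the sketch below: the ℓ1 term is repeatedly replaced by a weighted quadratic and the resulting ridge-like system is solved in closed form. This is only the textbook special case; the flexibility in the data-fidelity norm described above is not reproduced, and the problem sizes and regularization weight are illustrative.

```python
# Minimal IRLS sketch for min_x 0.5*||A x - b||_2^2 + lam*||x||_1, solved by
# repeatedly reweighting the regularization term.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 120, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, eps = 0.02, 1e-6
x = np.zeros(n)
for _ in range(50):
    w = 1.0 / (np.abs(x) + eps)                 # IRLS weights approximating 1/|x_i|
    # Solve (A^T A + lam * diag(w)) x = A^T b, a weighted ridge subproblem.
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)

print('support recovered:', np.flatnonzero(np.abs(x) > 1e-3))
print('true support     :', np.sort(np.flatnonzero(x_true)))
```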

  18. Reducing streak artifacts in computed tomography via sparse representation in coupled dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab

    2016-03-01

    Reducing the number of acquired projections is a simple and efficient way to reduce the radiation dose in computed tomography (CT). Unfortunately, this results in streak artifacts in the reconstructed images that can significantly reduce their diagnostic value. This paper presents a novel algorithm for suppressing these artifacts in 3D CT. The proposed algorithm is based on the sparse representation of small blocks of 3D CT images in learned overcomplete dictionaries. It learns two dictionaries: the first, D_a, is for artifact-full images that have been reconstructed from a small number (approximately 100) of projections; the other, D_c, is for clean, artifact-free images. The core idea behind the proposed algorithm is to relate the representation coefficients of an artifact-full block in D_a to the representation coefficients of the corresponding artifact-free block in D_c. The relation between these coefficients is modeled with a linear mapping. The two dictionaries and the linear relation between the coefficients are learned simultaneously from the training data. To remove the artifacts from a test image, small blocks are extracted from this image and their sparse representation is computed in D_a. The linear map is then used to compute the corresponding coefficients in D_c, which are then used to produce the artifact-suppressed blocks. The authors apply the proposed algorithm to real cone-beam CT images. Their results show that the proposed algorithm can effectively suppress the artifacts and substantially improve the quality of the reconstructed images. The images produced by the proposed algorithm have a higher quality than the images reconstructed by the FDK algorithm from twice as many projections. The proposed sparsity-based algorithm can be a valuable tool for postprocessing of CT images reconstructed from a small number of projections. Therefore, it has the potential to be an effective tool for low-dose CT.
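
    The core coefficient-mapping idea can be sketched on 1-D toy blocks as below: sparse codes are computed in D_a and D_c, a linear map between them is estimated by least squares (in the paper the map and dictionaries are learned jointly), and the map is then used to predict clean codes for an artifact-full test block. Data, sizes, and parameters are illustrative assumptions.

```python
# Illustrative sketch of coupled dictionaries with a learned linear map between codes.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_blocks, dim = 600, 32
clean = np.cumsum(rng.standard_normal((n_blocks, dim)), axis=1)       # smooth "clean" blocks
streaks = np.sin(np.linspace(0, 20, dim)) * rng.standard_normal((n_blocks, 1))
artifact = clean + streaks                                            # blocks with "streaks"

learn = dict(n_components=64, alpha=1.0, random_state=0)
D_a = MiniBatchDictionaryLearning(**learn).fit(artifact).components_
D_c = MiniBatchDictionaryLearning(**learn).fit(clean).components_

code = lambda X, D: sparse_encode(X, D, algorithm='omp', n_nonzero_coefs=8)
C_a, C_c = code(artifact, D_a), code(clean, D_c)
# Least-squares linear map M with C_a @ M ~= C_c (estimated after the fact here, for simplicity).
M, *_ = np.linalg.lstsq(C_a, C_c, rcond=None)

test_clean = np.cumsum(rng.standard_normal(dim))
test_artifact = test_clean + np.sin(np.linspace(0, 20, dim)) * rng.standard_normal()
restored = code(test_artifact[None, :], D_a) @ M @ D_c
print('error before:', np.linalg.norm(test_artifact - test_clean))
print('error after :', np.linalg.norm(restored[0] - test_clean))
```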

  19. Connectivity strength-weighted sparse group representation-based brain network construction for MCI classification.

    PubMed

    Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang

    2017-02-02

    Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength, which is one of the most important inherent link-wise characteristics. Besides, based on the similarity of the link-wise connectivity, brain networks show a prominent group structure (i.e., sets of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a "connectivity strength-weighted sparse group constraint." In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method.

  20. A Novel Method of Automatic Plant Species Identification Using Sparse Representation of Leaf Tooth Features.

    PubMed

    Jin, Taisong; Hou, Xueliang; Li, Pifan; Zhou, Feifei

    2015-01-01

    Automatic species identification has many advantages over traditional species identification. Currently, most plant automatic identification methods focus on the features of leaf shape, venation and texture, which are promising for the identification of some plant species. However, leaf tooth, a feature commonly used in traditional species identification, is ignored. In this paper, a novel automatic species identification method using sparse representation of leaf tooth features is proposed. In this method, image corners are detected first, and the abnormal image corner is removed by the PauTa criteria. Next, the top and bottom leaf tooth edges are discriminated to effectively correspond to the extracted image corners; then, four leaf tooth features (Leaf-num, Leaf-rate, Leaf-sharpness and Leaf-obliqueness) are extracted and concatenated into a feature vector. Finally, a sparse representation-based classifier is used to identify a plant species sample. Tests on a real-world leaf image dataset show that our proposed method is feasible for species identification.

  1. A Novel Method of Automatic Plant Species Identification Using Sparse Representation of Leaf Tooth Features

    PubMed Central

    Jin, Taisong; Hou, Xueliang; Li, Pifan; Zhou, Feifei

    2015-01-01

    Automatic species identification has many advantages over traditional species identification. Currently, most plant automatic identification methods focus on the features of leaf shape, venation and texture, which are promising for the identification of some plant species. However, leaf tooth, a feature commonly used in traditional species identification, is ignored. In this paper, a novel automatic species identification method using sparse representation of leaf tooth features is proposed. In this method, image corners are detected first, and the abnormal image corner is removed by the PauTa criteria. Next, the top and bottom leaf tooth edges are discriminated to effectively correspond to the extracted image corners; then, four leaf tooth features (Leaf-num, Leaf-rate, Leaf-sharpness and Leaf-obliqueness) are extracted and concatenated into a feature vector. Finally, a sparse representation-based classifier is used to identify a plant species sample. Tests on a real-world leaf image dataset show that our proposed method is feasible for species identification. PMID:26440281

  2. Contour tracking in echocardiographic sequences via sparse representation and dictionary learning.

    PubMed

    Huang, Xiaojie; Dione, Donald P; Compas, Colin B; Papademetris, Xenophon; Lin, Ben A; Bregasi, Alda; Sinusas, Albert J; Staib, Lawrence H; Duncan, James S

    2014-02-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets.

  3. Sparse Representation of Deformable 3D Organs with Spherical Harmonics and Structured Dictionary.

    PubMed

    Wang, Dan; Tewfik, Ahmed H; Zhang, Yingchun; Shen, Yunhe

    2011-01-01

    This paper proposes a novel algorithm to sparsely represent deformable surfaces (SRDS) with low dimensionality, based on spherical harmonic decomposition (SHD) and orthogonal subspace pursuit (OSP). The key idea of the SRDS method is to identify the subspaces from a training data set in the transformed spherical harmonic domain and then cluster each deformation into the best-fit subspace for fast and accurate representation. The algorithm is also generalized to organs with both interior and exterior surfaces. To test feasibility, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques, and then both ex vivo and in vivo experiments are conducted using 3D magnetic resonance imaging (MRI) scans for verification in practical settings. All results demonstrate that the proposed algorithm yields sparse representations of deformable surfaces with low dimensionality and high accuracy. Specifically, the precision, evaluated as the maximum error distance between the reconstructed surface and the MRI ground truth, is better than 3 mm in real MRI experiments.

  4. Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen

    2016-07-27

    Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on the approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g., K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, the decomposition coefficients obtained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in terms of convergence speed and computational complexity, and its upper bound on the prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC or HEVC, as well as over existing super-resolution based methods, in rate-distortion performance and visual quality.

  5. A Fast Algorithm for Learning Overcomplete Dictionary for Sparse Representation Based on Proximal Operators.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie

    2015-09-01

    We present a fast, efficient algorithm for learning an overcomplete dictionary for sparse representation of signals. The whole problem is considered as a minimization of the approximation error function with a coherence penalty for the dictionary atoms and with the sparsity regularization of the coefficient matrix. Because the problem is nonconvex and nonsmooth, this minimization problem cannot be solved efficiently by an ordinary optimization method. We propose a decomposition scheme and an alternating optimization that can turn the problem into a set of minimizations of piecewise quadratic and univariate subproblems, each of which is a single variable vector problem, of either one dictionary atom or one coefficient vector. Although the subproblems are still nonsmooth, remarkably they become much simpler so that we can find a closed-form solution by introducing a proximal operator. This leads to an efficient algorithm for sparse representation. To our knowledge, applying the proximal operator to the problem with an incoherence term and obtaining the optimal dictionary atoms in closed form with a proximal operator technique have not previously been studied. The main advantages of the proposed algorithm are that, as suggested by our analysis and simulation study, it has lower computational complexity and a higher convergence rate than state-of-the-art algorithms. In addition, for real applications, it shows good performance and significant reductions in computational time.
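
    As background for the proximal-operator machinery, the sketch below shows the closed-form proximal operator of the l1 norm (soft thresholding) inside a plain ISTA loop for the coefficient update; the paper's contribution, closed-form proximal updates for the dictionary atoms with a coherence penalty, is not reproduced here, and all sizes are illustrative.

```python
# Illustrative sketch: the l1 proximal operator (soft thresholding) used in an
# ISTA-style sparse-coding update.
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms (columns)
x = D[:, rng.choice(50, 4, replace=False)] @ rng.standard_normal(4)

lam = 0.05
L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
c = np.zeros(50)
for _ in range(300):                          # ISTA: gradient step + proximal step
    grad = D.T @ (D @ c - x)
    c = prox_l1(c - grad / L, lam / L)

print('nonzeros:', np.count_nonzero(np.round(c, 4)))
print('residual:', np.linalg.norm(x - D @ c))
```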

  6. Joint detection and segmentation of vertebral bodies in CT images by sparse representation error minimization

    NASA Astrophysics Data System (ADS)

    Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2016-03-01

    Automated detection and segmentation of vertebral bodies from spinal computed tomography (CT) images is usually a prerequisite step for numerous spine-related medical applications, such as diagnosis, surgical planning and follow-up assessment of spinal pathologies. However, automated detection and segmentation are challenging tasks due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other. In this paper, we describe a sparse representation error minimization (SEM) framework for joint detection and segmentation of vertebral bodies in CT images. By minimizing the sparse representation error of sampled intensity values, we are able to recover the oriented bounding box (OBB) and segmentation binary mask for each vertebral body in the CT image. The performance of the proposed SEM framework was evaluated on five CT images of the thoracolumbar spine. The resulting Euclidean distance of 1.75 ± 1.02 mm, computed between the center points of recovered and corresponding reference OBBs, and Dice coefficient of 92.3 ± 2.7%, computed between the resulting and corresponding reference segmentation binary masks, indicate that the proposed framework can successfully detect and segment vertebral bodies in CT images of the thoracolumbar spine.

  7. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  8. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    PubMed Central

    Zhang, Xinzheng; Yang, Qiuyue; Liu, Miaomiao; Jia, Yunjian; Liu, Shujun; Li, Guojun

    2016-01-01

    Classification of target microwave images is an important application in many areas, such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least squares (ADNNLS) sparse representation is proposed. First, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Second, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Third, the testing sample is represented with an ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach is able to capture the local aspect characteristics of microwave images effectively, thereby improving classification performance. PMID:27598172

  9. Sparse Component Analysis Using Time-Frequency Representations for Operational Modal Analysis

    PubMed Central

    Qin, Shaoqian; Guo, Jie; Zhu, Changan

    2015-01-01

    Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of the sources' time-frequency (TF) representations and proposes a new TF-domain SCA method under the OMA framework. First, the measurements from the sensors are transformed to the TF domain to obtain a sparse representation. Then, single-source points (SSPs) are detected to better reveal the hyperlines which correspond to the columns of the mixing matrix. The K-hyperline clustering algorithm is used to identify the direction vectors of the hyperlines, and then the mixing matrix is calculated. Finally, the basis pursuit de-noising technique is used to recover the modal responses, from which the modal parameters are computed. The proposed method is valid even if the number of active modes exceeds the number of sensors. Numerical simulation and experimental verification demonstrate the good performance of the proposed method. PMID:25789492

  10. Tight Graph Framelets for Sparse Diffusion MRI q-Space Representation.

    PubMed

    Yap, Pew-Thian; Dong, Bin; Zhang, Yong; Shen, Dinggang

    2016-10-01

    In diffusion MRI, the outcome of estimation problems can often be improved by taking into account the correlation of diffusion-weighted images scanned with neighboring wavevectors in q-space. For this purpose, we propose in this paper to employ tight wavelet frames constructed on non-flat domains for multi-scale sparse representation of diffusion signals. This representation is well suited for signals sampled regularly or irregularly, such as on a grid or on multiple shells, in q-space. Using spectral graph theory, the frames are constructed based on quasi-affine systems (i.e., generalized dilations and shifts of a finite collection of wavelet functions) defined on graphs, which can be seen as a discrete representation of manifolds. The associated wavelet analysis and synthesis transforms can be computed efficiently and accurately without the need for explicit eigen-decomposition of the graph Laplacian, allowing scalability to very large problems. We demonstrate the effectiveness of this representation, generated using what we call tight graph framelets, in two specific applications: denoising and super-resolution in q-space using ℓ0 regularization. The associated optimization problem involves only thresholding and solving a trivial inverse problem in an iterative manner. The effectiveness of graph framelets is confirmed via evaluation using synthetic data with noncentral chi noise and real data with repeated scans.

  11. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    SciTech Connect

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  12. Detection and classification of non-stationary signals using sparse representations in adaptive dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.

    Automatic classification of non-stationary radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such signals are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. This thesis proposes an adaptive classification approach for poorly characterized targets and backgrounds based on sparse representations in non-analytical dictionaries learned from data. Conventional analytical orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of non-stationary signals, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They generally do not lead to sparse decompositions (i.e., with very few non-zero coefficients), and use in classification requires separate feature selection algorithms. Pursuit-type decompositions in analytical overcomplete (non-orthogonal) dictionaries yield sparse representations, by design, and work well for signals that are similar to the dictionary elements. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. One such overcomplete analytical dictionary method is also analyzed in this thesis for comparative purposes. The main thrust of the thesis is learning discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. Two state-of-the-art dictionary learning methods are compared, the K-SVD algorithm and Hebbian learning, in terms of their classification performance as a function of dictionary training parameters
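    A hedged sketch of the general pipeline described above: an over-complete dictionary is learned directly from windowed signal data, and a pursuit search then produces sparse codes that serve as classification features. scikit-learn's online dictionary learner and OMP stand in for the thesis's K-SVD and Hebbian learners; the sizes and random data are illustrative.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

      # Hypothetical stand-in for windowed RF training data: rows are signal windows.
      rng = np.random.default_rng(1)
      X_train = rng.standard_normal((500, 128))

      # Learn a non-analytical, overcomplete dictionary directly from the data.
      dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, batch_size=32, random_state=0)
      dico.fit(X_train)

      # Pursuit-type sparse coding of new windows over the learned dictionary;
      # the resulting sparse codes are used as classification features.
      X_test = rng.standard_normal((10, 128))
      codes = sparse_encode(X_test, dico.components_, algorithm='omp', n_nonzero_coefs=8)
      print(codes.shape)                                # (10, 256), at most 8 non-zeros per row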

  13. Fusion of sparse representation and dictionary matching for identification of humans in uncontrolled environment.

    PubMed

    Fernandes, Steven Lawrence; Bala, G Josemin

    2016-09-01

    gait recognition are developed. Then a novel biomechanics-based gait recognition technique is developed using Sparse Representation to generate what we term "score 1." Furthermore, another novel technique for composite sketch matching is developed using Dictionary Matching to generate what we term "score 2." Finally, score-level fusion using Dempster-Shafer and Proportional Conflict Distribution Rule Number 5 is performed. The proposed fusion approach is validated using a database containing biomechanics-based gait sequences and biometric-based composite sketches. From our analysis we find that a fusion of gait recognition and composite sketch matching provides excellent results for real-time human identification.

  14. Complex noise suppression using a sparse representation and 3D filtering of images

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.

    2017-08-01

    A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise, the subsequent image processing to suppress the additive noise based on 3D filtering and a sparse representation of signals in a basis of wavelets, and the concluding image processing procedure to clean the final image of the errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.

  15. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  16. RSS Fingerprint Based Indoor Localization Using Sparse Representation with Spatio-Temporal Constraint.

    PubMed

    Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun

    2016-11-03

    The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods.

  17. Fast L1-based sparse representation of EEG for motor imagery signal classification.

    PubMed

    Younghak Shin; Heung-No Lee; Balasingham, Ilangko

    2016-08-01

    Improvement of classification performance is one of the key challenges in electroencephalogram (EEG) based motor imagery brain-computer interface (BCI). Recently, the sparse representation based classification (SRC) method has been shown to provide satisfactory classification accuracy in motor imagery classification. In this paper, we aim to evaluate the performance of the SRC method in terms of not only its classification accuracy but also its computation time. For this purpose, we investigate the performance of recently developed fast L1 minimization methods for their use in SRC, such as homotopy and the fast iterative soft-thresholding algorithm (FISTA). From experimental analysis, we note that the SRC method with the fast L1 minimization algorithms provides robust classification performance, compared to the support vector machine (SVM), in terms of both time and accuracy.
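    For reference, a minimal FISTA implementation for the L1-regularized least-squares (sparse coding) step that SRC-style classifiers rely on; the dictionary, signal, and regularization value below are toy assumptions, not the paper's EEG setup.

      import numpy as np

      def soft_threshold(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def fista(A, y, lam, n_iter=200):
          # FISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
          L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          z = x.copy()
          t = 1.0
          for _ in range(n_iter):
              x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
              t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)
              x, t = x_new, t_new
          return x

      # Toy usage: code one feature vector over a unit-norm training dictionary.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 200))
      A /= np.linalg.norm(A, axis=0)
      y = A[:, 5] + 0.01 * rng.standard_normal(64)
      x_hat = fista(A, y, lam=0.05)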

  18. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  19. Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.

    PubMed

    Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K

    2011-09-01

    Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations, that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach.

  20. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    NASA Astrophysics Data System (ADS)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch problem of iris images between the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory while the second method tries to learn the correspondence relationship through PCA in the heterogeneous patch space. Both methods learn the basic atoms in iris textures across different image sensors and build connections between them. After such connections are built, at the test stage, it is possible to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results are satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the EER is reduced by 39.4% (relative) by the proposed method.

  1. Image super-resolution reconstruction via RBM-based joint dictionary learning and sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaohui; Liu, Anran; Lei, Qian

    2015-12-01

    In this paper, we propose a method for single image super-resolution (SR). Given a training set produced from a large number of paired high- and low-resolution image patches, an over-complete joint dictionary is first learned from the paired high- and low-resolution image feature spaces based on Restricted Boltzmann Machines (RBM). Then, for each low-resolution image patch densely extracted from an up-scaled low-resolution input image, its high-resolution image patch can be reconstructed based on sparse representation. Finally, the reconstructed image patches are overlapped to form a large image, and a high-resolution image can be achieved by means of iterated residual image compensation. Experimental results verify the effectiveness of the proposed method.

  2. RSS Fingerprint Based Indoor Localization Using Sparse Representation with Spatio-Temporal Constraint

    PubMed Central

    Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun

    2016-01-01

    The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882

  3. A proximal iteration for deconvolving Poisson noisy images using sparse representations.

    PubMed

    Dupé, François-Xavier; Fadili, Jalal M; Starck, Jean-Luc

    2009-02-01

    We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., the ℓ1-norm). An additional term is also included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise such as astronomy and microscopy.
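    A small sketch of the variance-stabilization idea: the Anscombe transform and a simple algebraic inverse, with soft thresholding in an orthonormal DCT basis standing in for the paper's wavelet/curvelet dictionary and forward-backward iterations; the threshold and test signal are illustrative.

      import numpy as np
      from scipy.fft import dct, idct

      def anscombe(x):
          # Anscombe variance-stabilizing transform: Poisson counts -> roughly unit-variance Gaussian.
          return 2.0 * np.sqrt(x + 3.0 / 8.0)

      def inverse_anscombe(y):
          # Simple algebraic inverse (exact unbiased inverses are more involved).
          return (y / 2.0) ** 2 - 3.0 / 8.0

      def denoise_poisson(x, threshold=1.5):
          # Stabilize, soft-threshold coefficients in an orthonormal DCT basis, invert,
          # and clip to enforce positivity of the restored signal.
          c = dct(anscombe(x), norm='ortho')
          c = np.sign(c) * np.maximum(np.abs(c) - threshold, 0.0)
          return np.clip(inverse_anscombe(idct(c, norm='ortho')), 0.0, None)

      rng = np.random.default_rng(0)
      clean = np.linspace(5.0, 50.0, 256)               # 1D toy intensity profile
      noisy = rng.poisson(clean).astype(float)
      restored = denoise_poisson(noisy)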

  4. Accelerated reconstruction of electrical impedance tomography images via patch based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Lian, Zhijie; Wang, Jianming; Chen, Qingliang; Sun, Yukuan; Li, Xiuyan; Duan, Xiaojie; Cui, Ziqiang; Wang, Huaxiang

    2016-11-01

    Electrical impedance tomography (EIT) reconstruction is a nonlinear and ill-posed problem. Exact reconstruction of an EIT image inverts a high dimensional mathematical model to calculate the conductivity field, which is computationally expensive and therefore reduces the achievable frame rate, one of the major advantages of EIT imaging. The single-step method, the state estimation method, and the projection method have commonly been used to accelerate the reconstruction process. The basic principle of these methods is to reduce computational complexity. However, maintaining high spatial resolution at low computational cost is still challenging, especially for complex conductivity distributions. This study proposes an approach to accelerate image reconstruction of EIT based on compressive sensing (CS) theory, namely, the CSEIT method. The novel CSEIT method reduces the sampling rate by minimizing redundancy in measurements, so that detailed information of the reconstruction is not lost. In order to obtain a sparse solution, which is the prior condition of signal recovery required by CS theory, a novel image reconstruction algorithm based on patch-based sparse representation is proposed. By applying the new framework of CSEIT, the data acquisition time, or the sampling rate, is reduced by more than two times, while the accuracy of reconstruction is significantly improved.

  5. Temperature and emissivity separation via sparse representation with thermal airborne hyperspectral imager data

    NASA Astrophysics Data System (ADS)

    Li, Chengyi; Tian, Shufang; Li, Shijie; Yin, Mei

    2016-10-01

    The thermal airborne hyperspectral imager (TASI), which has 32 channels that provide continuous spectral coverage within wavelengths of 8 to 11.5 μm, is very beneficial for land surface temperature and land surface emissivity (LSE) retrieval. In remote sensing applications, emissivity is important for feature classification and temperature is important for environmental monitoring, global climate change, and target recognition studies. This paper proposes a temperature and emissivity separation method via sparse representation (SR-TES) for TASI data, which exploits a difference in sparseness: the atmospheric spectrum cannot be sparsely represented under the LSE spectral dictionary. We built the dictionary from Johns Hopkins University's spectral library as an overcomplete basis, and the K-SVD dictionary learning algorithm was adopted. The simulation results showed that SR-TES performed better than the TES algorithm in the case of noise impact, and the results from TASI data for the Liuyuan research region were reasonable; partial validation revealed a root mean square error of 0.0144 for broad emissivity, which preliminarily proves that this method is feasible.

  6. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
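    The log-Euclidean Gaussian kernel itself is straightforward to sketch; the snippet below evaluates it between covariance descriptors of two toy multichannel epochs (channel count, epoch length, regularization, and bandwidth are assumptions, not the paper's settings).

      import numpy as np

      def spd_log(C):
          # Matrix logarithm of a symmetric positive definite matrix via eigendecomposition.
          w, V = np.linalg.eigh(C)
          return (V * np.log(w)) @ V.T

      def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
          # k(X, Y) = exp(-||log(X) - log(Y)||_F^2 / (2 * sigma^2)) on the SPD manifold,
          # embedding it into an RKHS where ordinary sparse representation can be performed.
          d = np.linalg.norm(spd_log(X) - spd_log(Y), 'fro')
          return np.exp(-d ** 2 / (2.0 * sigma ** 2))

      def covariance_descriptor(epoch, eps=1e-6):
          # SPD covariance descriptor of a multichannel epoch (channels x samples).
          c = np.cov(epoch)
          return c + eps * np.eye(c.shape[0])           # keep it strictly positive definite

      rng = np.random.default_rng(0)
      e1, e2 = rng.standard_normal((8, 512)), rng.standard_normal((8, 512))
      k = log_euclidean_gaussian_kernel(covariance_descriptor(e1), covariance_descriptor(e2))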

  7. Sparse Representation of Brain Aging: Extracting Covariance Patterns from Structural MRI

    PubMed Central

    Su, Longfei; Wang, Lubin; Chen, Fanglin; Shen, Hui; Li, Baojuan; Hu, Dewen

    2012-01-01

    An enhanced understanding of how normal aging alters brain structure is urgently needed for the early diagnosis and treatment of age-related mental diseases. Structural magnetic resonance imaging (MRI) is a reliable technique used to detect age-related changes in the human brain. Currently, multivariate pattern analysis (MVPA) enables the exploration of subtle and distributed changes of data obtained from structural MRI images. In this study, a new MVPA approach based on sparse representation has been employed to investigate the anatomical covariance patterns of normal aging. Two groups of participants (group 1: 290 participants; group 2: 56 participants) were evaluated in this study. These two groups were scanned with two 1.5 T MRI machines. In the first group, we obtained the discriminative patterns using a t-test filter and a sparse representation step. We were able to distinguish the young from the old cohort with very high accuracy using only a few voxels of the discriminative patterns (group 1: 98.4%; group 2: 96.4%). The experimental results showed that the selected voxels may be categorized into two components according to the two steps in the proposed method. The first component focuses on the precentral and postcentral gyri, and the caudate nucleus, which play an important role in sensorimotor tasks. The strongest volume reduction with age was observed in these clusters. The second component is mainly distributed over the cerebellum, thalamus, and right inferior frontal gyrus. These regions are critical nodes not only of the sensorimotor circuitry but also of the cognitive circuitry, although their volume shows relative resilience against aging. Considering the voxel selection procedure, we suggest that the aging of the sensorimotor and cognitive brain regions identified in this study has a covarying relationship with each other. PMID:22590522

  8. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection

    PubMed Central

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks. PMID:26496370
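    A hedged sketch of sparse variable selection in this spirit: an L1-penalized regression of the target sensor's next flow on all sensors at a one-step lag, whose non-zero weights define the learned spatio-temporal context; the synthetic flows and penalty are illustrative only.

      import numpy as np
      from sklearn.linear_model import Lasso

      def select_spatiotemporal_context(flows, target, lag=1, alpha=0.1):
          # flows  : (T, S) traffic flows of S sensors over T intervals (toy stand-in)
          # target : index of the sensor whose flow is to be predicted at time t + lag
          X = flows[:-lag, :]                           # flows at time t
          y = flows[lag:, target]                       # target flow at time t + lag
          model = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
          return np.flatnonzero(model.coef_), model     # selected sensors and the sparse model

      rng = np.random.default_rng(0)
      flows = rng.poisson(20, size=(500, 50)).astype(float)
      context, model = select_spatiotemporal_context(flows, target=7)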

  9. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    PubMed

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.

  10. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  11. Automated Variability Selection in Time-domain Imaging Surveys Using Sparse Representations with Learned Dictionaries

    NASA Astrophysics Data System (ADS)

    Wozniak, Przemyslaw R.; Moody, D. I.; Ji, Z.; Brumby, S. P.; Brink, H.; Richards, J.; Bloom, J. S.

    2013-01-01

    Exponential growth in data streams and discovery power delivered by modern time-domain imaging surveys creates a pressing need for variability extraction algorithms that are both fully automated and highly reliable. The current state-of-the-art methods based on image differencing are limited by the fact that for every real variable source the algorithm returns a large number of bogus "detections" caused by atmospheric effects and instrumental signatures coupled with imperfect image processing. Here we present a new approach to this problem inspired by recent advances in computer vision and train the machine directly on pixel data. The training data set comes from the Palomar Transient Factory survey and consists of small images centered around transient candidates with known real/bogus classification. This set of 441-dimensional vectors (21x21 pixel images) is then transformed to a linear representation using a so-called dictionary, an overcomplete basis constructed separately for each class. The learning algorithm captures the fact that the intrinsic dimensionality of the input images is typically much lower than the size of the dictionary, and therefore the data vectors are well approximated with a small number of dictionary elements. This sparse representation can be used to construct informative features for any suitable machine learning classifier. In our preliminary analysis, automatically extracted features approach the performance of features constructed by humans using subject domain knowledge.

  12. Learning to detect objects in images via a sparse, part-based representation.

    PubMed

    Agarwal, Shivani; Awan, Aatif; Roth, Dan

    2004-11-01

    We study the problem of detecting objects in still, gray-scale images. Our primary focus is the development of a learning-based approach to the problem that makes use of a sparse, part-based representation. A vocabulary of distinctive object parts is automatically constructed from a set of sample images of the object class of interest; images are then represented using parts from this vocabulary, together with spatial relations observed among the parts. Based on this representation, a learning algorithm is used to automatically learn to detect instances of the object class in new images. The approach can be applied to any object with distinguishable parts in a relatively fixed spatial configuration; it is evaluated here on difficult sets of real-world images containing side views of cars, and is seen to successfully detect objects in varying conditions amidst background clutter and mild occlusion. In evaluating object detection approaches, several important methodological issues arise that have not been satisfactorily addressed in previous work. A secondary focus of this paper is to highlight these issues and to develop rigorous evaluation standards for the object detection problem. A critical evaluation of our approach under the proposed standards is presented.

  13. Sparse Distributed Representation of Odors in a Large-scale Olfactory Bulb Circuit

    PubMed Central

    Yu, Yuguo; McTavish, Thomas S.; Hines, Michael L.; Shepherd, Gordon M.; Valenti, Cesare; Migliore, Michele

    2013-01-01

    In the olfactory bulb, lateral inhibition mediated by granule cells has been suggested to modulate the timing of mitral cell firing, thereby shaping the representation of input odorants. Current experimental techniques, however, do not enable a clear study of how the mitral-granule cell network sculpts odor inputs to represent odor information spatially and temporally. To address this critical step in the neural basis of odor recognition, we built a biophysical network model of mitral and granule cells, corresponding to 1/100th of the real system in the rat, and used direct experimental imaging data of glomeruli activated by various odors. The model allows the systematic investigation and generation of testable hypotheses of the functional mechanisms underlying odor representation in the olfactory bulb circuit. Specifically, we demonstrate that lateral inhibition emerges within the olfactory bulb network through recurrent dendrodendritic synapses when constrained by a range of balanced excitatory and inhibitory conductances. We find that the spatio-temporal dynamics of lateral inhibition plays a critical role in building the glomerular-related cell clusters observed in experiments, through the modulation of synaptic weights during odor training. Lateral inhibition also mediates the development of sparse and synchronized spiking patterns of mitral cells related to odor inputs within the network, with the frequency of these synchronized spiking patterns also modulated by the sniff cycle. PMID:23555237

  14. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  15. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  16. Rough ground surface clutter removal in air-coupled ground penetrating radar data using low-rank and sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Burns, Dylan; Orfeo, Dan; Huston, Dryver R.; Xia, Tian

    2017-04-01

    This paper explores a low-rank and sparse representation based technique to remove the clutter produced by a rough ground surface for air-coupled ground penetrating radar (GPR). For a rough ground surface, the surface clutter components in different A-Scan traces are not aligned on the depth axis. To compensate for the misalignment effect and facilitate clutter removal, the A-Scan traces are first aligned using a cross-correlation technique. Then the low-rank and sparse representation approach is applied to decompose the GPR data into a low-rank matrix, whose columns record the ground clutter in the A-Scan traces upon alignment adjustment, and a sparse matrix that features the subsurface object under test. The effectiveness of the proposed clutter removal method has been evaluated through simulations.
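    A minimal robust-PCA-style sketch of the low-rank plus sparse decomposition (a simplified augmented Lagrangian loop, not necessarily the authors' solver); the synthetic B-scan and parameter defaults are assumptions.

      import numpy as np

      def svt(M, tau):
          # Singular value thresholding: proximal operator of the nuclear norm.
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def shrink(M, tau):
          # Soft thresholding: proximal operator of the elementwise L1 norm.
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      def low_rank_plus_sparse(D, lam=None, mu=None, n_iter=200, tol=1e-7):
          # min ||L||_* + lam * ||S||_1  s.t.  D = L + S, where columns of D are aligned
          # A-Scan traces; L captures the ground clutter and S the subsurface target response.
          m, n = D.shape
          lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
          mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
          Y = np.zeros_like(D)
          S = np.zeros_like(D)
          for _ in range(n_iter):
              L = svt(D - S + Y / mu, 1.0 / mu)
              S = shrink(D - L + Y / mu, lam / mu)
              R = D - L - S
              Y += mu * R
              if np.linalg.norm(R) <= tol * np.linalg.norm(D):
                  break
          return L, S

      # Toy B-scan: rank-1 "clutter" plus a few sparse reflections plus noise.
      rng = np.random.default_rng(0)
      clutter = np.outer(np.sin(np.linspace(0, 6, 128)), np.ones(60))
      targets = np.zeros((128, 60)); targets[70, 25:35] = 2.0
      B = clutter + targets + 0.01 * rng.standard_normal((128, 60))
      L_hat, S_hat = low_rank_plus_sparse(B)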

  17. A Novel Gene Selection Method Based on Sparse Representation and Max-Relevance and Min-Redundancy.

    PubMed

    Chen, Min; He, Xiaoming; Duan, ShaoBin; Deng, YingWei

    2017-01-01

    Gene selection, as an important data preprocessing step, has received much attention. The maximum relevance and minimum redundancy (MRMR) criterion has been commonly used for gene selection and performs satisfactorily in evaluating the correlation between two genes. However, because it views genes in isolation, it ignores the influence of other genes. In this study, we propose a new method based on sparse representation and the MRMR algorithm (SRCMRM), using sparse representation coefficients to represent the relevance of genes and the correlation between genes and categories. The SRCMRM algorithm contains two steps. Firstly, the genes irrelevant to the classification target are removed by using the sparse representation coefficients. Secondly, the sparse representation coefficients are used to calculate the correlation between genes and to select the most representative gene with the highest evaluation. To validate the performance of SRCMRM, our method is compared with various algorithms. The proposed method achieves better classification accuracy for all datasets. The effectiveness and stability of our method have been proven through various experiments, which means that our method has practical significance.

  18. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

    Background: Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process. Method: We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we thus propose some novel image features for the image categorization purpose, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results: We randomly selected 990 images in JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are

  19. Combining sparseness and smoothness improves classification accuracy and interpretability.

    PubMed

    de Brecht, Matthew; Yamagishi, Noriko

    2012-04-02

    Sparse logistic regression (SLR) has been shown to be a useful method for decoding high-dimensional fMRI and MEG data by automatically selecting relevant feature dimensions. However, when applied to signals with high spatio-temporal correlations, SLR often over-prunes the feature space, which can result in overfitting and weight vectors that are difficult to interpret. To overcome this problem, we investigate a modification of ℓ₁-normed sparse logistic regression, called smooth sparse logistic regression (SSLR), which has a spatio-temporal "smoothing" prior that encourages weights that are close in time and space to have similar values. This causes the classifier to select spatio-temporally continuous groups of features, whereas SLR classifiers often select a scattered collection of independent features. We applied the method to both simulation data and real MEG data. We found that SSLR consistently increases classification accuracy, and produces weight vectors that are more meaningful from a neuroscientific perspective.

  20. Online sparse representation for remote sensing compressed-sensed video sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples acquired at a rate well below the Nyquist sampling frequency. When CS is applied to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear, non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into Key frames (K frames) and Non-Key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs) and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique according to the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI. These learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by peak signal-to-noise ratio (PSNR), is compared with that of other online sparse representation algorithms. The simulation results show its advantages in reconstruction time and robustness when the ICA algorithm is applied to remote sensing video reconstruction.
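    A hedged sketch of the decoder-side idea: learn an over-complete dictionary from side-information patches, then recover a CS-frame patch from its random block measurements by sparse coding over the projected dictionary. The library choices, patch and dictionary sizes, and sampling rate are illustrative assumptions, not the paper's configuration.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)

      # Stand-in side information: flattened 8x8 patches from a motion-compensated interpolated frame.
      si_patches = rng.standard_normal((2000, 64))

      # Train an over-complete dictionary on the side information.
      dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, batch_size=64, random_state=0)
      D = dico.fit(si_patches).components_.T            # (64, 128) synthesis dictionary

      # Block-based compressive measurement of one CS-frame patch: y = Phi @ x.
      Phi = rng.standard_normal((16, 64)) / np.sqrt(16) # 25% sampling rate
      x_true = D @ np.where(rng.random(128) < 0.05, rng.standard_normal(128), 0.0)
      y = Phi @ x_true

      # Recover the sparse code from the measurements and synthesize the patch.
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(Phi @ D, y)
      x_rec = D @ omp.coef_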

  1. Network dynamics underlying the formation of sparse, informative representations in the hippocampus

    PubMed Central

    Karlsson, Mattias P.

    2009-01-01

    During development, activity-dependent processes increase the specificity of neural responses to stimuli, but the role that this type of process plays in adult plasticity is unclear. We examined the dynamics of hippocampal activity as animals learned about new environments in order to understand how neural selectivity changes with experience. Hippocampal principal neurons fire when the animal is located in a particular subregion of its environment, and in any given environment the hippocampal representation is sparse: less than half of the neurons in areas CA1 and CA3 are active while the rest are essentially silent. Here we show that different dynamics govern the evolution of this sparsity in CA1 and upstream area CA3. CA1, but not CA3, produces twice as many spikes in novel as compared to familiar environments. This high rate firing continues during sharp wave ripple events in a subsequent rest period. The overall CA1 population rate declines and the number of active cells decreases as the environment becomes familiar and task performance improves, but the decline in rate is not uniform across neurons. Instead, the activity of cells with initial peak spatial rates above ~12 Hz is enhanced, while the activity of cells with lower initial peak rates is suppressed. The result of these changes is that the active CA1 population comes to consist of a relatively small group of cells with strong spatial tuning. This process is not evident in CA3, indicating that a region-specific and long timescale process operates in CA1 to create a sparse, spatially informative population of neurons. PMID:19109508

  2. Vessel segmentation and microaneurysm detection using discriminative dictionary learning and sparse representation.

    PubMed

    Javidi, Malihe; Pourreza, Hamid-Reza; Harati, Ahad

    2017-02-01

    Diabetic retinopathy (DR) is a major cause of visual impairment, and the analysis of retinal image can assist patients to take action earlier when it is more likely to be effective. The accurate segmentation of blood vessels in the retinal image can diagnose DR directly. In this paper, a novel scheme for blood vessel segmentation based on discriminative dictionary learning (DDL) and sparse representation has been proposed. The proposed system yields a strong representation which contains the semantic concept of the image. To extract blood vessel, two separate dictionaries, for vessel and non-vessel, capable of providing reconstructive and discriminative information of the retinal image are learned. In the test step, an unseen retinal image is divided into overlapping patches and classified to vessel and non-vessel patches. Then, a voting scheme is applied to generate the binary vessel map. The proposed vessel segmentation method can achieve the accuracy of 95% and a sensitivity of 75% in the same range of specificity 97% on two public datasets. The results show that the proposed method can achieve comparable results to existing methods and decrease false positive vessels in abnormal retinal images with pathological regions. Microaneurysm (MA) is the earliest sign of DR that appears as a small red dot on the surface of the retina. Despite several attempts to develop automated MA detection systems, it is still a challenging problem. In this paper, a method for MA detection, which is similar to our vessel segmentation approach, is proposed. In our method, a candidate detection algorithm based on the Morlet wavelet is applied to identify all possible MA candidates. In the next step, two discriminative dictionaries with the ability to distinguish MA from non-MA object are learned. These dictionaries are then used to classify the detected candidate objects. The evaluations indicate that the proposed MA detection method achieves higher average sensitivity about 2

  3. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
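    For context, a generic SRC decision rule of the kind used here: code a probe over a labeled gallery dictionary, then assign the class whose atoms give the smallest reconstruction residual. The lasso-based coder and random descriptors below are stand-ins, not the paper's multitask meshSIFT pipeline.

      import numpy as np
      from sklearn.linear_model import Lasso

      def src_classify(y, D, labels, alpha=0.01):
          # D      : (d, n) gallery descriptors as columns; labels : (n,) class of each column
          # Returns the class with the smallest class-wise reconstruction residual.
          D = D / np.linalg.norm(D, axis=0)             # unit-norm gallery atoms
          x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, y).coef_
          classes = np.unique(labels)
          residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c]) for c in classes]
          return classes[int(np.argmin(residuals))]

      # Toy usage: 5 gallery subjects with 20 random descriptors each.
      rng = np.random.default_rng(0)
      D = rng.standard_normal((128, 100))
      labels = np.repeat(np.arange(5), 20)
      probe = D[:, 37] + 0.05 * rng.standard_normal(128)
      print(src_classify(probe, D, labels))             # expected: class 1 for this toy probe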

  4. Nonnegative matrix factorization and sparse representation for the automated detection of periodic limb movements in sleep.

    PubMed

    Shokrollahi, Mehrnaz; Krishnan, Sridhar; Dopsa, Dustin D; Muir, Ryan T; Black, Sandra E; Swartz, Richard H; Murray, Brian J; Boulos, Mark I

    2016-11-01

    Stroke is a leading cause of death and disability in adults, and incurs a significant economic burden to society. Periodic limb movements (PLMs) in sleep are repetitive movements involving the great toe, ankle, and hip. Evolving evidence suggests that PLMs may be associated with high blood pressure and stroke, but this relationship remains underexplored. Several issues limit the study of PLMs, including the need to manually score them, which is time-consuming and costly. For this reason, we developed a novel automated method for nocturnal PLM detection, which was shown to be correlated with (a) the manually scored PLM index on polysomnography, and (b) white matter hyperintensities on brain imaging, which have been demonstrated to be associated with PLMs. Our proposed algorithm consists of three main stages: (1) representing the signal in the time-frequency plane using time-frequency matrices (TFM), (2) applying a K-nonnegative matrix factorization technique to decompose the TFM into its significant components, and (3) applying kernel sparse representation for classification (KSRC) to the decomposed signal. Our approach was applied to a dataset of 65 subjects who underwent polysomnography. An overall classification accuracy of 97% was achieved for discrimination of the aforementioned signals, demonstrating the potential of the presented method.

  5. Maximum-parsimony haplotype frequencies inference based on a joint constrained sparse representation of pooled DNA.

    PubMed

    Jajamovich, Guido H; Iliadis, Alexandros; Anastassiou, Dimitris; Wang, Xiaodong

    2013-09-08

    DNA pooling constitutes a cost-effective alternative in genome-wide association studies. In DNA pooling, equimolar amounts of DNA from different individuals are mixed into one sample and the frequency of each allele in each position is observed in a single genotype experiment. The identification of haplotype frequencies from pooled data, in addition to single-locus analysis, is of separate interest within these studies, as haplotypes could increase statistical power and provide additional insight. We developed a method for maximum-parsimony haplotype frequency estimation from pooled DNA data based on the sparse representation of the DNA pools in a dictionary of haplotypes. Extensions to scenarios where data is noisy or even missing are also presented. The resulting method is first applied to simulated data based on the haplotypes and their associated frequencies of the AGT gene. We further evaluate our methodology on datasets consisting of SNPs from the first 7 Mb of the HapMap CEU population. Noise and missing data were further introduced into the datasets in order to test the extensions of the proposed method. Both HIPPO and HAPLOPOOL were also applied to these datasets to compare performances. We evaluate our methodology on scenarios where pooling is more efficient relative to individual genotyping, that is, in datasets that contain pools with a small number of individuals. We show that in such scenarios our methodology outperforms state-of-the-art methods such as HIPPO and HAPLOPOOL.

  6. Clustering-weighted SIFT-based classification method via sparse representation

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Xu, Feng; He, Jun

    2014-07-01

    In recent years, sparse representation-based classification (SRC) has received significant attention due to its high recognition rate. However, the original SRC method requires a rigid alignment, which is crucial for its application. Therefore, features such as SIFT descriptors are introduced into the SRC method, resulting in an alignment-free method. However, a feature-based dictionary always contains considerable useful information for recognition. We explore the relationship of the similarity of the SIFT descriptors to multitask recognition and propose a clustering-weighted SIFT-based SRC method (CWS-SRC). The proposed approach is considerably more suitable for multitask recognition with sufficient samples. Using two public face databases (AR and Yale face) and a self-built car-model database, the performance of the proposed method is evaluated and compared to that of the SRC, SIFT matching, and MKD-SRC methods. Experimental results indicate that the proposed method exhibits better performance in the alignment-free scenario with sufficient samples.

  7. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with the learned dictionary. The high-frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. The experimental results on IKONOS and QuickBird data demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in terms of both visual effect and objective evaluation.

  8. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation

    PubMed Central

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-01-01

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér–Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement. PMID:27223287
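    The spatial-sparsity condition can be illustrated by computing the mutual coherence of a uniform-linear-array steering dictionary over a DoA grid; the array size, half-wavelength spacing, and grid resolution below are toy assumptions.

      import numpy as np

      def ula_steering_dictionary(n_sensors, angles_deg, spacing_wavelengths=0.5):
          # Far-field steering vectors of a uniform linear array over a DoA grid (unit-norm columns).
          angles = np.deg2rad(angles_deg)
          k = np.arange(n_sensors)[:, None]
          A = np.exp(-2j * np.pi * spacing_wavelengths * k * np.sin(angles)[None, :])
          return A / np.sqrt(n_sensors)

      def mutual_coherence(A):
          # Largest absolute inner product between distinct atoms; small coherence
          # is what keeps the joint sparse DoA recovery well posed.
          G = np.abs(A.conj().T @ A)
          np.fill_diagonal(G, 0.0)
          return G.max()

      grid = np.arange(-90.0, 90.0, 2.0)                # 2-degree DoA grid
      A = ula_steering_dictionary(n_sensors=8, angles_deg=grid)
      print(mutual_coherence(A))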

  9. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper the dictionary of textons is first learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct the histograms for texture representation. Finally, classification is performed by using a nearest neighbor classifier with a histogram dissimilarity measure as distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The accuracy of the proposed system, about 88%, is higher than that of a state-of-the-art method based on the basic rotation-invariant local binary pattern histograms and of a texture classification method based on texton learning by k-means, which performs almost the best among other approaches in the literature.

  10. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR) with new feature histogram maps for the classification of emphysema. First, an overcomplete dictionary of textons is learned via K-SVD on image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and thereby speed up the dictionary learning process. Second, 3D joint SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of the conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a histogram dissimilarity measure as the distance. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.

  11. A state space representation of VAR models with sparse learning for dynamic gene networks.

    PubMed

    Kojima, Kaname; Yamaguchi, Rui; Imoto, Seiya; Yamauchi, Mai; Nagasaki, Masao; Yoshida, Ryo; Shimamura, Teppei; Ueno, Kazuko; Higuchi, Tomoyuki; Gotoh, Noriko; Miyano, Satoru

    2010-01-01

    We propose a state space representation of a vector autoregressive model and its sparse learning based on L1 regularization to achieve efficient estimation of dynamic gene networks from time course microarray data. The proposed method can overcome the drawbacks of the vector autoregressive model and the state space model: the assumption of equal time intervals and the inability to separate observation and system noise in the former, and the assumption of a modular network structure in the latter. However, in a simple implementation the proposed model requires the calculation of large inverse matrices many times during the EM-based parameter estimation process. This limits the applicability of the proposed method to relatively small gene sets. We thus introduce a new calculation technique for the EM algorithm that does not require the calculation of inverse matrices. The proposed method is applied to time course microarray data of lung cells treated by stimulating EGF receptors and dosing an anticancer drug, Gefitinib. By comparing the estimated network with the control network estimated using non-treated lung cells, genes perturbed by the anticancer drug could be found, whose upstream and downstream genes in the estimated networks may be related to side effects of the drug.
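
    The paper's full state space formulation and inverse-free EM procedure are not reproduced here; the hedged sketch below shows only the underlying L1-penalized VAR(1) idea on synthetic data, regressing each gene at time t+1 on all genes at time t with scikit-learn's lasso and reading network edges from the non-zero coefficients.

```python
# Minimal sketch of a sparse (L1-penalized) VAR(1) for network inference.
# The paper's state space layer and EM estimation are omitted; data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, G = 40, 20                                    # time points, genes (synthetic)
X = rng.normal(size=(T, G))                      # stand-in expression matrix

X_past, X_next = X[:-1], X[1:]
A = np.zeros((G, G))                             # estimated VAR coefficient matrix
for g in range(G):
    model = Lasso(alpha=0.1, fit_intercept=False, max_iter=5000)
    model.fit(X_past, X_next[:, g])
    A[g] = model.coef_                           # row g: regulators of gene g

edges = np.argwhere(np.abs(A) > 1e-8)            # directed edges (target, regulator)
print(f"{len(edges)} edges recovered in the sparse VAR network")
```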

  12. Sparse Representation for Signal Reconstruction in Calorimeters Operating in High Luminosity

    NASA Astrophysics Data System (ADS)

    Barbosa, Davis P.; de A. Filho, Luciano M.; Peralva, Bernardo S.; Cerqueira, Augusto S.; de Seixas, José M.

    2017-07-01

    A calorimeter signal reconstruction method, based on the sparse representation (SR) of redundant data, is proposed for energy reconstruction in particle colliders operating in high-luminosity conditions. The signal overlapping is first modeled as an underdetermined linear system, leading to a convex set of feasible solutions. The solution with the smallest number of superimposed signals (the SR) that represents the recorded data is obtained through an interior-point (IP) optimization procedure. From a signal processing point of view, the procedure performs a source separation, where the amplitude information of each superimposed signal is obtained. In the simulation results, the proposed method is compared with a standard signal reconstruction method. For this, a toy Monte Carlo simulation was developed, focusing on calorimeter front-end signal generation only, where different levels of pileup and signal-to-noise ratio were used to assess the proposed method. The results show that the method may be competitive in high-luminosity environments.
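
    The deconvolution idea can be sketched with a toy pile-up example. The paper obtains the sparse representation with an interior-point solver; the hedged sketch below substitutes a non-negative lasso from scikit-learn, a toy exponential pulse shape, and a dictionary of time-shifted pulses, so the shapes, amplitudes, and regularization value are illustrative only.

```python
# Hedged sketch of sparse deconvolution of overlapping calorimeter pulses:
# the readout window is modeled as a superposition of time-shifted pulses and the
# sparse amplitude vector is recovered (non-negative lasso as an IP stand-in).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 128                                          # samples in the readout window
pulse = np.exp(-np.arange(n) / 6.0)              # toy unipolar pulse shape

# Dictionary whose columns are the pulse shifted to every candidate start time.
D = np.column_stack([np.roll(pulse, s) * (np.arange(n) >= s) for s in range(n)])

# Synthetic pile-up: two overlapping pulses plus noise.
true_amp = np.zeros(n)
true_amp[[30, 38]] = [5.0, 3.0]
y = D @ true_amp + 0.05 * rng.normal(size=n)

fit = Lasso(alpha=0.01, positive=True, fit_intercept=False, max_iter=10000)
fit.fit(D, y)
arrivals = np.flatnonzero(fit.coef_ > 0.5)       # thresholded arrival estimates
print("estimated arrival samples:", arrivals)
print("estimated amplitudes:", np.round(fit.coef_[arrivals], 2))
```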

  13. Hyperspectral Image Classification via Multitask Joint Sparse Representation and Stepwise MRF Optimization.

    PubMed

    Yuan, Yuan; Lin, Jianzhe; Wang, Qi

    2016-12-01

    Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. However, high data correlation makes reliable classification difficult, especially for HSI with abundant spectral information. Furthermore, traditional methods often fail to adequately consider the spatial coherency of HSI, which also limits the classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method mainly builds on multitask joint sparse representation (MJSR) and a stepwise Markov random field (MRF) framework, which are the two main contributions of this procedure. First, the MJSR not only reduces the spectral redundancy, but also retains the necessary correlation in the spectral domain during classification. Second, the stepwise optimization further exploits the spatial correlation, which significantly enhances the classification accuracy and robustness. As far as several universal quality evaluation indexes are concerned, the experimental results on the Indian Pines and Pavia University datasets demonstrate the superiority of our method compared with state-of-the-art competitors.

  14. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
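
    The wavelet preprocessing step lends itself to a short, hedged sketch: the input image is split into the four subbands and co-located patches from the subbands are concatenated into the feature vectors used for dictionary training, in place of raw image patches. PyWavelets and scikit-learn are used below on a random stand-in image; patch and dictionary sizes are illustrative.

```python
# Hedged sketch of the wavelet preprocessing for SR dictionary training:
# decompose into four subbands, concatenate co-located subband patches as features.
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((128, 128))                      # stand-in remote sensing band

cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')         # low-freq + H/V/D high-freq subbands

patch = (6, 6)

def band_patches(band):
    p = extract_patches_2d(band, patch)
    return p.reshape(p.shape[0], -1)                # one row per patch location

# One feature vector per location: the four subband patches side by side.
features = np.concatenate([band_patches(b) for b in (cA, cH, cV, cD)], axis=1)

dico = MiniBatchDictionaryLearning(n_components=64, random_state=0).fit(features)
print("feature dictionary shape:", dico.components_.shape)   # (64, 4 * 36)
```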

  15. Generalization of spectral fidelity with flexible measures for the sparse representation classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhu, Yong; Huang, Xin; Li, Jiayi

    2016-10-01

    Sparse representation classification (SRC) is becoming a promising tool for hyperspectral image (HSI) classification, where the Euclidean spectral distance (ESD) is widely used to reflect the fidelity between the original and reconstructed signals. In this paper, a generalized model is proposed to extend SRC by characterizing the spectral fidelity with flexible similarity measures. To validate the flexibility, several typical similarity measures, namely the spectral angle similarity (SAS), the spectral information divergence (SID), the structural similarity index measure (SSIM), and the ESD, are included in the generalized model. Furthermore, a general solution based on a gradient descent technique is used to solve the nonlinear optimization problem formulated by the flexible similarity measures. To test the generalized model, two actual HSIs were used, and the experimental results confirm the ability of the proposed model to accommodate the various spectral similarity measures. Performance comparisons with the ESD, SAS, SID, and SSIM criteria were also conducted, and the results consistently show the advantages of the generalized model for HSI classification in terms of overall accuracy and kappa coefficient.

  16. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Multi-modality sparse representation-based classification for Alzheimer's disease and mild cognitive impairment.

    PubMed

    Xu, Lele; Wu, Xia; Chen, Kewei; Yao, Li

    2015-11-01

    The discrimination of Alzheimer's disease (AD) and its prodromal stage, known as mild cognitive impairment (MCI), from normal control (NC) is important for patients' timely treatment. The simultaneous use of multi-modality data has been demonstrated to be helpful for more accurate identification. The current study focused on extending a multi-modality algorithm and evaluating the method by identifying AD/MCI. In this study, sparse representation-based classification (SRC), a well-developed method in pattern recognition and machine learning, was extended to a multi-modality classification framework named weighted multi-modality SRC (wmSRC). Data including three modalities, volumetric magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG) positron emission tomography (PET) and florbetapir PET, from the Alzheimer's Disease Neuroimaging Initiative database were adopted for AD/MCI classification (113 AD patients, 110 MCI patients and 117 NC subjects). Adopting wmSRC, the classification accuracy achieved 94.8% for AD vs. NC, 74.5% for MCI vs. NC, and 77.8% for progressive MCI vs. stable MCI, superior to or comparable with the results of other state-of-the-art models in recent multi-modality research. The wmSRC method is a promising tool for classification with multi-modality data. It could be effective for identifying diseases from NC with neuroimaging data, which could be helpful for the timely diagnosis and treatment of diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér–Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  19. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.

  20. Sparse representation and Bayesian detection of genome copy number alterations from microarray data

    PubMed Central

    Pique-Regi, Roger; Monso-Varona, Jordi; Ortega, Antonio; Seeger, Robert C.; Triche, Timothy J.; Asgharzadeh, Shahab

    2008-01-01

    Motivation: Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) that are associated with the development and behavior of tumors. Advances in microarray technology have allowed for greater resolution in detection of DNA copy number changes (amplifications or deletions) across the genome. However, the increase in number of measured signals and accompanying noise from the array probes present a challenge in accurate and fast identification of breakpoints that define CNA. This article proposes a novel detection technique that exploits the use of piecewise constant (PWC) vectors to represent genome copy number and sparse Bayesian learning (SBL) to detect CNA breakpoints. Methods: First, a compact linear algebra representation for the genome copy number is developed from normalized probe intensities. Second, SBL is applied and optimized to infer locations where copy number changes occur. Third, a backward elimination (BE) procedure is used to rank the inferred breakpoints; and a cut-off point can be efficiently adjusted in this procedure to control for the false discovery rate (FDR). Results: The performance of our algorithm is evaluated using simulated and real genome datasets and compared to other existing techniques. Our approach achieves the highest accuracy and lowest FDR while improving computational speed by several orders of magnitude. The proposed algorithm has been developed into a free standing software application (GADA, Genome Alteration Detection Algorithm). Availability: http://biron.usc.edu/~piquereg/GADA Contact: jpei@chop.swmed.edu and rpique@ieee.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18203770
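
    The piecewise-constant idea can be sketched compactly: the probe profile is modeled as a sparse combination of step functions, so non-zero weights mark breakpoints. The paper infers the weights with sparse Bayesian learning followed by backward elimination; the hedged sketch below substitutes a plain lasso on a synthetic profile, so the basis, penalty, and threshold are illustrative only.

```python
# Hedged sketch of PWC breakpoint detection with a sparse step-function basis.
# A lasso stands in for the paper's sparse Bayesian learning + backward elimination.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m = 200                                           # probes along the chromosome
truth = np.zeros(m)
truth[60:120] += 1.0                              # one gained segment
y = truth + 0.2 * rng.normal(size=m)              # noisy log-ratio intensities

# Step basis: column j is 0 before probe j and 1 from probe j onwards.
F = np.tril(np.ones((m, m)))

fit = Lasso(alpha=0.02, fit_intercept=True, max_iter=20000)
fit.fit(F, y)
breakpoints = np.flatnonzero(np.abs(fit.coef_) > 0.1)
print("estimated breakpoints near probes:", breakpoints)   # expect ~60 and ~120
```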

  1. Combining sparse coding and time-domain features for heart sound classification.

    PubMed

    Whitaker, Bradley M; Suresha, Pradyumna B; Liu, Chengyu; Clifford, Gari D; Anderson, David V

    2017-07-31

    This paper builds upon work submitted as part of the 2016 PhysioNet/CinC Challenge, which used sparse coding as a feature extraction tool on audio PCG data for heart sound classification. In sparse coding, preprocessed data is decomposed into a dictionary matrix and a sparse coefficient matrix. The dictionary matrix represents statistically important features of the audio segments. The sparse coefficient matrix is a mapping that represents which features are used by each segment. Working in the sparse domain, we train support vector machines (SVMs) for each audio segment (S1, systole, S2, diastole) and the full cardiac cycle. We train a sixth SVM to combine the results from the preliminary SVMs into a single binary label for the entire PCG recording. In addition to classifying heart sounds using sparse coding, this paper presents two novel modifications. The first uses a matrix norm in the dictionary update step of sparse coding to encourage the dictionary to learn discriminating features from the abnormal heart recordings. The second combines the sparse coding features with time-domain features in the final SVM stage. The original algorithm submitted to the challenge achieved a cross-validated mean accuracy (MAcc) score of 0.8652 (Se = 0.8669 and Sp = 0.8634). After incorporating the modifications new to this paper, we report an improved cross-validated MAcc of 0.8926 (Se = 0.9007 and Sp = 0.8845). Our results show that sparse coding is an effective way to define spectral features of the cardiac cycle and its sub-cycles for the purpose of classification. In addition, we demonstrate that sparse coding can be combined with additional feature extraction methods to improve classification accuracy.
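
    A much-simplified, hedged sketch of the feature pipeline is given below: fixed-length PCG segments are sparse-coded over a learned dictionary, the codes are pooled per recording, simple time-domain statistics are appended, and a single SVM is trained. The challenge data, the S1/systole/S2/diastole segmentation, the discriminative dictionary update, and the six-SVM architecture are not reproduced; inputs are synthetic stand-ins.

```python
# Hedged sketch: pooled sparse-coding features + time-domain features -> SVM.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import SVC

rng = np.random.default_rng(0)
seg_len, n_rec = 64, 40
segments = rng.normal(size=(n_rec, 10, seg_len))   # 10 segments per recording
labels = rng.integers(0, 2, size=n_rec)            # 0 = normal, 1 = abnormal

dico = MiniBatchDictionaryLearning(n_components=32, random_state=0)
dico.fit(segments.reshape(-1, seg_len))
D = dico.components_

def recording_features(segs):
    codes = sparse_encode(segs, D, algorithm='omp', n_nonzero_coefs=4)
    sparse_feat = np.abs(codes).mean(axis=0)        # pooled sparse-coding features
    time_feat = [segs.std(), np.abs(segs).mean()]   # simple time-domain features
    return np.concatenate([sparse_feat, time_feat])

X = np.array([recording_features(s) for s in segments])
clf = SVC(kernel='rbf').fit(X, labels)
print("training accuracy (toy data):", clf.score(X, labels))
```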

  2. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-03-25

    Segmentation of the arterial wall boundaries from intravascular ultrasound (IVUS) images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A no-reference perceptual blurriness metric based fast super-resolution of still pictures using sparse representation

    NASA Astrophysics Data System (ADS)

    Choi, Jae-Seok; Bae, Sung-Ho; Kim, Munchurl

    2015-03-01

    In recent years, perceptually driven super-resolution (SR) methods have been proposed to lower computational complexity. Furthermore, sparse representation based super-resolution is known to produce competitive high-resolution images with lower computational costs compared to other SR methods. Nevertheless, super-resolution is still difficult to implement with substantially low processing power for real-time applications. In order to speed up the processing time of SR, much effort has been made to develop efficient methods, which selectively apply elaborate computation to perceptually sensitive image regions based on a metric such as just noticeable distortion (JND). Inspired by these previous works, we propose a novel fast super-resolution method with sparse representation, which incorporates a no-reference just noticeable blur (JNB) metric. That is, the proposed fast super-resolution method efficiently generates super-resolution images by selectively applying a sparse representation method to perceptually sensitive image areas detected based on the JNB metric. Experimental results show that our JNB-based fast super-resolution method is about 4 times faster than a non-perceptual sparse representation based SR method for 256 × 256 test LR images. Compared to a JND-based SR method, the proposed fast JNB-based SR method is about 3 times faster, with approximately 0.1 dB higher PSNR and a slightly higher SSIM value on average. This indicates that our proposed perceptual JNB-based SR method generates high-quality SR images with much lower computational costs, opening a new possibility for real-time hardware implementations.

  4. Semi-Supervised Sparse Representation Based Classification for Face Recognition With Insufficient Labeled Samples

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Ma, Jiayi; Yuille, Alan L.

    2017-05-01

    This paper addresses the problem of face recognition when there are only a few, or even only a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S3RC). This is based on recent work on sparsity where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, and then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have performed experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method is able to deliver significantly improved performance over existing methods.

  5. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, multiform ornaments, or even altered mental status. The limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the obstacles to improving face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image that expands the training set, and devise a representation-based method to perform face recognition. The generated virtual samples reflect possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more essential information about the face is retained. Also, the uncertainty of the training data is reduced as the number of training samples increases, which is beneficial for the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform the classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
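
    A hedged sketch of the scheme described above is given below, with random stand-in images: virtual samples are generated as the element-wise product of two training images of the same subject, the K nearest (original plus virtual) samples are selected by Euclidean distance, and the test image is assigned to the class with the smallest reconstruction residual of the linear combination. Dimensions, K, and the data are illustrative only.

```python
# Hedged sketch: virtual samples by element-wise products + K-nearest
# representation-based classification on random stand-in face images.
import numpy as np

rng = np.random.default_rng(0)
d, n_class, per_class, K = 256, 5, 4, 8
train = rng.random((n_class, per_class, d))          # flattened face images

# Expand the training set with products of image pairs from the same subject.
samples, labels = [], []
for c in range(n_class):
    for i in range(per_class):
        samples.append(train[c, i])
        labels.append(c)
        for j in range(i + 1, per_class):
            samples.append(train[c, i] * train[c, j])
            labels.append(c)
samples, labels = np.array(samples), np.array(labels)

def classify(test):
    near = np.argsort(np.linalg.norm(samples - test, axis=1))[:K]
    A = samples[near].T                              # d x K design matrix
    w, *_ = np.linalg.lstsq(A, test, rcond=None)     # combination weights
    residuals = {}
    for c in np.unique(labels[near]):
        mask = labels[near] == c
        residuals[c] = np.linalg.norm(test - A[:, mask] @ w[mask])
    return min(residuals, key=residuals.get)

test = train[2, 0] + 0.05 * rng.normal(size=d)
print("predicted subject:", classify(test))          # expect 2
```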

  6. Assessing the effects of cocaine dependence and pathological gambling using group-wise sparse representation of natural stimulus FMRI data.

    PubMed

    Ren, Yudan; Fang, Jun; Lv, Jinglei; Hu, Xintao; Guo, Cong Christine; Guo, Lei; Xu, Jiansong; Potenza, Marc N; Liu, Tianming

    2016-10-04

    Assessing functional brain activation patterns in neuropsychiatric disorders such as cocaine dependence (CD) or pathological gambling (PG) under naturalistic stimuli has received rising interest in recent years. In this paper, we propose and apply a novel group-wise sparse representation framework to assess differences in neural responses to naturalistic stimuli across multiple groups of participants (healthy control, cocaine dependence, pathological gambling). Specifically, natural stimulus fMRI (N-fMRI) signals from all three groups of subjects are aggregated into a big data matrix, which is then decomposed into a common signal basis dictionary and associated weight coefficient matrices via an effective online dictionary learning and sparse coding method. The coefficient matrices associated with each common dictionary atom are statistically assessed for each group separately. With the inter-group comparisons based on the group-wise correspondence established by the common dictionary, our experimental results demonstrated that the group-wise sparse coding and representation strategy can effectively and specifically detect brain networks/regions affected by different pathological conditions of the brain under naturalistic stimuli.
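
    The group-wise strategy can be sketched in a few lines: signals from all groups are stacked into one matrix, a common dictionary is learned with an online (mini-batch) learner, and the coefficient loadings of each atom are compared across groups. The hedged sketch below uses synthetic time series, two groups instead of three, and a naive per-atom t-test without multiple-comparison correction.

```python
# Hedged sketch of group-wise sparse coding with a common dictionary and a
# per-atom statistical comparison of the coefficient matrices (synthetic data).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
t_len, n_sig = 100, 300                        # time points, signals per group
group_a = rng.normal(size=(n_sig, t_len))      # e.g. healthy controls
group_b = rng.normal(size=(n_sig, t_len))      # e.g. a patient group

# Common dictionary learned from the aggregated data of both groups.
dico = MiniBatchDictionaryLearning(n_components=20,
                                   transform_algorithm='lasso_lars',
                                   transform_alpha=0.5, random_state=0)
codes = dico.fit_transform(np.vstack([group_a, group_b]))
codes_a, codes_b = codes[:n_sig], codes[n_sig:]

# Per-atom group comparison of the weight coefficients.
for atom in range(codes.shape[1]):
    t, p = ttest_ind(codes_a[:, atom], codes_b[:, atom])
    if p < 0.01:                               # naive threshold, no correction
        print(f"atom {atom}: group difference (t={t:.2f}, p={p:.3g})")
```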

  7. A sparse Bayesian representation for super-resolution of cardiac MR images.

    PubMed

    Velasco, Nelson F; Rueda, Andrea; Santa Marta, Cristina; Romero, Eduardo

    2017-02-01

    High-quality cardiac magnetic resonance (CMR) images can hardly be obtained when intrinsic noise sources are present, namely heart and breathing movements. Although heart images may be acquired in real time, the image quality is very limited, and most sequences use ECG gating to capture images at each stage of the cardiac cycle over several heart beats. This paper presents a novel super-resolution algorithm that improves cardiac image quality using a sparse Bayesian approach. The high-resolution version of the cardiac image is constructed by combining the information of the low-resolution series (observations from different non-orthogonal series composed of anisotropic voxels) with a prior distribution of the high-resolution local coefficients that enforces sparsity. In addition, a global prior, extracted from the observed data, regularizes the solution. Quantitative and qualitative validations were performed on synthetic and real images with respect to a baseline, showing an average increase of between 2.8 and 3.2 dB in the Peak Signal-to-Noise Ratio (PSNR), between 1.8% and 2.6% in the Structural Similarity Index (SSIM), and 2% to 4% in quality assessment (IL-NIQE). The obtained results demonstrate that the proposed method is able to accurately reconstruct a cardiac image, recovering the original shape with fewer artifacts and low noise. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    DOEpatents

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2015-07-28

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  9. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    DOEpatents

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2016-10-25

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  10. Hyperspectral Image Super-resolution via Non-negative Structured Sparse Representation.

    PubMed

    Dong, Weisheng; Fu, Fazuo; Shi, Guangming; Cao, Xun; Wu, Jinjian; Li, Guangyu; Li, Xin

    2016-03-22

    Hyperspectral imaging has many applications, from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and a HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, sparse codes of the desired HR hyperspectral image with respect to the learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. Experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency.

  11. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation.

    PubMed

    Dong, Weisheng; Fu, Fazuo; Shi, Guangming; Cao, Xun; Wu, Jinjian; Li, Guangyu; Li, Xin

    2016-05-01

    Hyperspectral imaging has many applications from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and a HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. The experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency.

  12. Sparse Representation of Multimodality Sensing Databases for Data Mining and Retrieval

    DTIC Science & Technology

    2015-04-09

    Key elements include information-theoretic similarity measures for pairwise matching, and hierarchical similarity-based clustering and database updating. Information-theoretic measures, sparse approximation, and dimensionality reduction play key roles in the approach.

  13. Making group inferences using sparse representation of resting-state functional MRI data with application to sleep deprivation.

    PubMed

    Shen, Hui; Xu, Huaze; Wang, Lubin; Lei, Yu; Yang, Liu; Zhang, Peng; Qin, Jian; Zeng, Ling-Li; Zhou, Zongtan; Yang, Zheng; Hu, Dewen

    2017-09-01

    Past studies on drawing group inferences for functional magnetic resonance imaging (fMRI) data usually assume that a brain region is involved in only one functional brain network. However, recent evidence has demonstrated that some brain regions might simultaneously participate in multiple functional networks. Here, we presented a novel approach for making group inferences using sparse representation of resting-state fMRI data and its application to the identification of changes in functional networks in the brains of 37 healthy young adult participants after 36 h of sleep deprivation (SD) in contrast to the rested wakefulness (RW) stage. Our analysis based on group-level sparse representation revealed that multiple functional networks involved in memory, emotion, attention, and vigilance processing were impaired by SD. Of particular interest, the thalamus was observed to contribute to multiple functional networks in which differentiated response patterns were exhibited. These results not only further elucidate the impact of SD on brain function but also demonstrate the ability of the proposed approach to provide new insights into the functional organization of the resting-state brain by permitting spatial overlap between networks and facilitating the description of the varied relationships of the overlapping regions with other regions of the brain in the context of different functional systems. Hum Brain Mapp 38:4671-4689, 2017. © 2017 Wiley Periodicals, Inc.

  14. Characterizing and Differentiating Task-based and Resting State FMRI Signals via Two-stage Sparse Representations

    PubMed Central

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2015-01-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data-matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big-data of tfMRI and rsfMRI signals across all subjects in HCP Q1 release. PMID:25732072

  15. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data-matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big-data of tfMRI and rsfMRI signals across all subjects in HCP Q1 release.

  16. Machinery vibration signal denoising based on learned dictionary and sparse representation

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-07-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods; however, these methods rely on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal itself. In order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the most relevant columns of the dictionary, and the denoised signal is then calculated from the sparse coefficient vector and the learned dictionary. A simulated signal and a real bearing fault signal are utilized to evaluate the improved performance of the proposed method through comparison with several denoising algorithms, and its computational efficiency is demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms while remaining computationally efficient.
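
    The denoising loop reads naturally as a short sketch: learn a dictionary from overlapping frames of the raw signal, sparse-code each frame with orthogonal matching pursuit, and rebuild the signal by overlap-add of the reconstructed frames. In the hedged example below, scikit-learn's mini-batch learner approximates the online dictionary update, and the signal, frame length, and sparsity level are illustrative.

```python
# Hedged sketch: dictionary-learning denoising of a vibration-like signal
# via OMP sparse coding of overlapping frames and overlap-add reconstruction.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + 0.4 * rng.normal(size=t.size)

frame, hop = 64, 16
starts = np.arange(0, noisy.size - frame + 1, hop)
frames = np.stack([noisy[s:s + frame] for s in starts])

dico = MiniBatchDictionaryLearning(n_components=48, random_state=0).fit(frames)
codes = sparse_encode(frames, dico.components_, algorithm='omp', n_nonzero_coefs=4)
recon_frames = codes @ dico.components_

# Overlap-add, averaging the overlapping reconstructions.
denoised = np.zeros_like(noisy)
weight = np.zeros_like(noisy)
for s, f in zip(starts, recon_frames):
    denoised[s:s + frame] += f
    weight[s:s + frame] += 1.0
denoised /= np.maximum(weight, 1.0)

print("residual std before/after:",
      np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```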

  17. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.

  18. A tight and explicit representation of Q in sparse QR factorization

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1992-05-01

    In the QR factorization of a sparse m × n matrix A (m ≥ n), the orthogonal factor Q is often stored implicitly as a lower trapezoidal matrix H known as the Householder matrix. This paper presents a simple characterization of the row structure of Q, which could be used as the basis for a sparse data structure that can store Q explicitly. The new characterization is a simple extension of a well-known row-oriented characterization of the structure of H. Hare, Johnson, Olesky, and van den Driessche have recently provided a complete sparsity analysis of the QR factorization. Let U be the matrix consisting of the first n columns of Q. Using their results, we show that the data structures for H and U resulting from our characterizations are tight when A is a strong Hall matrix. We also show that H and the lower trapezoidal part of U have the same sparsity characterization when A is strong Hall. We then show that this characterization can be extended to any weak Hall matrix that has been permuted into block upper triangular form. Finally, we show that permuting to block triangular form never increases the fill incurred during the factorization.

  19. Harnessing data structure for recovery of randomly missing structural vibration responses time history: Sparse representation versus low-rank structure

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2016-06-01

    Randomly missing data in structural vibration response time histories often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Besides, to alleviate the wireless data sampling or communication burden, a certain amount of data is often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the vibration response data itself to address this inverse problem. What is relevant is an empirical, but often practically true, observation: typically only a few modes are active in the structural vibration responses; hence the single-channel data vector has a sparse representation in the frequency domain, and the multi-channel data matrix has a low-rank structure (by singular value decomposition). Exploiting such prior knowledge of the data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on several structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
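
    Only the intra-channel sparse route is sketched below, in a hedged form: a response with a few active spectral lines is synthesized from a DCT basis, roughly half of the samples are dropped at random, and the spectrum is recovered from the observed samples with a lasso standing in for the ℓ1-minimization solver; the low-rank, multi-channel alternative is not shown.

```python
# Hedged sketch: recovery of randomly missing samples of a frequency-sparse
# response via l1-regularized reconstruction of its DCT spectrum.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 512
Psi = idct(np.eye(n), axis=0, norm='ortho')       # columns: DCT basis vectors

s_true = np.zeros(n)
s_true[[15, 60]] = [1.0, 0.6]                     # two active spectral lines (modes)
x = Psi @ s_true                                  # full-length response

keep = rng.random(n) > 0.5                        # ~50% of samples randomly missing

fit = Lasso(alpha=0.005, fit_intercept=False, max_iter=50000)
fit.fit(Psi[keep], x[keep])                       # sparse spectrum from observed rows
x_hat = Psi @ fit.coef_                           # reconstruct the full response

err = np.linalg.norm(x_hat[~keep] - x[~keep]) / np.linalg.norm(x[~keep])
print(f"relative error on the missing samples: {err:.3f}")
```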

  20. A Non-destructive Terahertz Spectroscopy-Based Method for Transgenic Rice Seed Discrimination via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Hu, Xiaohua; Lang, Wenhui; Liu, Wei; Xu, Xue; Yang, Jianbo; Zheng, Lei

    2017-08-01

    The terahertz (THz) spectroscopy technique has been researched and developed for the rapid and non-destructive detection of food safety and quality due to its low-energy and non-ionizing characteristics. The objective of this study was to develop a flexible identification model to discriminate transgenic and non-transgenic rice seeds based on THz spectroscopy. To extract THz spectral features and reduce the feature dimension, sparse representation (SR) is employed in this work. A sufficient sparsity level is selected to train the sparse coding of the THz data, and the random forest (RF) method is then applied to obtain a discrimination model. The results show that there exist differences between transgenic and non-transgenic rice seeds in the THz spectral band and that, compared with the least-squares support vector machine (LS-SVM) method, SR-RF is a better model for discrimination (accuracy of 95% on the prediction set and 100% on the calibration set). The conclusion is that SR may be useful in the application of THz spectroscopy to reduce dimensionality, and that SR-RF provides a new, effective, and flexible method for the detection and identification of transgenic and non-transgenic rice seeds with a THz spectral system.

  1. Sparse representation of signals: from astrophysics to real-time data analysis for fusion plasmas and system optimization analysis for ITER and TCV

    NASA Astrophysics Data System (ADS)

    Testa, D.; Carfantan, H.; Albergante, M.; Blanchard, P.; Bourguignon, S.; Fasoli, A.; Goodyear, A.; Klein, A.; Lister, J. B.; Panis, T.; Contributors, JET

    2016-12-01

    Efficient, real-time and automated data analysis is one of the key elements for achieving scientific success in complex engineering and physical systems, two examples of which include the JET and ITER tokamaks. One problem which is common to these fields is the determination of the pulsation modes from an irregularly sampled time series. To this end, there are a wealth of signal processing techniques that are being applied to post-pulse and real-time data analysis in such complex systems. Here, we wish to present a review of the applications of a method based on the sparse representation of signals, using examples of the synergies that can be exploited when combining ideas and methods from very different fields, such as astronomy, astrophysics and thermonuclear fusion plasmas. Examples of this work in astronomy and astrophysics are the analysis of pulsation modes in various classes of stars and the orbit determination software of the Pioneer spacecraft. Two examples of this work in thermonuclear fusion plasmas include the detection of magneto-hydrodynamic instabilities, which is now performed routinely in JET in real-time on a sub-millisecond time scale, and the studies leading to the optimization of the magnetic diagnostic system in ITER and TCV. These questions have been solved by formulating them as inverse problems, despite the fact that these applicative frameworks are extremely different from the classical use of sparse representations, from both the theoretical and computational point of view. The requirements, prospects and ideas for the signal processing and real-time data analysis applications of this method to the routine operation of ITER will also be discussed. Finally, a very recent development has been the attempt to apply this method to the deconvolution of the measurement of electric potential performed during a ground-based survey of a proto-Villanovian necropolis in central Italy.

  2. Dual Temporal and Spatial Sparse Representation for Inferring Group-wise Brain Networks from Resting-state fMRI Dataset.

    PubMed

    Gong, Junhui; Liu, Xiaoyan; Liu, Tianming; Zhou, Jiansong; Sun, Gang; Tian, Juanxiu

    2017-08-09

    Recently, sparse representation has been successfully used to identify brain networks from task-based fMRI datasets. However, when using this strategy to analyze resting-state fMRI datasets, it is still a challenge to automatically infer the group-wise brain networks while accounting for both group commonalities and subject-specific characteristics. In this paper, a novel method based on dual temporal and spatial sparse representation (DTSSR) is proposed to meet this challenge. Firstly, the brain functional networks (BFNs) with subject-specific characteristics are obtained via sparse representation with online dictionary learning for the fMRI time series (temporal domain) of each subject. Next, based on current brain science knowledge, a simple mathematical model is proposed to describe the complex nonlinear dynamic coupling mechanism of the brain networks, with which the group-wise intrinsic connectivity networks (ICNs) can be inferred by sparse representation for these brain functional networks (spatial domain) of all subjects. Experiments on the Leiden_2180 dataset show that most group-wise ICNs obtained by the proposed DTSSR are interpretable with current brain science knowledge and are consistent with previous literature reports. The robustness of DTSSR and the reproducibility of the results are demonstrated by experiments on three different datasets (Leiden_2180, Leiden_2200 and our own dataset). The results of the present work shed new light on exploring the coupling mechanism of BFNs from the perspective of information science.

  3. Temperature variation effects on sparse representation of guided-waves for damage diagnosis in pipelines

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2015-04-01

    Multiple ultrasonic guided-wave modes propagating along a pipe travel with different velocities, which are themselves a function of frequency. Reflections from features of the structure (e.g., boundaries, pipe welds, damage), and their complex superposition, add to the complexity of guided waves. Guided-wave based damage diagnosis of pipelines becomes even more challenging when environmental and operational conditions (EOCs) vary (e.g., temperature, flow rate, inner pressure). These complexities make guided-wave based damage diagnosis of operating pipelines a challenging task. This paper reviews the approaches to date addressing these challenges, and highlights the preferred characteristics of a method that simplifies guided-wave signals for damage diagnosis purposes. A method is proposed to extract a sparse subset of guided-wave signals in the time domain, while retaining optimal damage information for detection purposes. In this paper, the general concept of this method is demonstrated through an extensive set of experiments. The effects of temperature variation on the detection performance of the proposed method, and on the discriminatory power of the extracted damage-sensitive features, are investigated. The potential of the proposed method for real-time damage detection is illustrated for a wide range of temperature variation scenarios (i.e., the temperature difference between training and test data varying between -2°C and 13°C).

  4. Automatic Myonuclear Detection in Isolated Single Muscle Fibers Using Robust Ellipse Fitting and Sparse Representation

    PubMed Central

    Su, Hai; Xing, Fuyong; Lee, Jonah D.; Peterson, Charlotte A.; Yang, Lin

    2015-01-01

    Accurate and robust detection of myonuclei in isolated single muscle fibers is required to calculate myonuclear domain size. However, this task is challenging because of: 1) shape and size variations of the nuclei; 2) overlapping nuclear clumps; and 3) multiple z-stack images with out-of-focus regions. In this paper, we propose a novel automatic detection algorithm to robustly quantify myonuclei in isolated single skeletal muscle fibers. The original z-stack images are first converted into one all-in-focus image using multi-focus image fusion. A sufficient number of ellipse fitting hypotheses are then generated from the myonuclei contour segments using heteroscedastic errors-in-variables (HEIV) regression. A set of representative training samples and a set of discriminative features are selected by a two-stage sparse model. The selected samples with representative features are utilized to train a classifier to select the best candidates. A modified inner geodesic distance based mean-shift clustering algorithm is used to produce the final nuclei detection results. The proposed method was extensively tested using 42 sets of z-stack images containing over 1,500 myonuclei. The method demonstrates excellent results that are better than those of current state-of-the-art approaches. PMID:26356342

  5. Turbulent heat transfer from a sparsely vegetated surface - Two-component representation

    NASA Technical Reports Server (NTRS)

    Otterman, J.; Novak, M. D.; Starr, D. O'C.

    1993-01-01

    The conventional calculation of heat fluxes from a vegetated surface, involving a coefficient of turbulent heat transfer that increases logarithmically with surface roughness, is inappropriate for such highly structured surfaces as desert scrub or open forest. An approach is developed here for computing sensible heat flux from sparsely vegetated surfaces, where the absorption of insolation and the transfer of absorbed heat to the atmosphere are calculated separately for the plants and for the soil. This approach is applied to a desert-scrub surface in the northern Sinai, for which the turbulent transfer coefficient of sensible heat flux from the plants is much larger than that from the soil below, as shown by an analysis of plant, soil, and air temperatures. The plant density is expressed as the sum of products (plant-height) x (plant-diameter) of plants per unit horizontal surface area. The solar heat absorbed by the plants is assumed to be transferred immediately to the airflow. The effective turbulent transfer coefficient k(g-eff) for sensible heat from the desert-scrub/soil surface computed under this assumption increases sharply with increasing solar zenith angle, as the plants absorb a greater fraction of the incoming irradiation. The surface absorptivity (the coalbedo) also increases sharply with increasing solar zenith angle, and thus the sensible heat flux from such complex surfaces is a much broader function of time of day than when computed under constant k(g-eff) and constant albedo assumptions.

  6. Concept Abstractness and the Representation of Noun-Noun Combinations

    ERIC Educational Resources Information Center

    Xu, Xu; Paulson, Lisa

    2013-01-01

    Research on noun-noun combinations has largely focused on concrete concepts. Three experiments examined the role of concept abstractness in the representation of noun-noun combinations. In Experiment 1, participants provided written interpretations for phrases constituted by nouns of varying degrees of abstractness. Interpretive focus (the…

  8. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation, proposed in [1], not only has the advantages of RBC but is also computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and the training samples. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Indeed, the analysis shows that the matching scores generated from direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
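
    The abstract does not specify the exact fusion rule, so the sketch below (Python/NumPy, illustrative names throughout) combines the two score sets by min-max normalization and a weighted sum, which is one common score-level fusion choice; the collaborative-representation part follows the standard regularized least-squares formulation often referred to as CRC-RLS.

```python
import numpy as np

def crc_scores(A, labels, y, lam=1e-3):
    """Class-wise residuals from collaborative representation (regularized least squares).
    A: (d, n) training samples as columns, labels: (n,) class labels, y: (d,) test sample."""
    n = A.shape[1]
    rho = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)   # collaborative code
    scores = {}
    for c in np.unique(labels):
        idx = labels == c
        # residual of reconstructing y from class-c atoms only (CRC-RLS style normalization)
        scores[c] = np.linalg.norm(y - A[:, idx] @ rho[idx]) / (np.linalg.norm(rho[idx]) + 1e-12)
    return scores

def direct_match_scores(A, labels, y):
    """Per-class minimum Euclidean distance between the test sample and the training samples."""
    return {c: float(np.min(np.linalg.norm(A[:, labels == c] - y[:, None], axis=0)))
            for c in np.unique(labels)}

def fused_label(A, labels, y, w=0.5):
    """Hypothetical fusion rule: min-max normalize both score sets and take a weighted sum."""
    s1, s2 = crc_scores(A, labels, y), direct_match_scores(A, labels, y)

    def norm(s):
        lo, hi = min(s.values()), max(s.values())
        return {c: (v - lo) / (hi - lo + 1e-12) for c, v in s.items()}

    n1, n2 = norm(s1), norm(s2)
    return min(n1, key=lambda c: w * n1[c] + (1 - w) * n2[c])
```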

  9. Automatic approach to solve the morphological galaxy classification problem using the sparse representation technique and dictionary learning

    NASA Astrophysics Data System (ADS)

    Diaz-Hernandez, R.; Ortiz-Esquivel, A.; Peregrina-Barreto, H.; Altamirano-Robles, L.; Gonzalez-Bernal, J.

    2016-06-01

    The observation of celestial objects in the sky is a practice that helps astronomers understand the way in which the Universe is structured. However, due to the large number of objects observed with modern telescopes, analyzing them by hand is a difficult task. An important part of galaxy research is morphological structure classification based on the Hubble sequence. In this research, we present an approach to solve the morphological galaxy classification problem in an automatic way by using the Sparse Representation technique and dictionary learning with K-SVD. For the tests in this work, we use a database of galaxies extracted from the Principal Galaxy Catalog (PGC) and the APM Equatorial Catalogue of Galaxies, obtaining a total of 2403 useful galaxies. In order to represent each galaxy frame, we propose to calculate a set of 20 features such as Hu's invariant moments, galaxy nucleus eccentricity, the Gabor galaxy ratio and some other features commonly used in galaxy classification. A feature relevance analysis was performed using Relief-f in order to determine the best parameters for the classification tests using 2, 3, 4, 5, 6 and 7 galaxy classes, building signal vectors of different lengths from the most important features. For the classification task, we use a 20-random cross-validation technique to evaluate classification accuracy with all signal sets, achieving a score of 82.27% for 2 galaxy classes, decreasing to 44.27% for 7 galaxy classes.

  10. Application of Unsupervised Clustering using Sparse Representations on Learned Dictionaries to develop Land Cover Classifications in Arctic Landscapes

    NASA Astrophysics Data System (ADS)

    Rowland, J. C.; Moody, D. I.; Brumby, S.; Gangodagamage, C.

    2012-12-01

    Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. Successful application of novel unsupervised feature extraction and clustering algorithms for use in Land Cover Classification requires the ability to determine what landscape attributes are represented by the automated clustering. A closely related challenge is learning how to precondition the input data streams to the unsupervised classification algorithms in order to obtain clusters that represent Land Cover categories of relevance to land-surface change and modeling applications. We present results from an ongoing effort to apply novel clustering methodologies, developed primarily for neuroscience machine vision applications, to the environmental sciences. We use a Hebbian learning rule to build spectral-textural dictionaries that are adapted to the data. We learn our dictionaries from millions of overlapping image patches and then use a pursuit search to generate sparse classification features. These sparse representations of pixel patches are used to perform unsupervised k-means clustering. In our application, we use 8-band multispectral Worldview-2 data from three arctic study areas: Barrow, Alaska; the Selawik River, Alaska; and a watershed near the Mackenzie River delta in northwest Canada. Our goal is to develop a robust classification methodology that will allow for the automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties (e.g. soil moisture and inundation), and topographic/geomorphic characteristics. The challenge of developing a meaningful land cover classification includes both learning how to optimize the clustering algorithm and successfully interpreting the results. In applying the unsupervised clustering, we have the flexibility of selecting both the window

  11. Image resolution enhancement using edge extraction and sparse representation in wavelet domain for real-time application

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Chavez-Roman, Herminio; Gonzalez-Huitron, Victor

    2014-05-01

    The paper presents the design and hardware implementation of a novel framework for image resolution enhancement in the wavelet domain. The principal idea of the resolution enhancement consists of using an edge-preservation procedure and mutual interpolation between the input low-resolution (LR) image and the HF sub-band images obtained via the Discrete Wavelet Transform (DWT). The LR image is used in the sparse representation for the resolution-enhancement process, employing 1-D interpolation along a set of angular directions; the new samples are then computed, estimating the missing ones, and the final pixels are obtained via Lanczos interpolation. To preserve more edge information, additional edge extraction in the HF sub-bands is performed on the DWT decomposition of the input image. The difference between the LL sub-band image and the LR input image is used to correct the HF components, generating a significantly sharper reconstructed image. All sub-band images are used to generate the new HR image by applying the inverse DWT (IDWT). Additionally, the novel framework employs a denoising procedure using Non-Local Means on the input LR image. An efficiency analysis of the designed and other state-of-the-art filters has been performed on the DSP TMS320DM648 by Texas Instruments through MATLAB's Simulink module and on a video card (NVIDIA® Quadro® K2000), showing that the novel SR procedure can be used in real-time processing applications. Experimental results have confirmed that the implemented framework outperforms existing SR algorithms in terms of objective criteria (PSNR, MAE and SSIM) as well as in subjective perception, justifying better image resolution.

  12. A cartoon-texture decomposition-based image deblurring model by using framelet-based sparse representation

    NASA Astrophysics Data System (ADS)

    Chen, Huasong; Qu, Xiangju; Jin, Ying; Li, Zhenhua; He, Anzhi

    2016-10-01

    Image deblurring is a fundamental problem in image processing. Conventional methods often deal with the degraded image as a whole, ignoring that an image contains two different components: cartoon and texture. Recently, total variation (TV) based image decomposition methods have been introduced into the image deblurring problem. However, these methods often suffer from the well-known stair-casing effects of TV. In this paper, a new cartoon-texture-based sparsity regularization method is proposed for non-blind image deblurring. Based on image decomposition, it regularizes the cartoon component with a combined term comprising a framelet-domain sparse prior and a quadratic regularization, and the texture component with sparsity in the discrete cosine transform domain. An adaptive alternating split Bregman iteration is then proposed to solve the new multi-term sparsity regularization model. Experimental results demonstrate that our method can recover both the cartoon and the texture of images simultaneously, and therefore improves the visual effect, the PSNR and the SSIM of the deblurred image more effectively than TV-based and undecomposed methods.

  13. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment

    PubMed Central

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalogram (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze EEG signals in epilepsy, cognitive impairment and brain-computer interface (BCI) applications, and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance and generalization ability, as well as a dependence on labeled samples, in the analysis of EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376

  14. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment.

    PubMed

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalogram (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze EEG signals in epilepsy, cognitive impairment and brain-computer interface (BCI) applications, and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance and generalization ability, as well as a dependence on labeled samples, in the analysis of EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals.
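
    As a concrete reference point for the SRC scheme reviewed above, the following sketch implements the baseline classifier: code a test signal over a dictionary built from labeled training signals, then pick the class with the smallest class-restricted reconstruction residual. Orthogonal matching pursuit is used as the sparse solver purely for brevity; the reviewed methods differ in dictionary construction and solver, and all names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(D, labels, y, n_nonzero=10):
    """Baseline sparse representation-based classification (SRC).
    D: (d, n) dictionary whose columns are labeled training signals (e.g., EEG features),
    labels: (n,) class label per column, y: (d,) test signal."""
    Dn = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # unit-norm atoms
    x = orthogonal_mp(Dn, y, n_nonzero_coefs=n_nonzero)          # sparse code of y
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)                       # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - Dn @ xc)               # reconstruction criterion
    return min(residuals, key=residuals.get)
```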

  15. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    Epilepsy is the most common and frequently occurring neurological disorder, and the main method for its diagnosis is electroencephalogram (EEG) signal analysis. Due to the length of EEG recordings, EEG signal analysis is quite time-consuming when performed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using a Sparse Representation Classifier (SRC). The SRC is used to classify epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  16. Concept abstractness and the representation of noun-noun combinations.

    PubMed

    Xu, Xu; Paulson, Lisa

    2013-10-01

    Research on noun-noun combinations has largely focused on concrete concepts. Three experiments examined the role of concept abstractness in the representation of noun-noun combinations. In Experiment 1, participants provided written interpretations for phrases constituted by nouns of varying degrees of abstractness. The interpretive focus (the modifier, the head noun, or both) for each noun-noun phrase was evaluated. In Experiment 2, a different group of participants directly indicated which noun, the modifier or the head noun, was the focus of the meaning of each noun-noun phrase. Finally, participants in Experiment 3 assessed the similarity of noun-noun phrases with overlapping modifiers. If the abstractness of constituent nouns affects the information distribution between modifier and head noun, it would accordingly affect similarity ratings for noun-noun phrases with overlapping modifiers. Results showed that the pattern of information distribution within a noun-noun phrase differed between phrases with abstract head nouns and phrases with concrete head nouns. Specifically, the center of representation was closer to the head noun for phrases with concrete head nouns, whereas the center seemed to shift toward the modifier for phrases with abstract head nouns. Some issues related to the effects of concept abstractness on noun-noun combinations are discussed.

  17. Sex Education Representations in Spanish Combined Biology and Geology Textbooks

    NASA Astrophysics Data System (ADS)

    García-Cabeza, Belén; Sánchez-Bello, Ana

    2013-07-01

    Sex education is principally dealt with as part of the combined subject of Biology and Geology in the Spanish school curriculum. Teachers of this subject are not specifically trained to teach sex education, and thus the contents of their assigned textbooks are the main source of information available to them in this field. The main goal of this study was to determine what information Biology and Geology textbooks provide with regard to sex education and the vision of sexuality they give, but above all to reveal which perspectives of sex education they legitimise and which they silence. We analysed the textbooks in question by interpreting both visual and textual representations, as a means of enabling us to investigate the nature of the discourse on sex education. With this aim, we used a qualitative methodology based on content analysis. The main analytical tool was an in-house grid constructed to allow us to analyse the visual and textual representations. Our analysis of the combined Biology and Geology textbooks for Secondary Year 3 revealed a tendency to reproduce models of sex education that fall within the framework of the more traditional discourses. Moreover, the results suggested that most of the sample chosen for this study takes a superficial, incomplete, incorrect or biased approach to sex education.

  18. Visual tracking via robust multitask sparse prototypes

    NASA Astrophysics Data System (ADS)

    Zhang, Huanlong; Hu, Shiqiang; Yu, Junyang

    2015-03-01

    Sparse representation has been applied to online subspace learning-based tracking problems. To handle partial occlusion effectively, some researchers introduce l1 regularization into principal component analysis (PCA) reconstruction. However, in these traditional tracking methods, the representation of each object observation is often viewed as an individual task, so the inter-relationship between PCA basis vectors is ignored. We propose a new online visual tracking algorithm with multitask sparse prototypes, which combines multitask sparse learning with PCA-based subspace representation. We first extend a visual tracking algorithm with sparse prototypes into the multitask learning framework to mine inter-relations between subtasks. Then, to avoid the problem that enforcing all subtasks to share the same structure may result in degraded tracking results, we impose group sparse constraints on the coefficients of the PCA basis vectors and element-wise sparse constraints on the error coefficients, respectively. Finally, we show that the proposed optimization problem can be effectively solved using the accelerated proximal gradient method with fast convergence. Experimental results compared with state-of-the-art tracking methods demonstrate that the proposed algorithm achieves favorable performance when the object undergoes partial occlusion, motion blur, and illumination changes.

  19. Index finger motor imagery EEG pattern recognition in BCI applications using dictionary cleaned sparse representation-based classification for healthy people

    NASA Astrophysics Data System (ADS)

    Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Fengkui; Liu, Feixiang

    2017-09-01

    Electroencephalogram (EEG)-based motor imagery (MI) brain-computer interface (BCI) has shown its effectiveness for the control of rehabilitation devices designed for large body parts of the patients with neurologic impairments. In order to validate the feasibility of using EEG to decode the MI of a single index finger and constructing a BCI-enhanced finger rehabilitation system, we collected EEG data during right hand index finger MI and rest state for five healthy subjects and proposed a pattern recognition approach for classifying these two mental states. First, Fisher's linear discriminant criteria and power spectral density analysis were used to analyze the event-related desynchronization patterns. Second, both band power and approximate entropy were extracted as features. Third, aiming to eliminate the abnormal samples in the dictionary and improve the classification performance of the conventional sparse representation-based classification (SRC) method, we proposed a novel dictionary cleaned sparse representation-based classification (DCSRC) method for final classification. The experimental results show that the proposed DCSRC method gives better classification accuracies than SRC and an average classification accuracy of 81.32% is obtained for five subjects. Thus, it is demonstrated that single right hand index finger MI can be decoded from the sensorimotor rhythms, and the feature patterns of index finger MI and rest state can be well recognized for robotic exoskeleton initiation.

  20. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers

    PubMed Central

    Guo, Qiang; Qi, Liangang

    2017-01-01

    When multiple types of interfering signals coexist, the performance of interference suppression methods based on the time and frequency domains degrades seriously, and techniques using an antenna array require a sufficiently large array and entail high hardware costs. To better combat multi-type interferences for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in an over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference-mitigation degrees of freedom (DoF) of the antenna array, but can also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal. PMID:28394290

  1. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers.

    PubMed

    Guo, Qiang; Qi, Liangang

    2017-04-10

    When multiple types of interfering signals coexist, the performance of interference suppression methods based on the time and frequency domains degrades seriously, and techniques using an antenna array require a sufficiently large array and entail high hardware costs. To better combat multi-type interferences for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in an over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference-mitigation degrees of freedom (DoF) of the antenna array, but can also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.
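
    For reference, the second-stage MPDR beamformer mentioned above is the standard minimum power distortionless response solution. With the usual notation (assumed here, not taken from the paper), a sample covariance matrix estimated from K array snapshots x[k] and a steering vector a(θ0) toward the GNSS satellite give the weights and beamformer output:

```latex
\mathbf{w}_{\mathrm{MPDR}}
  = \frac{\hat{\mathbf{R}}^{-1}\,\mathbf{a}(\theta_0)}
         {\mathbf{a}^{\mathsf H}(\theta_0)\,\hat{\mathbf{R}}^{-1}\,\mathbf{a}(\theta_0)},
\qquad
\hat{\mathbf{R}} = \frac{1}{K}\sum_{k=1}^{K}\mathbf{x}[k]\,\mathbf{x}^{\mathsf H}[k],
\qquad
y[k] = \mathbf{w}_{\mathrm{MPDR}}^{\mathsf H}\,\mathbf{x}[k].
```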

  2. Sparse Representation Based Frequency Detection and Uncertainty Reduction in Blade Tip Timing Measurement for Multi-Mode Blade Vibration Monitoring

    PubMed Central

    Pan, Minghao; Yang, Yongmin; Guan, Fengjiao; Hu, Haifeng; Xu, Hailong

    2017-01-01

    The accurate monitoring of blade vibration under operating conditions is essential in turbo-machinery testing. Blade tip timing (BTT) is a promising non-contact technique for the measurement of blade vibrations. However, BTT sampling data are inherently under-sampled and contaminated with several measurement uncertainties. How to recover the frequency spectra of blade vibrations by processing these under-sampled, biased signals is a bottleneck problem. A novel BTT signal processing method for alleviating measurement uncertainties in the recovery of multi-mode blade vibration frequency spectra is proposed in this paper. The method can be divided into four phases. First, a single measurement vector model is built by exploiting the fact that blade vibration signals are sparse in the frequency spectrum. Second, the uniqueness of the nonnegative sparse solution is studied in order to recover the vibration frequency spectrum. Third, typical sources of BTT measurement uncertainty are quantitatively analyzed. Finally, an improved vibration frequency spectrum recovery method is proposed to obtain a guaranteed sparse solution when measurement results are biased. Simulations and experiments are performed to prove the feasibility of the proposed method. The most outstanding advantage is that this method can prevent the recovered multi-mode vibration spectra from being affected by BTT measurement uncertainties without increasing the number of probes. PMID:28758952
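
    The single-measurement-vector idea above can be illustrated with a toy analogue (not the paper's model, and without its uncertainty analysis): build an overcomplete dictionary of sinusoids evaluated at irregular, under-sampled observation times and recover a sparse amplitude spectrum with an ℓ1-penalized fit. Frequencies, sample counts and the regularization weight below are arbitrary placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Irregular, under-sampled observation times (stand-ins for blade tip timing arrivals)
t = np.sort(rng.uniform(0.0, 1.0, size=40))
x = np.sin(2 * np.pi * 37.0 * t) + 0.6 * np.cos(2 * np.pi * 113.0 * t)
x += 0.05 * rng.standard_normal(t.size)

# Overcomplete dictionary of sine/cosine atoms on a dense frequency grid (well above
# the average-rate Nyquist limit of ~20 Hz for 40 samples per second)
freqs = np.arange(1.0, 200.0, 1.0)
D = np.hstack([np.sin(2 * np.pi * freqs[None, :] * t[:, None]),
               np.cos(2 * np.pi * freqs[None, :] * t[:, None])])

# Sparsity-promoting fit recovers an amplitude spectrum despite the under-sampling
coef = Lasso(alpha=0.02, max_iter=50000).fit(D, x).coef_
amp = np.hypot(coef[:freqs.size], coef[freqs.size:])
print(freqs[np.argsort(amp)[-4:]])   # the largest amplitudes cluster around 37 and 113 Hz
```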

  3. An energy-based sparse representation of ultrasonic guided-waves for online damage detection of pipelines under varying environmental and operational conditions

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2017-01-01

    This work addresses the main challenges in the real-world application of guided-waves for damage detection of pipelines, namely their complex nature and sensitivity to environmental and operational conditions (EOCs). The different propagation characteristics of the wave modes, their distinctive sensitivities to different types and ranges of EOCs, and to different damage scenarios, make the interpretation of diffuse-field guided-wave signals a challenging task. This paper proposes an unsupervised feature-extraction method for online damage detection of pipelines under varying EOCs. The objective is to simplify diffuse-field guided-wave signals to a sparse subset of the arrivals that contains the majority of the energy carried by the signal. We show that such a subset is less affected by EOCs compared to the complete time-traces of the signals. Moreover, it is shown that the effects of damage on the energy of this subset suppress those of EOCs. A set of signals from the undamaged state of a pipe are used as reference records. The reference dataset is used to extract the aforementioned sparse representation. During the monitoring stage, the sparse subset, representing the undamaged pipe, will not accurately reconstruct the energy of a signal from a damaged pipe. In other words, such a sparse representation of guided-waves is sensitive to the occurrence of damage. Therefore, the energy estimation errors are used as damage-sensitive features for damage detection purposes. A diverse set of experimental analyses are conducted to verify the hypotheses of the proposed feature-extraction approach, and to validate the detection performance of the damage-sensitive features. The empirical validation of the proposed method includes (1) detecting a structural abnormality in an aluminum pipe, under varying temperature at different ranges, (2) detecting multiple small damages of different types, at different locations, in a steel pipe, under varying temperature, (3) detecting a structural
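
    A much-simplified sketch of the idea described above (illustrative names, and only the energy bookkeeping rather than the full feature-extraction method): learn a sparse set of time indices that carry most of the energy of baseline records, then use how poorly that set explains a new record's energy as the damage-sensitive feature.

```python
import numpy as np

def learn_energy_support(reference_signals, k):
    """Sparse 'energy support': the k time samples that, on average, carry the most
    energy across baseline (undamaged) guided-wave records.
    reference_signals: (n_records, n_samples) array."""
    mean_energy = np.mean(np.square(reference_signals), axis=0)
    return np.argsort(mean_energy)[-k:]

def energy_estimation_error(signal, support):
    """Damage-sensitive feature: fraction of the record's energy NOT captured by the
    baseline support. Larger values suggest a change relative to the undamaged state."""
    total = float(np.sum(np.square(signal)))
    captured = float(np.sum(np.square(signal[support])))
    return (total - captured) / (total + 1e-12)

# usage sketch (hypothetical arrays):
#   support = learn_energy_support(baseline_records, k=200)
#   feature = energy_estimation_error(new_record, support)
#   flag damage when `feature` exceeds a threshold calibrated on held-out baseline data
```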

  4. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from the training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  5. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2016-01-01

    Positron emission tomography (PET) has been widely used in the clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between the multimodal MR images (or low-dose PET image) and the standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated iteratively to refine the prediction. In addition, a patch-selection-based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures.

  6. Double shrinking sparse dimension reduction.

    PubMed

    Zhou, Tianyi; Tao, Dacheng

    2013-01-01

    Learning tasks such as classification and clustering usually perform better and cost less (in time and space) on compressed representations than on the original data. Previous works mainly compress data via dimension reduction. In this paper, we propose "double shrinking" to compress image data in both dimensionality and cardinality via building either sparse low-dimensional representations or a sparse projection matrix for dimension reduction. We formulate the double shrinking model (DSM) as an ℓ1-regularized variance maximization with the constraint ‖x‖2 = 1, and develop a double shrinking algorithm (DSA) to optimize the DSM. DSA is a path-following algorithm that can build the whole solution path of locally optimal solutions at different sparsity levels. Each solution on the path is a "warm start" for searching for the next, sparser one. In each iteration of DSA, the direction, the step size, and the Lagrangian multiplier are deduced from the Karush-Kuhn-Tucker conditions. The magnitudes of trivial variables are shrunk and the importance of critical variables is simultaneously augmented along the selected direction with the determined step length. Double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can be combined with classification and clustering to boost their performance. The experimental results suggest that double shrinking produces efficient and effective data compression.
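
    The abstract only states that the DSM is an ℓ1-regularized variance maximization under a unit ℓ2-norm constraint; one consistent way to write such an objective, with X the centered data matrix of n points, x the projection direction and λ the sparsity weight (notation assumed, not taken from the paper), is:

```latex
\max_{\mathbf{x}} \;\; \mathbf{x}^{\top}\!\Big(\tfrac{1}{n}X^{\top}X\Big)\mathbf{x}
\;-\; \lambda \,\lVert \mathbf{x} \rVert_{1}
\qquad \text{subject to} \quad \lVert \mathbf{x} \rVert_{2} = 1 .
```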

  7. Effects of sparse sampling in combination with iterative reconstruction on quantitative bone microstructure assessment

    NASA Astrophysics Data System (ADS)

    Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas

    2017-03-01

    The trabecular bone microstructure is key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible way to reduce the exposure of patients is to sample fewer projection angles. This approach can be supported by advanced reconstruction algorithms, with their ability to achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.

  8. Comparison of Support-Vector Machine and Sparse Representation Using a Modified Rule-Based Method for Automated Myocardial Ischemia Detection

    PubMed Central

    Tseng, Yi-Li; Lin, Keng-Sheng; Jaw, Fu-Shan

    2016-01-01

    An automatic method is presented for detecting myocardial ischemia, which can be considered an early symptom of acute coronary events. Myocardial ischemia commonly manifests as ST- and T-wave changes in ECG signals. The methods in this study are proposed to detect abnormal ECG beats using knowledge-based features and classification methods. A novel classification method, sparse representation-based classification (SRC), is introduced to improve the performance of the existing algorithms. A comparison was made between two classification methods, SRC and the support-vector machine (SVM), using rule-based vectors as the input feature space. The two methods are evaluated quantitatively to validate their performance. The SRC method combined with rule-based features demonstrates higher sensitivity than the SVM; however, specificity and precision are a trade-off. Moreover, the SRC method is less dependent on the selection of rule-based features and can achieve high performance using fewer features. The overall performance of the two methods proposed in this study is better than that of previous methods. PMID:26925158

  9. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the use of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method yields an overall reconstruction error of 6.22%.
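
    A minimal sketch of the modeling step described above, assuming the usual BMFLC form y(t) ≈ Σ_k a_k sin(2πf_k t) + b_k cos(2πf_k t) over a band of candidate frequencies, with the coefficients of one analysis window estimated by an ℓ1-penalized (sparse) linear regression instead of the LMS/Kalman updates of earlier designs. The band limits, grid spacing and penalty weight are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bmflc_dictionary(t, f_lo, f_hi, df):
    """Band-limited bank of sine/cosine regressors (the BMFLC basis)."""
    freqs = np.arange(f_lo, f_hi + df, df)
    G = np.hstack([np.sin(2 * np.pi * freqs[None, :] * t[:, None]),
                   np.cos(2 * np.pi * freqs[None, :] * t[:, None])])
    return G, freqs

def sparse_bmflc_window(y, t, f_lo=2.0, f_hi=12.0, df=0.1, alpha=1e-3):
    """Estimate band amplitudes on one analysis window via sparse linear regression."""
    G, freqs = bmflc_dictionary(t, f_lo, f_hi, df)
    w = Lasso(alpha=alpha, max_iter=20000).fit(G, y).coef_
    a, b = w[:freqs.size], w[freqs.size:]
    return freqs, np.hypot(a, b)   # amplitude per frequency bin for this window
```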

  10. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
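
    The pipeline in this record (learn an overcomplete patch dictionary, sparse-code local patches, max-pool the codes for translation tolerance, then cluster or classify) can be sketched with off-the-shelf tools as below. This is a rough single-band stand-in, not the patented method: for multispectral fusion one would stack the bands of each patch into the feature vector, and the pooling here is over small groups of sampled patches rather than true spatial neighborhoods.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.cluster import KMeans

def sparse_feature_clustering(image, n_atoms=64, patch=8, n_clusters=5):
    """Learn an overcomplete patch dictionary, sparse-code the patches, crudely max-pool
    the codes, and cluster the pooled features without supervision."""
    patches = extract_patches_2d(image, (patch, patch), max_patches=2000, random_state=0)
    P = patches.reshape(len(patches), -1).astype(float)
    P -= P.mean(axis=1, keepdims=True)                       # remove local mean
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0).fit(P)
    codes = np.abs(dico.transform(P))                        # local sparse representation
    pooled = codes[: len(codes) // 4 * 4].reshape(-1, 4, n_atoms).max(axis=1)  # crude pooling
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pooled)
```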

  11. Asteroids' physical models from combined dense and sparse photometry and scaling of the YORP effect by the observed obliquity distribution

    NASA Astrophysics Data System (ADS)

    Hanuš, J.; Ďurech, J.; Brož, M.; Marciniak, A.; Warner, B. D.; Pilcher, F.; Stephens, R.; Behrend, R.; Carry, B.; Čapek, D.; Antonini, P.; Audejean, M.; Augustesen, K.; Barbotin, E.; Baudouin, P.; Bayol, A.; Bernasconi, L.; Borczyk, W.; Bosch, J.-G.; Brochard, E.; Brunetto, L.; Casulli, S.; Cazenave, A.; Charbonnel, S.; Christophe, B.; Colas, F.; Coloma, J.; Conjat, M.; Cooney, W.; Correira, H.; Cotrez, V.; Coupier, A.; Crippa, R.; Cristofanelli, M.; Dalmas, Ch.; Danavaro, C.; Demeautis, C.; Droege, T.; Durkee, R.; Esseiva, N.; Esteban, M.; Fagas, M.; Farroni, G.; Fauvaud, M.; Fauvaud, S.; Del Freo, F.; Garcia, L.; Geier, S.; Godon, C.; Grangeon, K.; Hamanowa, H.; Hamanowa, H.; Heck, N.; Hellmich, S.; Higgins, D.; Hirsch, R.; Husarik, M.; Itkonen, T.; Jade, O.; Kamiński, K.; Kankiewicz, P.; Klotz, A.; Koff, R. A.; Kryszczyńska, A.; Kwiatkowski, T.; Laffont, A.; Leroy, A.; Lecacheux, J.; Leonie, Y.; Leyrat, C.; Manzini, F.; Martin, A.; Masi, G.; Matter, D.; Michałowski, J.; Michałowski, M. J.; Michałowski, T.; Michelet, J.; Michelsen, R.; Morelle, E.; Mottola, S.; Naves, R.; Nomen, J.; Oey, J.; Ogłoza, W.; Oksanen, A.; Oszkiewicz, D.; Pääkkönen, P.; Paiella, M.; Pallares, H.; Paulo, J.; Pavic, M.; Payet, B.; Polińska, M.; Polishook, D.; Poncy, R.; Revaz, Y.; Rinner, C.; Rocca, M.; Roche, A.; Romeuf, D.; Roy, R.; Saguin, H.; Salom, P. A.; Sanchez, S.; Santacana, G.; Santana-Ros, T.; Sareyan, J.-P.; Sobkowiak, K.; Sposetti, S.; Starkey, D.; Stoss, R.; Strajnic, J.; Teng, J.-P.; Trégon, B.; Vagnozzi, A.; Velichko, F. P.; Waelchli, N.; Wagrez, K.; Wücher, H.

    2013-03-01

    Context. The larger number of models of asteroid shapes and their rotational states derived by lightcurve inversion gives us better insight into both the nature of individual objects and the whole asteroid population. With a larger statistical sample we can study the physical properties of asteroid populations, such as main-belt asteroids or individual asteroid families, in more detail. Shape models can also be used in combination with other types of observational data (IR, adaptive optics images, stellar occultations), e.g., to determine sizes and thermal properties. Aims: We use all available photometric data of asteroids to derive their physical models by the lightcurve inversion method and compare the observed pole latitude distributions of all asteroids with known convex shape models with simulated pole latitude distributions. Methods: We used classical dense photometric lightcurves from several sources (Uppsala Asteroid Photometric Catalogue, Palomar Transient Factory survey, and individual observers) and sparse-in-time photometry from the U.S. Naval Observatory in Flagstaff, the Catalina Sky Survey, and the La Palma surveys (IAU codes 689, 703, 950) in the lightcurve inversion method to determine asteroid convex models and their rotational states. We also extended a simple dynamical model for the spin evolution of asteroids used in our previous paper. Results: We present 119 new asteroid models derived from combined dense and sparse-in-time photometry. We discuss the reliability of asteroid shape models derived only from Catalina Sky Survey data (IAU code 703) and present 20 such models. By using different values for the scaling parameter cYORP (which corresponds to the magnitude of the YORP momentum) in the dynamical model for the spin evolution, and by comparing synthetic and observed pole-latitude distributions, we were able to constrain the typical values of the cYORP parameter to between 0.05 and 0.6. Table 3 is available in electronic form at http://www.aanda.org

  12. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion using a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The resulting superior fused images demonstrate the efficacy of the NL_SK_SVD dictionary for sparse image representation.

  13. Evaluating coastal sea surface heights based on a novel sub-waveform approach using sparse representation and conditional random fields

    NASA Astrophysics Data System (ADS)

    Uebbing, Bernd; Roscher, Ribana; Kusche, Jürgen

    2016-04-01

    Satellite radar altimeters allow global monitoring of mean sea level changes over the last two decades. However, coastal regions are less well observed due to influences on the returned signal energy from land located inside the altimeter footprint. The altimeter emits a radar pulse that is reflected at the nadir surface, and it measures the two-way travel time as well as the returned energy as a function of time, resulting in a return waveform. Over the open ocean the waveform shape corresponds to a theoretical model which can be used to infer information on range corrections, significant wave height or wind speed. However, in coastal areas the shape of the waveform is significantly influenced by return signals from land located in the altimeter footprint, leading to peaks which tend to bias the estimated parameters. Recently, several approaches dealing with this problem have been published, including utilizing only parts of the waveform (sub-waveforms), estimating the parameters in two steps, or estimating additional peak parameters. We present a new approach to estimating sub-waveforms using conditional random fields (CRF) based on spatio-temporal waveform information. The CRF piece-wise approximates the measured waveforms based on a pre-derived dictionary of theoretical waveforms for various combinations of the geophysical parameters; neighboring range gates are likely to be assigned to the same underlying sub-waveform model. Depending on the choice of hyperparameters in the CRF estimation, the classification into sub-waveforms can be either finer or coarser, resulting in multiple sub-waveform hypotheses. After the sub-waveforms have been detected, existing retracking algorithms can be applied to derive water heights or other desired geophysical parameters from particular sub-waveforms. To identify the optimal heights from the multiple hypotheses, instead of utilizing a known reference height, we apply a Dijkstra-algorithm to find the "shortest path" of all

  14. Motor memory is encoded as a gain-field combination of intrinsic and extrinsic action representations.

    PubMed

    Brayanov, Jordan B; Press, Daniel Z; Smith, Maurice A

    2012-10-24

    Actions can be planned in either an intrinsic (body-based) reference frame or an extrinsic (world-based) frame, and understanding how the internal representations associated with these frames contribute to the learning of motor actions is a key issue in motor control. We studied the internal representation of this learning in human subjects by analyzing generalization patterns across an array of different movement directions and workspaces after training a visuomotor rotation in a single movement direction in one workspace. This provided a dense sampling of the generalization function across intrinsic and extrinsic reference frames, which allowed us to dissociate intrinsic and extrinsic representations and determine the manner in which they contributed to the motor memory for a trained action. A first experiment showed that the generalization pattern reflected a memory that was intermediate between intrinsic and extrinsic representations. A second experiment showed that this intermediate representation could not arise from separate intrinsic and extrinsic learning. Instead, we find that the representation of learning is based on a gain-field combination of local representations in intrinsic and extrinsic coordinates. This gain-field representation generalizes between actions by effectively computing similarity based on the (Mahalanobis) distance across intrinsic and extrinsic coordinates and is in line with neural recordings showing mixed intrinsic-extrinsic representations in motor and parietal cortices.

  15. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is therefore very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel and effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP is better at detecting the true sparsity than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD procedure is incorporated for dictionary learning, and Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  16. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-01-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is therefore very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel and effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP is better at detecting the true sparsity than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD procedure is incorporated for dictionary learning, and Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632
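
    The core patch-recovery step of such coupled-dictionary SR schemes can be sketched as below, with plain OMP standing in for the OOMP/Batch-OMP solvers and with K-SVD-trained LR/HR dictionaries assumed to be given; all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def super_resolve_patch(lr_patch, D_lr, D_hr, n_nonzero=3):
    """Sparse-code the LR patch over the LR dictionary, then reuse the same sparse
    code with the coupled HR dictionary to synthesize the HR patch."""
    alpha = orthogonal_mp(D_lr, lr_patch, n_nonzero_coefs=n_nonzero)
    return D_hr @ alpha
```

    In a full pipeline, overlapping HR patches recovered this way would be averaged back into the output frame.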

  17. Structured sparse models for classification

    NASA Astrophysics Data System (ADS)

    Castrodad, Alexey

    The main focus of this thesis is the modeling and classification of high dimensional data using structured sparsity. Sparse models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. The success of sparse modeling is largely due to its ability to efficiently use the redundancy of the data and find its underlying structure. In a classification setting, we capitalize on this advantage to properly model and separate the structure of the classes. We design and validate modeling solutions to challenging problems arising in computer vision and remote sensing. We propose both supervised and unsupervised schemes for the modeling of human actions from motion imagery under a wide variety of acquisition conditions. In the supervised case, the main goal is to classify the human actions in the video given a predefined set of actions to learn from. In the unsupervised case, the main goal is to analyze the spatio-temporal dynamics of the individuals in the scene without having any prior information on the actions themselves. We also propose a model for remotely sensed hyperspectral imagery, where the main goal is to perform automatic spectral source separation and mapping at the subpixel level. Finally, we present a sparse model for sensor fusion to exploit the common structure and enforce collaboration of hyperspectral and LiDAR data for better mapping capabilities. In all these scenarios, we demonstrate that the data can be expressed as a combination of atoms from a class-structured dictionary. This data representation essentially becomes a "mixture of classes," and by directly exploiting the sparse codes, one can attain highly accurate classification performance with relatively unsophisticated classifiers.

  18. Variable Selection for Sparse High-Dimensional Nonlinear Regression Models by Combining Nonnegative Garrote and Sure Independence Screening

    PubMed Central

    Xue, Hongqi; Wu, Yichao; Wu, Hulin

    2013-01-01

    In many regression problems, the relations between the covariates and the response may be nonlinear. Motivated by the application of reconstructing a gene regulatory network, we consider a sparse high-dimensional additive model with the additive components being some known nonlinear functions with unknown parameters. To identify the subset of important covariates, we propose a new method for simultaneous variable selection and parameter estimation by iteratively combining a large-scale variable screening (the nonlinear independence screening, NLIS) and a moderate-scale model selection (the nonnegative garrote, NNG) for the nonlinear additive regressions. We have shown that the NLIS procedure possesses the sure screening property and it is able to handle problems with non-polynomial dimensionality; and for finite dimension problems, the NNG for the nonlinear additive regressions has selection consistency for the unimportant covariates and also estimation consistency for the parameter estimates of the important covariates. The proposed method is applied to simulated data and a real data example for identifying gene regulations to illustrate its numerical performance. PMID:25170239

  19. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
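
    To make the frequency-domain trick concrete, the sketch below implements (for 1-D signals, with assumed variable names) the coefficient-update step of such an ADMM iteration: the per-frequency normal equations have a rank-one-plus-identity structure, so the Sherman-Morrison identity solves them in O(MN) after the FFTs, which is where the reduction from O(M^3 N) to O(M N log N) comes from.

```python
import numpy as np

def csc_coefficient_update(Dhat, shat, zhat, uhat, rho):
    """Frequency-domain x-update of an ADMM iteration for 1-D convolutional sparse coding:
    solve (Dhat^H Dhat + rho*I) xhat = Dhat^H shat + rho*(zhat - uhat) independently at
    each frequency via the Sherman-Morrison identity (the system is rank-one + identity).
    Dhat: (M, N) FFTs of the M zero-padded dictionary filters,
    shat: (N,) FFT of the signal, zhat/uhat: (M, N) FFTs of the ADMM auxiliary/dual variables."""
    a = np.conj(Dhat)                                      # per-frequency rank-one factor
    b = a * shat[None, :] + rho * (zhat - uhat)            # right-hand side
    a_dot_b = np.sum(Dhat * b, axis=0)                     # a^H b at each frequency
    denom = rho * (rho + np.sum(np.abs(Dhat) ** 2, axis=0))
    return b / rho - a * (a_dot_b / denom)[None, :]        # inverse FFT gives the coefficient maps

# usage sketch (hypothetical arrays): D zero-padded to signal length N along axis 1
#   xhat = csc_coefficient_update(np.fft.fft(D, n=N, axis=1), np.fft.fft(s),
#                                 np.fft.fft(z, axis=1), np.fft.fft(u, axis=1), rho=1.0)
```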

  20. Multiresolution image representation using combined 2-D and 1-D directional filter banks.

    PubMed

    Tanaka, Yuichi; Ikehara, Masaaki; Nguyen, Truong Q

    2009-02-01

    In this paper, effective multiresolution image representations using a combination of 2-D filter bank (FB) and directional wavelet transform (WT) are presented. The proposed methods yield simple implementation and low computation costs compared to previous 1-D and 2-D FB combinations or adaptive directional WT methods. Furthermore, they are nonredundant transforms and realize quad-tree like multiresolution representations. In applications on nonlinear approximation, image coding, and denoising, the proposed filter banks show visual quality improvements and have higher PSNR than the conventional separable WT or the contourlet.

  1. Sparse coding with memristor networks.

    PubMed

    Sheridan, Patrick M; Cai, Fuxi; Du, Chao; Ma, Wen; Zhang, Zhengya; Lu, Wei D

    2017-08-01

    Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.
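
    The crossbar performs the matrix-vector products needed for pattern matching and lateral inhibition in hardware; the sketch below is a plain software analogue built on locally-competitive-algorithm-style dynamics. The algorithmic form, parameters and data are assumptions for illustration, not a description of the memristor device or of the paper's exact implementation.

```python
import numpy as np

def lca_sparse_code(s, D, lam=0.1, tau=10.0, steps=300):
    """Sparse coding by leaky integration with lateral inhibition (LCA-style).

    s : (n,) input pattern; D : (n, m) dictionary with unit-norm columns.
    """
    b = D.T @ s                                   # pattern matching: feed-forward drive
    G = D.T @ D - np.eye(D.shape[1])              # lateral inhibition between neurons
    u = np.zeros(D.shape[1])                      # internal (membrane-like) state
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # thresholded neuron activity
        u += (b - u - G @ a) / tau
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Usage: encode a signal that mixes two dictionary atoms.
rng = np.random.default_rng(0)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)
s = 1.5 * D[:, 3] - 0.8 * D[:, 40]
a = lca_sparse_code(s, D)
print("active neurons:", np.flatnonzero(np.abs(a) > 1e-3))
```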

  2. Nonnegative local coordinate factorization for image representation.

    PubMed

    Chen, Yan; Zhang, Jiemi; Cai, Deng; Liu, Wei; He, Xiaofei

    2013-03-01

    Recently, nonnegative matrix factorization (NMF) has become increasingly popular for feature extraction in computer vision and pattern recognition. NMF seeks two nonnegative matrices whose product can best approximate the original matrix. The nonnegativity constraints lead to sparse parts-based representations that can be more robust than nonsparse global features. To obtain more accurate control over the sparseness, in this paper, we propose a novel method called nonnegative local coordinate factorization (NLCF) for feature extraction. NLCF adds a local coordinate constraint into the standard NMF objective function. Specifically, we require that the learned basis vectors be as close to the original data points as possible. In this way, each data point can be represented by a linear combination of only a few nearby basis vectors, which naturally leads to sparse representation. Extensive experimental results suggest that the proposed approach provides a better representation and achieves higher accuracy in image clustering.
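
    For reference, the sketch below implements only the plain multiplicative-update NMF that NLCF builds on; the local coordinate penalty described in the abstract is deliberately omitted, so this is the standard baseline shown to fix notation, with random data and an arbitrary rank.

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Standard multiplicative-update NMF (Frobenius loss): V ≈ W @ H with W, H >= 0.
    NLCF would add a local coordinate term tying basis vectors to nearby data points;
    that term is not included in this baseline."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factor a small nonnegative matrix and report the relative error.
V = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
W, H = nmf(V, r=8)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```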

  3. Sparse Methods for Biomedical Data.

    PubMed

    Ye, Jieping; Liu, Jun

    2012-06-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data.

  4. Sparse Methods for Biomedical Data

    PubMed Central

    Ye, Jieping; Liu, Jun

    2013-01-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data. PMID:24076585
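
    A minimal example of the kind of ℓ1-penalized method reviewed above, applied to biomarker selection on a synthetic expression matrix in which only a handful of "genes" carry signal. The data, penalty and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_samples, n_genes = 80, 5000                # far more genes than samples

X = rng.normal(size=(n_samples, n_genes))    # synthetic expression matrix
true_markers = [10, 500, 4200]               # only a few genes are relevant
y = X[:, true_markers] @ np.array([1.5, -2.0, 1.0]) + 0.5 * rng.normal(size=n_samples)

# Cross-validated lasso: the l1 penalty drives most coefficients exactly to zero.
model = LassoCV(cv=5).fit(X, y)
print("selected genes:", np.flatnonzero(np.abs(model.coef_) > 1e-8))
```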

  5. Combining rainfall data from rain gauges and TRMM in hydrological modelling of Laotian data-sparse basins

    NASA Astrophysics Data System (ADS)

    Liu, Xing; Liu, Fa Ming; Wang, Xiao Xia; Li, Xiao Dong; Fan, Yu Yan; Cai, Shi Xiang; Ao, Tian Qi

    2017-06-01

    At present, prediction of streamflow in data-sparse basins of South East Asia is a challenging task due to the absence of reliable ground-based rainfall information, while satellite-based rainfall estimates are immensely useful to improve our understanding of the spatio-temporal variation of rainfall, particularly for data-sparse basins. In this study the TRMM 3B42 V7 data and their bias-corrected version were, respectively, used to drive a physically based distributed hydrological model, BTOPMC, to perform daily streamflow simulations in the Nam Khan River and Nam Like River basins during the years 2000 to 2004, so as to investigate the potential use of TRMM data to complement rain gauge data in hydrological modelling of data-sparse basins. The results show that, although larger differences exist between the high-streamflow and low-streamflow processes, the daily simulations fed with TRMM precipitation data could basically reflect the daily streamflow processes at the four stations and determine the time to peak. Furthermore, the calibrated parameters in the Nam Khan River basin are more suitable than those in the Nam Like River basin. Comparison of the two precipitation datasets indicates that the integration of TRMM precipitation data and rain gauge data has promising prospects for hydrological process simulation in data-sparse basins.
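
    The abstract does not state which goodness-of-fit statistic was used; the Nash-Sutcliffe efficiency computed below is simply a common choice for scoring daily streamflow simulations against gauge observations, with hypothetical discharge values used as input.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily discharge (m^3/s) at one gauging station.
q_obs = np.array([120.0, 135.0, 180.0, 260.0, 240.0, 190.0, 150.0])
q_sim = np.array([110.0, 140.0, 170.0, 230.0, 250.0, 200.0, 160.0])
print("NSE of the TRMM-driven simulation:", round(nash_sutcliffe(q_obs, q_sim), 3))
```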

  6. Building Hierarchical Representations for Oracle Character and Sketch Recognition.

    PubMed

    Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui

    2016-01-01

    In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems encountered with these representations and derive several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performance than either approach alone. This solution has beaten humans at recognizing general sketches.

  7. Sparse Texture Active Contour

    PubMed Central

    Gao, Yi; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2014-01-01

    In image segmentation, we are often interested in using certain quantities to characterize the object, and perform the classification based on them: mean intensity, gradient magnitude, responses to certain predefined filters, etc. Unfortunately, in many cases such quantities are not adequate to model complex textured objects. Along a different line of research, the sparse characteristic of natural signals has been recognized and studied in recent years. Therefore, how such sparsity can be utilized, in a non-parametric way, to model the object texture and assist the textural image segmentation process is studied in this work, and a segmentation scheme based on the sparse representation of the texture information is proposed. More explicitly, the texture is encoded by the dictionaries constructed from the user initialization. Then, an active contour is evolved to optimize the fidelity of the representation provided by the dictionary of the target. In doing so, not only a non-parametric texture modeling technique is provided, but also the sparsity of the representation guarantees the computation efficiency. The experiments are carried out on the publicly available image data sets which contain a large variety of texture images, to analyze the user interaction, performance statistics, and to highlight the algorithm’s capability of robustly extracting textured regions from an image. PMID:23799695

  8. Combining Generative and Discriminative Representation Learning for Lung CT Analysis With Convolutional Restricted Boltzmann Machines.

    PubMed

    van Tulder, Gijs; de Bruijne, Marleen

    2016-05-01

    The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data. Like many other representation learning methods, restricted Boltzmann machines are unsupervised and are trained with a generative learning objective; this allows them to learn representations from unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both for describing the training data and for classification. We present experiments with feature learning for lung texture classification and airway detection in CT images. In both applications, a combination of learning objectives outperformed purely discriminative or generative learning, increasing, for instance, the lung tissue classification accuracy by 1 to 8 percentage points. This shows that discriminative learning can help an otherwise unsupervised feature learner to learn filters that are optimized for classification.

  9. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach

    PubMed Central

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biological-plausible model to explain plasticity in PPS representation after tool-use, which is

  10. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach.

    PubMed

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of a tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. Such prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments both in simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biological-plausible model to explain plasticity in PPS representation after tool-use, which is

  11. Sparse recovery via convex optimization

    NASA Astrophysics Data System (ADS)

    Randall, Paige Alicia

    This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from MRI imaging to genomics to compressed sensing. We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message. We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of √s, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence where the coherence can only be as large as O(1/√m) to allow sparsities as large as O(√m). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition. We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property. Finally, we introduce the method of ℓ1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfies a uniform uncertainty principle and we compare our results with the more standard ℓ1 synthesis approach. All our methods involve solving an ℓ1 minimization
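
    The problems above all reduce to an ℓ1 minimization; the sketch below solves the lasso form min_x 0.5||Ax − y||² + λ||x||₁ with plain iterative soft-thresholding (ISTA). It is one standard solver for this class of problems, not the thesis' own algorithm, and the measurement sizes and noise level are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.02, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)   # shrinkage step
    return x

# Usage: recover a sparse x from a few noisy linear measurements y = A x + z.
rng = np.random.default_rng(0)
m, n, k = 60, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = ista(A, y)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```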

  12. Online Dictionary Learning for Sparse Coding

    DTIC Science & Technology

    2009-04-01

    Indexed excerpt (text fragments only): sparse coding has been applied to processing tasks such as denoising (Elad & Aharon, 2006) as well as higher-level tasks such as classification (Raina et al., 2007; Mairal et al., 2008a). Recoverable references include Aharon, M., Elad, M., & Bruckstein, A. M. (2006), "The K-SVD: an algorithm for designing of overcomplete dictionaries for sparse representations," IEEE Trans. SP; Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004), "Least angle regression," Ann. Statist.; and Elad, M., & Aharon, M. (2006), "Image denoising via sparse and redundant representations."

  13. How environment and self-motion combine in neural representations of space.

    PubMed

    Evans, Talfan; Bicanski, Andrej; Bush, Daniel; Burgess, Neil

    2016-11-15

    Estimates of location or orientation can be constructed solely from sensory information representing environmental cues. In unfamiliar or sensory-poor environments, these estimates can also be maintained and updated by integrating self-motion information. However, the accumulation of error dictates that updated representations of heading direction and location become progressively less reliable over time, and must be corrected by environmental sensory inputs when available. Anatomical, electrophysiological and behavioural evidence indicates that angular and translational path integration contributes to the firing of head direction cells and grid cells. We discuss how sensory inputs may be combined with self-motion information in the firing patterns of these cells. For head direction cells, direct projections from egocentric sensory representations of distal cues can help to correct cumulative errors. Grid cells may benefit from sensory inputs via boundary vector cells and place cells. However, the allocentric code of boundary vector cells and place cells requires consistent head-direction information in order to translate the sensory signal of egocentric boundary distance into allocentric boundary vector cell firing, suggesting that the different spatial representations found in and around the hippocampal formation are interdependent. We conclude that, rather than representing pure path integration, the firing of head-direction cells and grid cells reflects the interface between self-motion and environmental sensory information. Together with place cells and boundary vector cells they can support a coherent unitary representation of space based on both environmental sensory inputs and path integration signals. © 2015 The Authors. The Journal of Physiology © 2015 The Physiological Society.

  14. A combined representation method for use in band structure calculations. 1: Method

    NASA Technical Reports Server (NTRS)

    Friedli, C.; Ashcroft, N. W.

    1975-01-01

    A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.

  15. Sparse-view image reconstruction via total absolute curvature combining total variation for X-ray computed tomography.

    PubMed

    Zheng, Zhizhong; Cai, Ailong; Li, Lei; Yan, Bin; Le, Fulong; Wang, Linyuan; Hu, Guoen

    2017-07-07

    Sparse-view imaging is a promising scanning approach which has a fast scanning rate and low radiation dose in X-ray computed tomography (CT). The conventional L1-norm based total variation (TV) has been widely used in image reconstruction since the advent of compressive sensing theory. However, since it uses only first-order information of the image, TV often generates unsatisfactory images for some applications. As is widely known, image curvature is among the most important second-order features of images and can potentially be applied in image reconstruction for quality improvement. This study incorporates the curvature in the optimization model and proposes a new total absolute curvature (TAC) based reconstruction method. The proposed model contains both total absolute curvature and total variation (TAC-TV), which are intended to better describe images with complicated features. As for the practical algorithm development, the efficient alternating direction method of multipliers (ADMM) is utilized, which yields a practical and easily coded algorithm. The TAC-TV iterations mainly contain FFTs, soft-thresholding and projection operations and can be launched on a graphics processing unit, which leads to relatively high performance. To evaluate the presented algorithm, both qualitative and quantitative studies were performed using various few-view datasets. The results illustrate that the proposed approach yields better reconstruction quality and satisfactory convergence properties compared with TV-based methods.
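
    To make the two regularizers concrete, the sketch below only evaluates the objective terms that TAC-TV combines on a test image: isotropic total variation and a finite-difference total absolute curvature based on the usual level-set curvature formula. The discretization is one common choice and may differ from the paper's exact definition; the full ADMM reconstruction is not reproduced.

```python
import numpy as np

def total_variation(u):
    """Isotropic TV: sum of gradient magnitudes (forward differences)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))

def total_absolute_curvature(u, eps=1e-8):
    """Sum of |kappa| with kappa the level-set curvature, by central differences."""
    uy, ux = np.gradient(u)
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    kappa = (uxx * uy ** 2 - 2.0 * ux * uy * uxy + uyy * ux ** 2) / \
            (ux ** 2 + uy ** 2 + eps) ** 1.5
    return np.sum(np.abs(kappa))

# Evaluate both terms on a simple disc phantom.
yy, xx = np.mgrid[0:128, 0:128]
img = (np.hypot(xx - 64, yy - 64) < 30).astype(float)
print("TV :", total_variation(img))
print("TAC:", total_absolute_curvature(img))
```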

  16. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol

    PubMed Central

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    To realize low-dose imaging in X-ray computed tomography (CT) examination, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we are focusing on low-dose CT image reconstruction from the sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data, and then to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy was termed as “ASR-TV-POCS.” To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of the noise reduction, contrast-to-noise ratio, and edge detail preservation. PMID:24977611

  17. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that it is difficult to represent small spatial details and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the full information content of all these measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles, strategies and concepts for the generation of MRPs are shown. Examples of regional gravity field determination are presented.
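
    A toy version of the combination idea, assuming the PyWavelets package: two signals standing in for different measurement types are decomposed with a pyramidal wavelet transform, the coarse detail levels are taken from the smooth "satellite-like" signal and the fine levels from the dense "terrestrial-like" signal, and the combined coefficients are synthesized back. The geodetic content of a real MRP is of course far richer.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
truth = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

# Two "measurement types": long-wavelength only vs. detailed but noisier.
satellite_like = np.sin(2 * np.pi * 3 * t) + 0.02 * rng.normal(size=t.size)
terrestrial_like = truth + 0.05 * rng.normal(size=t.size)

# Pyramidal decomposition of both signals into detail levels.
level = 6
c_sat = pywt.wavedec(satellite_like, "db4", level=level)
c_ter = pywt.wavedec(terrestrial_like, "db4", level=level)

# Combine: approximation + coarse details from the satellite-like data,
# fine details from the terrestrial-like data, then reconstruct.
combined = c_sat[:3] + c_ter[3:]
recon = pywt.waverec(combined, "db4")[: t.size]
print("rms error of the combined model:", np.sqrt(np.mean((recon - truth) ** 2)))
```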

  18. Effects of damage location and size on sparse representation of guided-waves for damage diagnosis of pipelines under varying temperature

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2015-04-01

    In spite of their many advantages, real-world application of guided waves for structural health monitoring (SHM) of pipelines is still quite limited. The challenges can be discussed under three headings: (1) multiple modes, (2) multipath reflections, and (3) sensitivity to environmental and operational conditions (EOCs). These challenges are reviewed in the authors' previous work. This paper is part of a study whose objective is to overcome these challenges for damage diagnosis of pipes while addressing the limitations of current approaches, that is, to develop methods that simplify the signal while retaining damage information, perform well as EOCs vary, and minimize the use of transducers. In this paper, a supervised method is proposed to extract a sparse subset of the ultrasonic guided-wave signals that contains optimal damage information for detection purposes. That is, a discriminant vector is calculated so that the projections of undamaged and damaged pipes on this vector are separated. In the training stage, data are recorded from an intact pipe, and from a pipe with an artificial structural abnormality (to simulate any variation from the intact condition). During the monitoring stage, test signals are projected on the discriminant vector, and these projections are used as damage-sensitive features for detection purposes. Being a supervised method, factors such as EOC variations, and differences in the characteristics of the structural abnormality in training and test data, may affect the detection performance. This paper reports the experiments investigating the extent to which differences in damage size and damage location, as well as temperature, can influence the discriminatory power of the extracted damage-sensitive features. The results suggest that, for practical ranges of monitoring and damage sizes of interest, the proposed method has low sensitivity to such training factors. High detection performance is obtained for temperature differences up to 14
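
    The abstract describes learning a discriminant vector from intact and artificially damaged training signals and projecting test signals onto it. The sketch below uses a regularized Fisher (LDA-style) discriminant as one plausible realization, with synthetic feature vectors standing in for processed guided-wave records; the actual feature extraction and discriminant of the paper may differ.

```python
import numpy as np

def fisher_discriminant(X_intact, X_damaged, reg=1e-6):
    """Direction w that separates the projections of the two training classes."""
    mu0, mu1 = X_intact.mean(axis=0), X_damaged.mean(axis=0)
    Sw = np.cov(X_intact, rowvar=False) + np.cov(X_damaged, rowvar=False)
    w = np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), mu1 - mu0)
    return w / np.linalg.norm(w)

# Training stage: synthetic "guided-wave features" for intact vs. abnormal pipe.
rng = np.random.default_rng(0)
X_intact = rng.normal(size=(60, 20))
X_damaged = rng.normal(size=(60, 20)) + 0.8          # mean shift = damage signature

w = fisher_discriminant(X_intact, X_damaged)
threshold = 0.5 * ((X_intact @ w).mean() + (X_damaged @ w).mean())

# Monitoring stage: project a test signal and compare against the threshold.
x_test = rng.normal(size=20) + 0.8
print("damage detected:", float(x_test @ w) > threshold)
```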

  19. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  20. Faster learning algorithm convergence utilizing a combined time-frequency representation as basis

    NASA Astrophysics Data System (ADS)

    Hendriks, A. J.; Uys, Hermann; du Plessis, Anton; Steenkamp, Christine

    2013-10-01

    Light is capable of directly manipulating and probing molecular dynamics at its most fundamental level. One versatile approach to influencing such dynamics exploits temporally shaped femtosecond laser pulses. Oftentimes the control mechanisms necessary to induce a desired reaction cannot be determined theoretically a priori. However, under certain circumstances these mechanisms can be extracted experimentally through trial and error. This can be implemented systematically by using an evolutionary learning algorithm (LA) with closed-loop feedback. Most frequently, pulse shaping algorithms operate within either the time or the frequency domain, but seldom both. This may influence the physical insight gained, due to dependence on the search basis, as well as the speed at which the algorithm converges. As an alternative to the Fourier-domain basis, we make use of a combined time-frequency representation known as the von Neumann basis, in which temporal and spectral effects are observed at the same time. We report on the numerical and experimental results obtained using the Fourier basis as well as the von Neumann basis to maximize the second harmonic generation (SHG) output in a non-linear crystal. We show that searches in the von Neumann representation converge faster than searches in the Fourier domain. We also show that a reduced parameter space is required for the Fourier domain to converge efficiently, but not for the von Neumann domain. Finally, we show that the highest SHG signal is not only a consequence of the shortest pulse, but that the pulse central frequency also plays a key role. Taken together these results suggest that the von Neumann basis can be used as a viable alternative to the Fourier domain, with improved convergence time and potentially deeper physical insight.

  1. Gravitational microlensing - Powerful combination of ray-shooting and parametric representation of caustics

    NASA Technical Reports Server (NTRS)

    Wambsganss, J.; Witt, H. J.; Schneider, P.

    1992-01-01

    We present a combination of two very different methods for numerically calculating the effects of gravitational microlensing: backward ray-tracing, which results in two-dimensional magnification patterns, and the parametric representation of caustic lines; the two are in a way complementary to each other. The combination of these methods is much more powerful than the sum of its parts. It allows one to determine the total magnification and the number of microimages as a function of source position. The mean number of microimages is calculated analytically and compared to the numerical results. The peaks in the lightcurves, as obtained from one-dimensional tracks through the magnification pattern, can now be divided into two groups: those which correspond to a source crossing a caustic, and those which are due to sources passing outside cusps. We determine the frequencies of these two types of events as a function of the surface mass density, and the probability distributions of their magnitudes. We find that for low surface mass density as many as 40 percent of all events in a lightcurve are not due to caustic crossings, but rather due to passings outside cusps.

  2. Local sparse component analysis for blind source separation: an application to resting state FMRI.

    PubMed

    Vieira, Gilson; Amaro, Edson; Baccala, Luiz A

    2014-01-01

    We propose a new Blind Source Separation technique for whole-brain activity estimation that best profits from FMRI's intrinsic spatial sparsity. The Local Sparse Component Analysis (LSCA) combines wavelet analysis, group-separable regularizers, contiguity-constrained clusterization and principal components analysis (PCA) into a unique spatial sparse representation of FMRI images towards efficient dimensionality reduction without sacrificing physiological characteristics by avoiding artificial stochastic model constraints. The LSCA outperforms classical PCA source reconstruction for artificial data sets over many noise levels. A real FMRI data illustration reveals resting-state activities in regions hard to observe, such as thalamus and basal ganglia, because of their small spatial scale.

  3. Emergence of representations through repeated training on pronouncing novel letter combinations leads to efficient reading.

    PubMed

    Takashima, Atsuko; Hulzink, Iris; Wagensveld, Barbara; Verhoeven, Ludo

    2016-08-01

    Printed text can be decoded by utilizing different processing routes depending on the familiarity of the script. A predominant use of word-level decoding strategies can be expected in the case of a familiar script, and an almost exclusive use of letter-level decoding strategies for unfamiliar scripts. Behavioural studies have revealed that frequently occurring words are read more efficiently, suggesting that these words are read in a more holistic way at the word-level, than infrequent and unfamiliar words. To test whether repeated exposure to specific letter combinations leads to holistic reading, we monitored both behavioural and neural responses during novel script decoding and examined changes related to repeated exposure. We trained a group of Dutch university students to decode pseudowords written in an unfamiliar script, i.e., Korean Hangul characters. We compared behavioural and neural responses to pronouncing trained versus untrained two-character pseudowords (equivalent to two-syllable pseudowords). We tested once shortly after the initial training and again after a four days' delay that included another training session. We found that trained pseudowords were pronounced faster and more accurately than novel combinations of radicals (equivalent to letters). Imaging data revealed that pronunciation of trained pseudowords engaged the posterior temporo-parietal region, and engagement of this network was predictive of reading efficiency a month later. The results imply that repeated exposure to specific combinations of graphemes can lead to emergence of holistic representations that result in efficient reading. Furthermore, inter-individual differences revealed that good learners retained efficiency more than bad learners one month later. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  5. Inversion of magnetotelluric data in a sparse model domain

    NASA Astrophysics Data System (ADS)

    Nittinger, Christian G.; Becken, Michael

    2016-08-01

    The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least-squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multiresolution wavelet basis, but does not impose explicit structural penalties on the model as it is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the nonlinear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.

  6. Combining precipitation data from observed and numerical models to forecast precipitation characteristics in sparsely-gauged watersheds: an application to the Amazon River basin.

    NASA Astrophysics Data System (ADS)

    Dwelle, M. C.; Ivanov, V. Y.; Berrocal, V.

    2014-12-01

    Forecasting rainfall in areas with sparse monitoring efforts is critical to making inferences about the health of ecosystems and built environments. Recent advances in scientific computing have allowed forecasting and climate models to increase their spatial and temporal resolution. Combined with observed point precipitation from monitoring stations, these models can be used to inform dynamic spatial statistical models for precipitation using methods from geostatistics and machine learning. To demonstrate the feasibility, process, and capabilities of such statistical models, we present a case study of two statistical models of precipitation for the Amazon River basin from 2003-2010 that can infer a spatial process at a point using areal data from numerical model output. We investigate the seasonality and accumulation of rainfall, and the occurrence of no-rainfall and large-rainfall events. These parameters are used since they provide valuable information on possible model biases when using climate models for forecasts of the future process of precipitation in the Amazon basin. This information can be vital for ecosystem, agriculture, and water-resource management. We use observed precipitation data from weather stations, three areal datasets derived from observed precipitation (CFSR, CMORPH-CRT, GPCC) and three climate model precipitation datasets from CMIP5 (MIROC4h, HadGEM2-CC, and GISS-E2H) to construct the models. The observational data in the model domain are sparse, with 195 stations in the approximately 7×10^6 square kilometers of the Amazon basin, and therefore the areal data are required to create a more robust model. The first model uses the method of Bayesian melding to combine and make inferences from the included data sets, and the second uses a regression model with spatially and temporally varying coefficients. The models of precipitation are fitted using the areal products and a subset of the point data, while another subset of point data is held out for

  7. DNA binding protein identification by combining pseudo amino acid composition and profile-based protein representation

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Wang, Shanyi; Wang, Xiaolong

    2015-10-01

    DNA-binding proteins play an important role in most cellular processes. Therefore, it is necessary to develop an efficient predictor for identifying DNA-binding proteins based only on the sequence information of proteins. The bottleneck in constructing a useful predictor is finding suitable features that capture the characteristics of DNA-binding proteins. We applied PseAAC to DNA-binding protein identification, and PseAAC was further improved by incorporating evolutionary information through a profile-based protein representation. Finally, combined with Support Vector Machines (SVMs), a predictor called iDNAPro-PseAAC was proposed. Experimental results on an updated benchmark dataset showed that iDNAPro-PseAAC outperformed some state-of-the-art approaches, and it can achieve stable performance on an independent dataset. By using an ensemble learning approach to incorporate more negative samples (non-DNA-binding proteins) in the training process, the performance of iDNAPro-PseAAC was further improved. The web server of iDNAPro-PseAAC is available at http://bioinformatics.hitsz.edu.cn/iDNAPro-PseAAC/.

  8. Timing of emotion representation in right and left occipital region: Evidence from combined TMS-EEG.

    PubMed

    Mattavelli, Giulia; Rosanova, Mario; Casali, Adenauer G; Papagno, Costanza; Romero Lauro, Leonor J

    2016-07-01

    Neuroimaging and electrophysiological studies provide evidence of hemispheric differences in processing faces and, in particular, emotional expressions. However, the timing of emotion representation in the right and left hemisphere is still unclear. Transcranial magnetic stimulation combined with electroencephalography (TMS-EEG) was used to explore cortical responsiveness during behavioural tasks requiring processing of either identity or expression of faces. Single-pulse TMS was delivered 100 ms after face onset over the medial prefrontal cortex (mPFC) while continuous EEG was recorded using a 60-channel TMS-compatible amplifier; the right premotor cortex (rPMC) was also stimulated as a control site. The same face stimuli with neutral, happy and fearful expressions were presented in separate blocks and participants were asked to complete either a facial identity or facial emotion matching task. Analyses performed on posterior face-specific EEG components revealed that mPFC-TMS reduced the P1-N1 component. In particular, only when explicit expression processing was required, mPFC-TMS interacted with emotion type in relation to hemispheric side at different timing; the first P1-N1 component was affected in the right hemisphere whereas the later N1-P2 component was modulated in the left hemisphere. These findings support the hypothesis that the frontal cortex exerts an early influence on the occipital cortex during face processing and suggest a different timing of the right and left hemisphere involvement in emotion discrimination. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. A Hybrid approach to molecular continuum processes combiningGaussian basis functions and the discrete variable representation

    SciTech Connect

    Rescigno, Thomas N.; Horner, Daniel A.; Yip, Frank L.; McCurdy,C. William

    2005-08-29

    Gaussian basis functions, routinely employed in molecular electronic structure calculations, can be combined with numerical grid-based functions in a discrete variable representation to provide an efficient method for computing molecular continuum wave functions. This approach, combined with exterior complex scaling, obviates the need for slowly convergent single-center expansions, and allows one to study a variety of electron-molecule collision problems. The method is illustrated by computation of various bound and continuum properties of H2+.

  10. Combined-hyperbolic-inverse-power-representation of potential energy surfaces: a preliminary assessment for H3 and HO2.

    PubMed

    Varandas, A J C

    2013-02-07

    The purpose is to fit an accurate smooth function of the many-body expansion type to a multidimensional large data set using a basis-set type method. By adopting a combined-hyperbolic-inverse-power-representation for the basis, the novel approach is tested in detail for the ground electronic state of tri-hydrogen and hydroperoxyl systems, assuming that their potential energy surfaces are single-sheeted representable. It is also shown that the method can be easily applicable to potential energy curves by considering as prototypes molecular oxygen and the hydroxyl radical.

  11. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can acquire large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and memory-consuming due to the extremely large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data processing.
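
    A minimal example of the PCG half of the approach, assuming SciPy: conjugate gradients with a simple Jacobi (diagonal) preconditioner applied to a sparse symmetric positive-definite stand-in for a normal matrix. The block-based compression scheme itself is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Sparse SPD stand-in for a bundle-adjustment normal matrix.
n = 10_000
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply diag(A)^{-1}.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M)
print("converged:", info == 0, "  residual norm:", np.linalg.norm(A @ x - b))
```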

  12. Golden-angle radial sparse parallel MRI: combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI.

    PubMed

    Feng, Li; Grimm, Robert; Block, Kai Tobias; Chandarana, Hersh; Kim, Sungheon; Xu, Jian; Axel, Leon; Sodickson, Daniel K; Otazo, Ricardo

    2014-09-01

    To develop a fast and flexible free-breathing dynamic volumetric MRI technique, iterative Golden-angle RAdial Sparse Parallel MRI (iGRASP), that combines compressed sensing, parallel imaging, and golden-angle radial sampling. Radial k-space data are acquired continuously using the golden-angle scheme and sorted into time series by grouping an arbitrary number of consecutive spokes into temporal frames. An iterative reconstruction procedure is then performed on the undersampled time series where joint multicoil sparsity is enforced by applying a total-variation constraint along the temporal dimension. Required coil-sensitivity profiles are obtained from the time-averaged data. iGRASP achieved higher acceleration capability than either parallel imaging or coil-by-coil compressed sensing alone. It enabled dynamic volumetric imaging with high spatial and temporal resolution for various clinical applications, including free-breathing dynamic contrast-enhanced imaging in the abdomen of both adult and pediatric patients, and in the breast and neck of adult patients. The high performance and flexibility provided by iGRASP can improve clinical studies that require robustness to motion and simultaneous high spatial and temporal resolution. Magn Reson Med 72:707-717, 2014. © 2013 Wiley Periodicals, Inc.

  13. Sparse Representations for Limited Data Tomography (PREPRINT)

    DTIC Science & Technology

    2007-11-01

    Indexed excerpt (text fragments only): "Let αk ∈ R^J denote the k-th row of α. The K-SVD algorithm for denoising of gray-scale images essentially minimizes the objective function in […] predefined (such as wavelets) or learned (e.g., by the K-SVD algorithm [8]), as in this work. Due to its high effectiveness for tasks such as image denoising, demosaicing, and inpainting, in particular when the dictionary is learned [9, 10], here we extend this idea to tomographic reconstruction."

  14. Sparse Representation for Color Image Restoration (PREPRINT)

    DTIC Science & Technology

    2006-10-01

    Indexed excerpt (text fragments only): "[…] learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in [2]. This work puts forward ways for handling non-homogeneous noise and […] a brief description of the K-SVD-based gray-scale image denoising algorithm as proposed in [2]. Section 4 describes the novelties offered in this paper."

  15. Sparse Representation for Time-Series Classification

    DTIC Science & Technology

    2015-02-08

    Indexed excerpt (reference fragments only): "[…] classification using seismic and PIR sensors," IEEE SJ 12(6), 1709–1718 (2012); 8. G. Mallapragada, A. Ray, and X. Jin, "Symbolic dynamic filtering and language measure for behavior identification of mobile robots," IEEE TSMC 42(3), 647–659 (2012); 9. S. Bahrampour, A. Ray, S. Sarkar, T. Damarla, and N. […], pp. 521–528 (2011); 16. S. Kim and E. Xing, "Tree-guided group lasso for multi-task regression with structured sparsity," arXiv:0909.1373 (2009).

  16. Sparse Representation of Smooth Linear Operators

    DTIC Science & Technology

    1990-08-01

    Indexed excerpt (text fragments only): "[…] received study by many authors, resulting in constructions with a variety of properties. Meyer [13] constructed orthonormal wavelets for which h ∈ C1(R) […] Lemmas 2.3 and 2.4; in fact, substitution of the finite sums which determine the elements of UTUT for the integrals in those lemmas yields the […] for some k the orthogonal matrices U1, ..., Ul defined in Section 4.1 have been computed (l = log2(n/k)). We now present a procedure for computation of UTUT."

  17. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

    Indexed excerpt (reference and biography fragments only): "[38] V. Roth, 'The generalized LASSO,' IEEE Trans. Neural Netw., vol. 15, no. 1, pp. 16–28, Jan. 2004; [39] J. Tropp and A. Gilbert, 'Signal recovery […]'; […] electrical engineering from the University of Victoria, Victoria, BC, Canada, in 2006. Currently, she is a Ph.D. student in the Department of […]."

  18. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the ℓ0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization
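
    A plain NumPy sketch of the forward half only (ORMP/OMP-style greedy selection with re-fitting over the current support); the backward-elimination pass and the partitioned-matrix-inverse speed-ups described above are omitted, and the data are synthetic.

```python
import numpy as np

def greedy_sparse_ls(A, y, k):
    """Forward greedy sparse least squares: pick k columns of A that best explain y."""
    support, residual, coef = [], y.copy(), None
    for _ in range(k):
        scores = np.abs(A.T @ residual)
        scores[support] = -np.inf                      # never re-select a column
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # re-fit on support
        residual = y - A[:, support] @ coef
    return np.array(support), coef

# Usage: identify the 3 active columns of a random design.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 120))
x = np.zeros(120)
x[[7, 30, 95]] = [1.0, -2.0, 1.5]
y = A @ x + 0.01 * rng.normal(size=50)
support, coef = greedy_sparse_ls(A, y, k=3)
print("selected columns:", np.sort(support))
```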

  19. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-05-27

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A combined model of sensory and cognitive representations underlying tonal expectations in music: from audio signals to behavior.

    PubMed

    Collins, Tom; Tillmann, Barbara; Barrett, Frederick S; Delbé, Charles; Janata, Petr

    2014-01-01

    Listeners' expectations for melodies and harmonies in tonal music are perhaps the most studied aspect of music cognition. Long debated has been whether faster response times (RTs) to more strongly primed events (in a music theoretic sense) are driven by sensory or cognitive mechanisms, such as repetition of sensory information or activation of cognitive schemata that reflect learned tonal knowledge, respectively. We analyzed over 300 stimuli from 7 priming experiments comprising a broad range of musical material, using a model that transforms raw audio signals through a series of plausible physiological and psychological representations spanning a sensory-cognitive continuum. We show that RTs are modeled, in part, by information in periodicity pitch distributions, chroma vectors, and activations of tonal space--a representation on a toroidal surface of the major/minor key relationships in Western tonal music. We show that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation. While tonal space variables explained more of the variation in RTs than did periodicity pitch variables, suggesting a greater contribution of cognitive influences to tonal expectation, a stepwise selection model contained variables from both representations and successfully explained the pattern of RTs across stimulus categories in 4 of the 7 experiments. The addition of closure--a cognitive representation of a specific syntactic relationship--succeeded in explaining results from all 7 experiments. We conclude that multiple representational stages along a sensory-cognitive continuum combine to shape tonal expectations in music. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  1. Structured sparse priors for image classification.

    PubMed

    Srinivas, Umamahesh; Suo, Yuanming; Dao, Minh; Monga, Vishal; Tran, Trac D

    2015-06-01

    Model-based compressive sensing (CS) exploits the structure inherent in sparse signals for the design of better signal recovery algorithms. This information about structure is often captured in the form of a prior on the sparse coefficients, with the Laplacian being the most common such choice (leading to l1-norm minimization). Recent work has exploited the discriminative capability of sparse representations for image classification by employing class-specific dictionaries in the CS framework. Our contribution is a logical extension of these ideas into structured sparsity for classification. We introduce the notion of discriminative class-specific priors in conjunction with class-specific dictionaries, specifically the spike-and-slab prior widely applied in Bayesian sparse regression. Significantly, the proposed framework reduces the demand for the abundant training image samples usually necessary for the success of sparsity-based classification schemes. We demonstrate this practical benefit of our approach in important applications, such as face recognition and object categorization.
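
    As context for sparsity-based classification with class-specific dictionaries, the sketch below codes a test sample on each class dictionary with an l1 penalty and assigns the label with the smallest reconstruction residual. It illustrates the generic SRC decision rule rather than the spike-and-slab prior proposed above; the regularization value is a placeholder.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(sample, class_dicts, alpha=0.05):
    """Assign `sample` to the class whose dictionary reconstructs it best.

    class_dicts : {label: array of shape (n_features, n_atoms)}, e.g. the
    training samples of each class stacked column-wise.
    """
    residuals = {}
    for label, D in class_dicts.items():
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(D, sample)                      # sparse code on this class
        residuals[label] = np.linalg.norm(sample - D @ coder.coef_)
    return min(residuals, key=residuals.get)
```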

  2. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided waves obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition from LDVs is generally a slow operation because the wave propagation to be recorded must be repeated for each point measurement and the initial conditions must be restored between measurements. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as being the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed using a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave
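
    The sketch below shows the generic compressed-sensing step described above: given a model matrix whose columns are candidate-source responses over the full grid, the sparse source excitations are estimated from a few measured points with an l1 penalty and the full wavefield is then re-synthesized. Constructing the model matrix from Lamb-wave dispersion relations is outside the sketch, and the solver and penalty value are stand-ins.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_wavefield(Psi, measured_idx, measurements, alpha=1e-3):
    """Recover a full wavefield snapshot from sparse point measurements.

    Psi          : (n_grid_points, n_candidate_sources) model matrix;
                   column j is the predicted response of the whole grid
                   to candidate source j (assumed given).
    measured_idx : indices of the grid points actually measured.
    measurements : values recorded at those points.
    """
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(Psi[measured_idx, :], measurements)   # l1 fit on measured rows
    return Psi @ coder.coef_                        # wavefield on the full grid
```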

  3. Predicting siRNA efficacy based on multiple selective siRNA representations and their combination at score level

    NASA Astrophysics Data System (ADS)

    He, Fei; Han, Ye; Gong, Jianting; Song, Jiazhi; Wang, Han; Li, Yanwen

    2017-03-01

    Small interfering RNAs (siRNAs) may induce targeted gene knockdown, and the gene-silencing effectiveness relies on the efficacy of the siRNA. Therefore, the task of this paper is to construct an effective siRNA prediction method. In our work, we try to describe siRNA from both quantitative and qualitative aspects. For quantitative analyses, we form four groups of effective features, including nucleotide frequencies, the thermodynamic stability profile, the thermodynamics of siRNA-mRNA interaction, and mRNA-related features, as a new mixed representation, in which the thermodynamics of siRNA-mRNA interaction is introduced to siRNA efficacy prediction for the first time, to the best of our knowledge. An F-score based feature selection is then employed to investigate the contribution of each feature and remove weakly relevant features. Meanwhile, we encode the siRNA sequence and existing empirical design rules as a qualitative siRNA representation. These two kinds of siRNA representations are combined to predict siRNA efficacy by Support Vector Regression (SVR) at the score level. The experimental results indicate that our method can select features with powerful discriminative ability and make the two kinds of siRNA representations work at full capacity. The prediction results also demonstrate that our method can outperform other popular siRNA efficacy prediction algorithms.

  4. Predicting siRNA efficacy based on multiple selective siRNA representations and their combination at score level

    PubMed Central

    He, Fei; Han, Ye; Gong, Jianting; Song, Jiazhi; Wang, Han; Li, Yanwen

    2017-01-01

    Small interfering RNAs (siRNAs) may induce targeted gene knockdown, and the gene-silencing effectiveness relies on the efficacy of the siRNA. Therefore, the task of this paper is to construct an effective siRNA prediction method. In our work, we try to describe siRNA from both quantitative and qualitative aspects. For quantitative analyses, we form four groups of effective features, including nucleotide frequencies, the thermodynamic stability profile, the thermodynamics of siRNA-mRNA interaction, and mRNA-related features, as a new mixed representation, in which the thermodynamics of siRNA-mRNA interaction is introduced to siRNA efficacy prediction for the first time, to the best of our knowledge. An F-score based feature selection is then employed to investigate the contribution of each feature and remove weakly relevant features. Meanwhile, we encode the siRNA sequence and existing empirical design rules as a qualitative siRNA representation. These two kinds of siRNA representations are combined to predict siRNA efficacy by Support Vector Regression (SVR) at the score level. The experimental results indicate that our method can select features with powerful discriminative ability and make the two kinds of siRNA representations work at full capacity. The prediction results also demonstrate that our method can outperform other popular siRNA efficacy prediction algorithms. PMID:28317874
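
    The score-level combination described in the two records above can be illustrated with two Support Vector Regression models, one per representation, whose predicted efficacy scores are then fused. The sketch below uses a plain average as the fusion rule, which is an assumption; feature extraction and the actual combination weights are not part of it.

```python
from sklearn.svm import SVR

def fit_score_level_fusion(X_quant, X_qual, y_efficacy):
    """Train one SVR per siRNA representation (quantitative / qualitative)."""
    svr_quant = SVR(kernel="rbf").fit(X_quant, y_efficacy)
    svr_qual = SVR(kernel="rbf").fit(X_qual, y_efficacy)
    return svr_quant, svr_qual

def predict_fused(svr_quant, svr_qual, X_quant, X_qual):
    """Fuse the two regressors at the score level (simple average here)."""
    return 0.5 * (svr_quant.predict(X_quant) + svr_qual.predict(X_qual))
```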

  5. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis have highlighted the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Among various machine-learning techniques, sparse regression models have proved effective in handling high-dimensional data with a small number of training samples, especially in medical problems. Meanwhile, deep learning methods have achieved great success by outperforming state-of-the-art methods in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with a different value of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., the clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we thus call the 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature.
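
    A minimal sketch of the ensemble idea, under the assumption that the downstream model can be replaced by a simple classifier: several l1-regularized (Lasso) regressors, one per regularization value, predict a clinical score, and their stacked outputs form the target-level representation fed to the final decision model. The logistic regression stands in for the convolutional network described above.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

def fit_ensemble_sparse_regression(X, y_score, y_label, alphas=(0.01, 0.1, 1.0)):
    """Train one Lasso per regularization value, then a classifier on
    the stacked predictions (the 'target-level representations')."""
    regressors = [Lasso(alpha=a, max_iter=10000).fit(X, y_score) for a in alphas]
    Z = np.column_stack([m.predict(X) for m in regressors])
    clf = LogisticRegression(max_iter=1000).fit(Z, y_label)
    return regressors, clf

def predict_ensemble(regressors, clf, X):
    Z = np.column_stack([m.predict(X) for m in regressors])
    return clf.predict(Z)
```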

  6. Multimodal visual dictionary learning via heterogeneous latent semantic sparse coding

    NASA Astrophysics Data System (ADS)

    Li, Chenxiao; Ding, Guiguang; Zhou, Jile; Guo, Yuchen; Liu, Qiang

    2014-11-01

    Visual dictionary learning as a crucial task of image representation has gained increasing attention. Specifically, sparse coding is widely used due to its intrinsic advantage. In this paper, we propose a novel heterogeneous latent semantic sparse coding model. The central idea is to bridge heterogeneous modalities by capturing their common sparse latent semantic structure so that the learned visual dictionary is able to describe both the visual and textual properties of training data. Experiments on both image categorization and retrieval tasks demonstrate that our model shows superior performance over several recent methods such as K-means and Sparse Coding.

  7. Estimation of white matter fiber parameters from compressed multiresolution diffusion MRI using sparse Bayesian learning.

    PubMed

    Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe

    2017-06-29

    We present a sparse Bayesian unmixing algorithm BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of sparse signals, the fiber orientations and volume fractions. The data is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with a lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
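
    A discrete-time software sketch of the thresholding-and-competition dynamics just described is given below (soft-threshold variant); it is only an illustration of the LCA idea, not the patented analog circuit, and the step size, threshold and iteration count are placeholders.

```python
import numpy as np

def lca_sparse_code(Phi, x, lam=0.1, tau=0.01, dt=0.001, n_steps=500):
    """Locally competitive algorithm: feed-forward drive, soft threshold,
    and lateral inhibition among active nodes.

    Phi : (n_inputs, n_atoms) overcomplete dictionary (columns ~ unit norm).
    """
    b = Phi.T @ x                               # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                  # internal node states
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                             # thresholded (sparse) code
        u += (dt / tau) * (b - u - G @ a)       # nodes compete via inhibition
    return soft(u)
```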

  9. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input.

  10. Visible and invisible stimulus parts integrate into global object representations as revealed by combining monocular and binocular rivalry.

    PubMed

    Vergeer, Mark; Moors, Pieter; Wagemans, Johan; van Ee, Raymond

    2016-09-01

    Our visual system faces the challenging task to construct integrated visual representations from the visual input projected on our retinae. Previous research has provided mixed evidence as to whether visual awareness of the stimulus parts is required for such integration to occur. Here, we address this issue by taking a novel approach in which we combine a monocular rivalry stimulus (i.e., a bistable rotating cylinder) with binocular rivalry. The results of Experiment 1 show that in a rivalry condition, where one half of the cylinder is perceptually suppressed, significantly more perceptual switches occur that are consistent with visual integration of the whole cylinder than occur in a control condition, where only half of the cylinder is presented at a time and the presentation of the two images is physically alternated. In Experiment 2, stimulation in the observer's dominant eye was kept dominant by presenting the half cylinder in this eye at higher contrast and by surrounding it with a flickering context. Results show that the strong convexity bias that was found in a control condition, where no stimulus was presented in the suppressed eye, almost completely disappears when the unseen half is presented in the suppressed eye, indicating that both halves visually integrate and, subsequently, compete for convexity. These findings provide evidence that unseen visual information is biased towards a representation that is congruent with the current visible representation and, hence, that principles of perceptual organization also apply to parts of the visual input that remain unseen by the observer.

  11. Vectorized Sparse Elimination.

    DTIC Science & Technology

    1984-03-01

    Grids," Proc. 6th Symposium on Reservoir Simulation , New Orleans, Feb. 1-2, 1982, pp. 489-506. [51 Arya, S., and D. A. Calahan, "Optimal Scheduling of...of Computer Architecture on Direct Sparse Matrix Routines in Petroleum Reservoir Simulation ," Sparse Matrix Symposium, Fairfield Glade, TE, October

  12. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for combining the L0 and L1 regularized representations to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474

  13. Signal Separation of Helicopter Radar Returns Using Wavelet-Based Sparse Signal Optimisation

    DTIC Science & Technology

    2016-10-01

    A novel wavelet-based sparse signal representation technique is used to separate the main and tail rotor blade components of a... separation techniques cannot be applied. A sparse signal representation technique is now proposed for this problem with the tunable Q wavelet transform

  14. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
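
    To make the scheme above concrete, the following is a rough scikit-learn sketch under stated assumptions: a dictionary is learned per fault class from vibration segments, the atoms are merged into one redundant dictionary, the sparse activations of each segment serve as features, and an LDA classifier is trained on them. Shift-invariant coding and the segmentation of raw vibration signals are not shown, and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fault_diagnosis(signals_by_class, test_segments, n_atoms=32):
    """Sparse-coding feature extraction for fault diagnosis (sketch).

    signals_by_class : {fault_label: array of shape (n_segments, seg_len)}
    test_segments    : array of shape (n_test, seg_len)
    """
    # learn one sub-dictionary per fault class, then merge into a
    # redundant dictionary
    atoms = []
    for label, segments in signals_by_class.items():
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
        dl.fit(segments)
        atoms.append(dl.components_)
    dictionary = np.vstack(atoms)

    # sparse activations of each segment are the diagnostic features
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
    feats, labels = [], []
    for label, segments in signals_by_class.items():
        feats.append(coder.transform(segments))
        labels.extend([label] * len(segments))

    clf = LinearDiscriminantAnalysis().fit(np.vstack(feats), labels)
    return clf.predict(coder.transform(test_segments))
```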

  15. Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis: Distributed dictionary representation.

    PubMed

    Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza

    2017-03-31

    Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
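
    A small sketch of the core DDR computation as described above, assuming a pretrained word-embedding lookup is available (loading it is not shown): both the concept dictionary and a span of text are represented by the mean of their word vectors, and the measurement is their cosine similarity.

```python
import numpy as np

def ddr_score(dictionary_words, text_words, word_vectors):
    """Distributed Dictionary Representation similarity (illustrative).

    word_vectors : dict mapping a word to a 1-D embedding array
                   (assumed to come from some pretrained model).
    """
    def mean_vector(words):
        vecs = [word_vectors[w] for w in words if w in word_vectors]
        return np.mean(vecs, axis=0)

    d = mean_vector(dictionary_words)   # distributed dictionary representation
    t = mean_vector(text_words)         # representation of the text span
    return float(d @ t / (np.linalg.norm(d) * np.linalg.norm(t)))
```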

  16. Combined numerical and linguistic knowledge representation and its application to medical diagnosis

    NASA Astrophysics Data System (ADS)

    Meesad, Phayung; Yen, Gary G.

    2002-07-01

    In this study, we propose a novel hybrid intelligent system (HIS) which provides a unified integration of numerical and linguistic knowledge representations. The proposed HIS is a hierarchical integration of an incremental learning fuzzy neural network (ILFN) and a linguistic model, i.e., a fuzzy expert system, optimized via a genetic algorithm (GA). The ILFN is a self-organizing network with the capability of fast, one-pass, online, and incremental learning. The linguistic model is constructed based on knowledge embedded in the trained ILFN or provided by the domain expert. The knowledge captured from the low-level ILFN can be mapped to the higher-level linguistic model and vice versa. The GA is applied to optimize the linguistic model to maintain high accuracy, comprehensibility, completeness, compactness, and consistency. Once the system is completely constructed, it can incrementally learn new information in both numerical and linguistic forms. To evaluate the system's performance, the well-known benchmark Wisconsin breast cancer data set was studied for an application to medical diagnosis. The simulation results show that the proposed HIS performs better than the individual standalone systems. The comparison results show that the extracted linguistic rules are competitive with or even superior to those of some well-known methods.

  17. Improving mass detection using combined feature representations from projection views and reconstructed volume of DBT and boosting based classification with feature selection

    NASA Astrophysics Data System (ADS)

    Kim, Dae Hoe; Kim, Seong Tae; Ro, Yong Man

    2015-11-01

    In digital breast tomosynthesis (DBT), the image characteristics of projection views and the reconstructed volume are different, and both offer advantages for detecting breast masses, e.g., the reconstructed volume mitigates tissue overlap, while projection views have fewer reconstruction blur artifacts. In this paper, an improved mass detection method is proposed that uses combined feature representations from projection views and the reconstructed volume in DBT. To take advantage of the complementary image characteristics of both data types, combined feature representations are extracted from projection views and the reconstructed volume concurrently. An indirect region-of-interest segmentation in projection views, which projects the volume-of-interest in the reconstructed volume into the corresponding projection views, is proposed to extract the combined feature representations. In addition, boosting-based classification with feature selection is employed to select effective feature representations among a large number of combined feature representations and to reduce false positives. Experiments have been conducted on a clinical data set that contains malignant masses. Experimental results demonstrate that the proposed mass detection can achieve high sensitivity with a small number of false positives. In addition, the experimental results demonstrate that the selected feature representations for classifying masses come complementarily from both projection views and the reconstructed volume.

  18. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in

  19. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F.; Neese, Frank

    2015-07-01

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in

  20. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    SciTech Connect

    Pinski, Peter; Riplinger, Christoph; Neese, Frank; Valeev, Edward F.

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
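
    As a plain-Python illustration of the sparse-map idea described in the three records above (a relation between two index sets, generalizing the compressed-sparse-row pattern), the sketch below stores a map as a dict from a source index to the set of connected target indices and implements the elementary operations mentioned: inversion, chaining, and intersection. It is not the authors' code library.

```python
from collections import defaultdict

# A "sparse map" L(a -> b) is stored as {a: set of connected b indices}.

def invert(map_ab):
    """Turn L(a -> b) into L(b -> a)."""
    map_ba = defaultdict(set)
    for a, bs in map_ab.items():
        for b in bs:
            map_ba[b].add(a)
    return dict(map_ba)

def chain(map_ab, map_bc):
    """Compose L(a -> b) with L(b -> c) to obtain L(a -> c)."""
    map_ac = defaultdict(set)
    for a, bs in map_ab.items():
        for b in bs:
            map_ac[a] |= map_bc.get(b, set())
    return {a: cs for a, cs in map_ac.items() if cs}

def intersect(map1, map2):
    """Keep, for each source index, only targets present in both maps."""
    common = map1.keys() & map2.keys()
    return {k: map1[k] & map2[k] for k in common if map1[k] & map2[k]}
```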

  1. Re-Examining Evidence for the Use of Independent Relational Representations during Conceptual Combination

    ERIC Educational Resources Information Center

    Gagne, Christina L.; Spalding, Thomas L.; Ji, Hongbo

    2005-01-01

    In a recent study of conceptual combination, Estes (2003) presented evidence for the priming of relational information in the absence of shared constituents between the prime and target (e.g., "pancake spatula" was interpreted more quickly following "bacon tongs" than following "city riots"). He argued that these data support the view that…

  2. Combining Multiple External Representations and Refutational Text: An Intervention on Learning to Interpret Box Plots

    ERIC Educational Resources Information Center

    Lem, Stephanie; Kempen, Goya; Ceulemans, Eva; Onghena, Patrick; Verschaffel, Lieven; Van Dooren, Wim

    2015-01-01

    Box plots are frequently misinterpreted and educational attempts to correct these misinterpretations have not been successful. In this study, we used two instructional techniques that seemed powerful to change the misinterpretation of the area of the box in box plots, both separately and in combination, leading to three experimental conditions,…

  3. Combining Multiple External Representations and Refutational Text: An Intervention on Learning to Interpret Box Plots

    ERIC Educational Resources Information Center

    Lem, Stephanie; Kempen, Goya; Ceulemans, Eva; Onghena, Patrick; Verschaffel, Lieven; Van Dooren, Wim

    2015-01-01

    Box plots are frequently misinterpreted and educational attempts to correct these misinterpretations have not been successful. In this study, we used two instructional techniques that seemed powerful to change the misinterpretation of the area of the box in box plots, both separately and in combination, leading to three experimental conditions,…

  4. Re-Examining Evidence for the Use of Independent Relational Representations during Conceptual Combination

    ERIC Educational Resources Information Center

    Gagne, Christina L.; Spalding, Thomas L.; Ji, Hongbo

    2005-01-01

    In a recent study of conceptual combination, Estes (2003) presented evidence for the priming of relational information in the absence of shared constituents between the prime and target (e.g., "pancake spatula" was interpreted more quickly following "bacon tongs" than following "city riots"). He argued that these data support the view that…

  5. Evolving sparse stellar populations

    NASA Astrophysics Data System (ADS)

    Bruzual, Gustavo; Gladis Magris, C.; Hernández-Pérez, Fabiola

    2017-03-01

    We examine the role that stochastic fluctuations in the IMF and in the number of interacting binaries have on the spectro-photometric properties of sparse stellar populations as a function of age and metallicity.

  6. Multichannel sparse spike inversion

    NASA Astrophysics Data System (ADS)

    Pereg, Deborah; Cohen, Israel; Vassiliou, Anthony A.

    2017-10-01

    In this paper, we address the problem of sparse multichannel seismic deconvolution. We introduce multichannel sparse spike inversion as an iterative procedure, which deconvolves the seismic data and recovers the Earth's two-dimensional reflectivity image, while taking into consideration the relations between spatially neighboring traces. We demonstrate the improved performance of the proposed algorithm and its robustness to noise, compared to a competitive single-channel algorithm, through simulations and real seismic data examples.
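
    For orientation, a single-channel sparse-spike deconvolution can be sketched as below: the (assumed known) source wavelet is turned into a convolution matrix and a sparse reflectivity series is recovered with an l1 penalty. The multichannel coupling between neighbouring traces that the record above introduces is not modelled here, and the penalty value is a placeholder.

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

def sparse_spike_deconvolution(trace, wavelet, alpha=0.01):
    """Recover a sparse reflectivity series from one seismic trace."""
    n = len(trace)
    first_col = np.zeros(n)
    first_col[:len(wavelet)] = wavelet
    W = toeplitz(first_col, np.zeros(n))      # trace ~ W @ reflectivity
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(W, trace)
    return coder.coef_                        # sparse spike estimate
```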

  7. Local structure preserving sparse coding for infrared target recognition

    PubMed Central

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2017-01-01

    Sparse coding performs well in image classification. However, robust target recognition requires a lot of comprehensive template images and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of the anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which just needs several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in the target detection with scene, shape and occlusions variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824

  8. Local structure preserving sparse coding for infrared target recognition.

    PubMed

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2017-01-01

    Sparse coding performs well in image classification. However, robust target recognition requires a lot of comprehensive template images and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of the anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which just needs several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in the target detection with scene, shape and occlusions variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions.

  9. A sparse Bayesian learning based scheme for multi-movement recognition using sEMG.

    PubMed

    Ding, Shuai; Wang, Liang

    2016-03-01

    This paper proposes a feature extraction scheme based on sparse representation, considering the non-stationary property of surface electromyography (sEMG). Sparse Bayesian learning was introduced to extract the feature with optimal class separability to improve the recognition accuracy of multi-movement patterns. The extracted feature, the sparse representation coefficients (SRC), represented the time-varying characteristics of sEMG effectively because of the compressibility (or weak sparsity) of the signal in some transformed domains. We investigated the effect of the proposed feature by comparing it with fourteen other individual features in offline recognition. The results demonstrated that the proposed feature reveals important dynamic information in the sEMG signals. The multi-feature sets formed by the SRC and another single feature yielded superior recognition accuracy compared with the single features. The best average recognition accuracy of 94.33% was obtained by using an SVM classifier with the multi-feature set combining the feature SRC, Willison amplitude (WAMP), wavelength (WL) and the coefficients of the fourth-order autoregressive model (ARC4) via a multiple kernel learning framework. The proposed feature extraction scheme (known as SRC + WAMP + WL + ARC4) is a promising method for multi-movement recognition with high accuracy.

  10. Automatic anatomy recognition of sparse objects

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Udupa, Jayaram K.; Odhner, Dewey; Wang, Huiqian; Tong, Yubing; Torigian, Drew A.

    2015-03-01

    A general body-wide automatic anatomy recognition (AAR) methodology was proposed in our previous work based on hierarchical fuzzy models of multitudes of objects which was not tied to any specific organ system, body region, or image modality. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) if the models are based on the object's exact geometric representations. The challenges stem mainly from the variation in sparse objects in their shape, topology, geographic layout, and relationship to other objects. That led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the underlying improved methodology which includes 5 steps: (a) Collecting image data from a specific population group G and body region Β and delineating in these images the objects in Β to be modeled; (b) Building a super form, S-form, for each object O in Β; (c) Refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) Recognizing objects in Β using the S*-form; (e) Defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations based on 50 3D computed tomography (CT) image sets in the thorax on four sparse objects indicate that substantially improved performance (FPVF~2%, FNVF~10%, and success where the previous approach failed) can be achieved using the new approach.

  11. Infrared and visible image fusion method based on saliency detection in sparse domain

    NASA Astrophysics Data System (ADS)

    Liu, C. H.; Qi, Y.; Ding, W. R.

    2017-06-01

    Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to perform the fusion. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.

  12. On the decoding of intracranial data using sparse orthonormalized partial least squares

    NASA Astrophysics Data System (ADS)

    van Gerven, Marcel A. J.; Chao, Zenas C.; Heskes, Tom

    2012-04-01

    It has recently been shown that robust decoding of motor output from electrocorticogram signals in monkeys over prolonged periods of time has become feasible (Chao et al 2010 Front. Neuroeng. 3 1-10 ). In order to achieve these results, multivariate partial least-squares (PLS) regression was used. PLS uses a set of latent variables, referred to as components, to model the relationship between the input and the output data and is known to handle high-dimensional and possibly strongly correlated inputs and outputs well. We developed a new decoding method called sparse orthonormalized partial least squares (SOPLS) which was tested on a subset of the data used in Chao et al (2010) (freely obtainable from neurotycho.org (Nagasaka et al 2011 PLoS ONE 6 e22561)). We show that SOPLS reaches the same decoding performance as PLS using just two sparse components which can each be interpreted as encoding particular combinations of motor parameters. Furthermore, the sparse solution afforded by the SOPLS model allowed us to show the functional involvement of beta and gamma band responses in premotor and motor cortex for predicting the first component. Based on the literature, we conjecture that this first component is involved in the encoding of movement direction. Hence, the sparse and compact representation afforded by the SOPLS model facilitates interpretation of which spectral, spatial and temporal components are involved in successful decoding. These advantages make the proposed decoding method an important new tool in neuroprosthetics.

  13. Local Sparse Structure Denoising for Low-Light-Level Image.

    PubMed

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa

    2015-12-01

    Sparse and redundant representations perform well in image denoising. However, sparsity-based methods fail to denoise low-light-level (LLL) images because of heavy and complex noise. They consider sparsity on image patches independently and tend to lose the texture structures. To suppress noises and maintain textures simultaneously, it is necessary to embed noise invariant features into the sparse decomposition process. We, therefore, used a local structure preserving sparse coding (LSPSc) formulation to explore the local sparse structures (both the sparsity and local structure) in image. It was found that, with the introduction of spatial local structure constraint into the general sparse coding algorithm, LSPSc could improve the robustness of sparse representation for patches in serious noise. We further used a kernel LSPSc (K-LSPSc) formulation, which extends LSPSc into the kernel space to weaken the influence of linear structure constraint in nonlinear data. Based on the robust LSPSc and K-LSPSc algorithms, we constructed a local sparse structure denoising (LSSD) model for LLL images, which was demonstrated to give high performance in the natural LLL images denoising, indicating that both the LSPSc- and K-LSPSc-based LSSD models have the stable property of noise inhibition and texture details preservation.

  14. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
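
    A toy Kanerva-style sketch of the associative read/write mechanism described above is given below: binary patterns are written into the counters of all hard locations whose random addresses lie within a Hamming radius of the write address, and a read sums those counters and takes a bit-wise majority vote. The dimension, location count and radius are placeholders chosen only for illustration.

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal sparse distributed memory sketch (binary addresses and data)."""

    def __init__(self, n_locations=2000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))  # hard locations
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # all hard locations within Hamming distance `radius` of the cue
        return np.count_nonzero(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # data: numpy array of 0/1 bits; stored as a bipolar counter update
        self.counters[self._active(address)] += 2 * data - 1

    def read(self, address):
        # bit-wise majority vote over the counters of the active locations
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)
```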

  15. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    SciTech Connect

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng

    2014-06-01

    Purpose: The aim of this study was to extract liver structures from daily cone-beam CT (CBCT) images automatically. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were used as the training dataset for probabilistic atlas and shape prior model construction. Firstly, a probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with the L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy, and then the initial liver region was converted to a surface mesh, which was registered with the shape model in which the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, the manually segmented contours were first converted into meshes, and then an arbitrary patient dataset was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with the other patient data for iterative construction, removing the bias caused by arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparison with the manual ones. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images. Conclusion: The experiment demonstrated

  16. Symbol Systems and Pictorial Representations

    NASA Astrophysics Data System (ADS)

    Diederich, Joachim; Wright, Susan

    All problem-solvers are subject to the same ultimate constraints -- limitations on space, time, and materials (Minsky, 1985). Minsky introduces two principles: (1) Economics: Every intelligence must develop symbol-systems for representing objects, causes and goals, and (2) Sparseness: Every evolving intelligence will eventually encounter certain very special ideas -- e.g., about arithmetic, causal reasoning, and economics -- because these particular ideas are very much simpler than other ideas with similar uses. An extra-terrestrial intelligence (ETI) would have developed symbol systems to express these ideas and would have the capacity for multi-modal processing. Vakoch (1998) states that "...ETI may rely significantly on other sensory modalities (than vision). Particularly useful representations would be ones that may be intelligible through more than one sensory modality. For instance, the information used to create a three-dimensional representation of an object might be intelligible to ETI heavily reliant on either visual or tactile sensory processes." The cross-modal representations Vakoch (1998) describes and the symbol systems Minsky (1985) proposes are called "metaphors" when combined. Metaphors allow for highly efficient communication. Metaphors are compact, condensed ways of expressing an idea: words, sounds, gestures or images are used in novel ways to refer to something they do not literally denote. Due to the importance of Minsky's "economics" principle, it is therefore possible that a message heavily relies on metaphors.

  17. Haptic fMRI: combining functional neuroimaging with haptics for studying the brain's motor control representation.

    PubMed

    Menon, Samir; Brantner, Gerald; Aholt, Chris; Kay, Kendrick; Khatib, Oussama

    2013-01-01

    A challenging problem in motor control neuroimaging studies is the inability to perform complex human motor tasks given the Magnetic Resonance Imaging (MRI) scanner's disruptive magnetic fields and confined workspace. In this paper, we propose a novel experimental platform that combines Functional MRI (fMRI) neuroimaging, haptic virtual simulation environments, and an fMRI-compatible haptic device for real-time haptic interaction across the scanner workspace (above torso, ∼0.65×0.40×0.20 m^3). We implement this Haptic fMRI platform with a novel haptic device, the Haptic fMRI Interface (HFI), and demonstrate its suitability for motor neuroimaging studies. HFI has three degrees-of-freedom (DOF), uses electromagnetic motors to enable high-fidelity haptic rendering (>350 Hz), integrates radio frequency (RF) shields to prevent electromagnetic interference with fMRI (temporal SNR >100), and is kinematically designed to minimize currents induced by the MRI scanner's magnetic field during motor displacement (<2 cm). HFI possesses uniform inertial and force transmission properties across the workspace, and has low friction (0.05-0.30 N). HFI's RF noise levels, in addition, are within a 3 Tesla fMRI scanner's baseline noise variation (∼0.85±0.1%). Finally, HFI is haptically transparent and does not interfere with human motor tasks (tested for 0.4 m reaches). By allowing fMRI experiments involving complex three-dimensional manipulation with haptic interaction, Haptic fMRI enables, for the first time, non-invasive neuroscience experiments involving interactive motor tasks, object manipulation, tactile perception, and visuo-motor integration.

  18. A unified approach to sparse signal processing

    NASA Astrophysics Data System (ADS)

    Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Jonathon

    2012-12-01

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally
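
    As one concrete example of the spectral-estimation family surveyed above (Pisarenko/MUSIC), the sketch below computes a toy MUSIC pseudospectrum for a sum of complex exponentials in noise: a sample correlation matrix is built from overlapping windows, its smallest-eigenvalue eigenvectors form the noise subspace, and peaks of the pseudospectrum mark the sparse set of frequencies. Window length, grid size and the rest of the setup are illustrative assumptions.

```python
import numpy as np

def music_pseudospectrum(x, n_sources, m=64, n_grid=2048):
    """Toy MUSIC frequency estimator for a sum of complex exponentials."""
    N = len(x)
    # sample correlation matrix from overlapping length-m windows
    X = np.array([x[i:i + m] for i in range(N - m)])
    R = (X.conj().T @ X) / (N - m)
    # noise subspace: eigenvectors of the m - n_sources smallest eigenvalues
    _, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvecs[:, : m - n_sources]
    freqs = np.linspace(-0.5, 0.5, n_grid, endpoint=False)
    spectrum = np.empty(n_grid)
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(m))       # steering vector
        spectrum[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return freqs, spectrum
```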

  19. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments involving the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries (Db6 wavelets, Sym4 wavelets, and cubic B-spline functions) can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. Discrete cosine functions likewise successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
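    As a hedged illustration of the l1-regularized formulation (plain iterative soft-thresholding stands in for the authors' SpaRSA solver, and the transfer matrix, dictionary, and forces below are synthetic assumptions), the following sketch solves min_q 0.5*||H D q - y||² + λ||q||₁ and reads off the few active basis functions.

    ```python
    import numpy as np

    def ista(A, y, lam, n_iter=500):
        """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(1)
    n_t, n_atoms = 400, 80
    H = 0.05 * rng.standard_normal((n_t, n_t))          # assumed transfer matrix
    D = rng.standard_normal((n_t, n_atoms))             # assumed force dictionary
    q_true = np.zeros(n_atoms)
    q_true[[10, 45]] = [3.0, -2.0]                      # two "impact" coefficients
    y = H @ (D @ q_true) + 0.01 * rng.standard_normal(n_t)   # noisy operational response
    q_hat = ista(H @ D, y, lam=1.0)
    print("dominant basis functions:", np.flatnonzero(np.abs(q_hat) > 0.1))
    ```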

  20. Slowness and sparseness have diverging effects on complex cell learning.

    PubMed

    Lies, Jörn-Philipp; Häfner, Ralf M; Bethge, Matthias

    2014-03-01

    Following earlier studies which showed that a sparse coding principle may explain the receptive field properties of complex cells in primary visual cortex, it has been concluded that the same properties may be equally derived from a slowness principle. In contrast to this claim, we here show that slowness and sparsity drive the representations towards substantially different receptive field properties. To do so, we present complete sets of basis functions learned with slow subspace analysis (SSA) in case of natural movies as well as translations, rotations, and scalings of natural images. SSA directly parallels independent subspace analysis (ISA) with the only difference that SSA maximizes slowness instead of sparsity. We find a large discrepancy between the filter shapes learned with SSA and ISA. We argue that SSA can be understood as a generalization of the Fourier transform where the power spectrum corresponds to the maximally slow subspace energies in SSA. Finally, we investigate the trade-off between slowness and sparseness when combined in one objective function.

  1. Quantification of ¹H-MRS signals based on sparse metabolite profiles in the time-frequency domain.

    PubMed

    Parto Dezfouli, Mohammad Ali; Parto Dezfouli, Mohsen; Ahmadian, Alireza; Frangi, Alejandro F; Esmaeili Rad, Melika; Saligheh Rad, Hamidreza

    2017-02-01

    MRS is an analytical approach used for both quantitative and qualitative analysis of human body metabolites. The accurate and robust quantification capability of proton MRS (¹H-MRS) enables the accurate estimation of living tissue metabolite concentrations. However, such methods can be efficiently employed for quantification of metabolite concentrations only if the overlapping nature of metabolites, existing static field inhomogeneity and low signal-to-noise ratio (SNR) are taken into consideration. Representation of ¹H-MRS signals in the time-frequency domain enables us to handle the baseline and noise better. This is possible because the MRS signal of each metabolite is sparsely represented, with only a few peaks, in the frequency domain, but still along with specific time-domain features such as a distinct decay constant associated with the T2 relaxation rate. The baseline, however, has a smooth behavior in the frequency domain. In this study, we propose a quantification method using continuous wavelet transformation of ¹H-MRS signals in combination with sparse representation of features in the time-frequency domain. Estimation of the sparse representations of MR spectra is performed according to the dictionaries constructed from metabolite profiles. Results on simulated and phantom data show that the proposed method is able to quantify the concentration of metabolites in ¹H-MRS signals with high accuracy and robustness. This is achieved for both low SNR (5 dB) and low signal-to-baseline ratio (-5 dB) regimes.

  2. Structured Sparse Method for Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhu, Feiyun; Wang, Ying; Xiang, Shiming; Fan, Bin; Pan, Chunhong

    2014-02-01

    Hyperspectral Unmixing (HU) has received increasing attention in the past decades due to its ability to unveil information latent in hyperspectral data. Unfortunately, most existing methods fail to take advantage of the spatial information in the data. To overcome this limitation, we propose a Structured Sparse regularized Nonnegative Matrix Factorization (SS-NMF) method based on the following two aspects. First, we incorporate a graph Laplacian to encode the manifold structures embedded in the hyperspectral data space. In this way, the highly similar neighboring pixels can be grouped together. Second, the lasso penalty is employed in SS-NMF because pixels in the same manifold structure are sparsely mixed by a common set of relevant bases. These two factors act as a new structured sparse constraint. With this constraint, our method can learn a compact space, where highly similar pixels are grouped to share correlated sparse representations. Experiments on real hyperspectral data sets with different noise levels demonstrate that our method outperforms the state-of-the-art methods significantly.

  3. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  4. Sparse inpainting and isotropy

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Marinucci, Domenico; McEwen, Jason D.; Peiris, Hiranya V.; Wandelt, Benjamin D.; Cammarota, Valentina

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  5. Bayesian sparse channel estimation

    NASA Astrophysics Data System (ADS)

    Chen, Chulong; Zoltowski, Michael D.

    2012-05-01

    In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.

  6. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, so that a reasonable compromise between them is found automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  7. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  8. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  9. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  11. Group Sparse Additive Models

    PubMed Central

    Yin, Junming; Chen, Xi; Xing, Eric P.

    2016-01-01

    We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.

  12. Water storage variations extracted from GRACE data by combination of multi-resolution representation (MRR) and principal component analysis (PCA)

    NASA Astrophysics Data System (ADS)

    Ressler, Gerhard; Eicker, Annette; Lieb, Verena; Schmidt, Michael; Seitz, Florian; Shang, Kun; Shum, Che-Kwan

    2015-04-01

    Regionally changing hydrological conditions and their link to the availability of water for human consumption and agriculture are a challenging topic in the context of global change that is receiving increasing attention. Gravity field changes related to signals of land hydrology have been observed by the Gravity Recovery And Climate Experiment (GRACE) satellite mission over a period of more than 12 years. These changes are being analysed in our studies with respect to changing hydrological conditions, especially as a consequence of extreme weather situations and/or a change of climatic conditions. Typically, variations of the Earth's gravity field are modeled as a series expansion in terms of global spherical harmonics with time-dependent harmonic coefficients. In order to investigate specific structures in the signal, we alternatively apply a wavelet-based multi-resolution technique for the determination of regional spatiotemporal variations of the Earth's gravitational potential, in combination with principal component analysis (PCA) for detailed evaluation of these structures. The multi-resolution representation (MRR), i.e. the composition of a signal considering different resolution levels, is a suitable approach for spatial gravity modeling, especially given the inhomogeneous distribution of observation data on the one hand and the inhomogeneous structure of the Earth's gravity field itself on the other. In the MRR the signal is split into detail signals by applying low- and band-pass filters realized e.g. by spherical scaling and wavelet functions. Each detail signal is related to a specific resolution level and covers a certain part of the signal spectrum. PCA reveals specific signal patterns in the space as well as the time domain, such as trends and seasonal as well as semi-seasonal variations. We apply the above-mentioned combined technique to GRACE L1C residual potential differences that have been

  13. Digitized tissue microarray classification using sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Xing, Fuyong; Liu, Baiyang; Qi, Xin; Foran, David J.; Yang, Lin

    2012-02-01

    In this paper, we propose a novel image classification method based on sparse reconstruction errors to discriminate cancerous breast tissue microarray (TMA) discs from benign ones. Sparse representation is employed to reconstruct the samples and separate the benign and cancer discs. The method consists of several steps including mask generation, dictionary learning, and data classification. Mask generation is performed using multiple-scale texton histograms, integral histograms, and AdaBoost. Two separate cancer and benign TMA dictionaries are learned using K-SVD. Sparse coefficients are calculated using orthogonal matching pursuit (OMP), and the reconstruction error of each testing sample is recorded. The testing image is divided into many small patches, and each patch is assigned to the category whose dictionary produces the smallest reconstruction error. The final classification of each testing sample is achieved by comparing the total reconstruction errors. Using standard RGB images, and testing on a dataset of 547 images, we achieved much better results than those reported in the previous literature. The binary classification accuracy, sensitivity, and specificity are 88.0%, 90.6%, and 70.5%, respectively.
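    A minimal sketch of the minimum-reconstruction-error rule described above, with random unit-norm matrices standing in for the K-SVD-trained benign and cancer dictionaries and a generic orthogonal matching pursuit coder (everything below is an assumption for illustration, not the authors' implementation):

    ```python
    import numpy as np

    def omp_code(D, x, k):
        """Sparse-code x over dictionary D using at most k greedily selected atoms."""
        r, support = x.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            r = x - D[:, support] @ coef
        return support, coef

    def classify_patch(patch, dictionaries, k=5):
        """Assign the patch to the class whose dictionary reconstructs it best."""
        errors = []
        for D in dictionaries:
            support, coef = omp_code(D, patch, k)
            errors.append(np.linalg.norm(patch - D[:, support] @ coef))
        return int(np.argmin(errors)), errors

    rng = np.random.default_rng(2)
    d, n_atoms = 64, 128                                  # 8x8 patches, atoms per class
    D_benign = rng.standard_normal((d, n_atoms))
    D_cancer = rng.standard_normal((d, n_atoms))
    for D in (D_benign, D_cancer):
        D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
    patch = D_cancer[:, :3] @ np.array([1.0, -0.5, 0.8])  # synthetic "cancer" patch
    label, errs = classify_patch(patch, [D_benign, D_cancer])
    print("predicted class:", label, "residuals:", np.round(errs, 3))
    ```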

  14. TASMANIAN Sparse Grids Module

    SciTech Connect

    and Drayton Munster, Miroslav Stoyanov

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in a low to moderate number of dimensions. The method extends a one-dimensional set of abscissas, weights, and basis functions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command-line interface via text files, and a MATLAB interface via the command-line tool.

  16. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    PubMed

    Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the

  17. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    PubMed Central

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the
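    The low-rank-plus-sparse decomposition at the heart of LRSE+SC can be illustrated compactly. The sketch below is a simplified robust-PCA-style routine, not the authors' algorithm: it alternates exact block minimization of 0.5*||X - L - S||² + τ_L||L||_* + τ_S||S||₁ via singular-value thresholding and elementwise soft-thresholding, on synthetic "face" data.

    ```python
    import numpy as np

    def svt(M, tau):
        """Singular-value thresholding: shrink the singular values of M by tau."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft(M, tau):
        """Elementwise soft-thresholding."""
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def low_rank_plus_sparse(X, tau_l=1.0, tau_s=0.1, n_iter=50):
        """Alternately fit X ≈ L + S with L low-rank (identity) and S sparse (variations)."""
        L, S = np.zeros_like(X), np.zeros_like(X)
        for _ in range(n_iter):
            L = svt(X - S, tau_l)          # low-rank part: class-specific structure
            S = soft(X - L, tau_s)         # sparse part: occlusion / expression errors
        return L, S

    rng = np.random.default_rng(3)
    basis = rng.standard_normal((100, 3))                      # rank-3 "identity" subspace
    faces = basis @ rng.standard_normal((3, 40))               # 40 images of one subject
    occlusion = (rng.random(faces.shape) < 0.05) * 5.0         # sparse corruptions
    L, S = low_rank_plus_sparse(faces + occlusion)
    print("rank of L:", np.linalg.matrix_rank(L, tol=1e-6),
          "nonzero fraction of S:", round(float(np.mean(np.abs(S) > 1e-6)), 3))
    ```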

  18. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning

    PubMed Central

    Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong

    2015-01-01

    Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change feature and the difficulty of change decision in utilizing the multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., in what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes changed), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of the classification robust to the registration noise and the multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on sparse change descriptor and robust discriminative dictionary learning. Sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by the superpixel-level cosparse representation with robust discriminative dictionary and the conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748

  19. Constructing a Nonnegative Low-Rank and Sparse Graph With Data-Adaptive Features.

    PubMed

    Zhuang, Liansheng; Gao, Shenghua; Tang, Jinhui; Wang, Jingjing; Lin, Zhouchen; Ma, Yi; Yu, Nenghai

    2015-11-01

    This paper aims at constructing a good graph to discover the intrinsic data structures under a semisupervised learning setting. First, we propose to build a nonnegative low-rank and sparse (referred to as NNLRS) graph for the given data representation. In particular, the weights of edges in the graph are obtained by seeking a nonnegative low-rank and sparse reconstruction coefficients matrix that represents each data sample as a linear combination of others. The so-obtained NNLRS-graph captures both the global mixture of subspaces structure (by the low-rankness) and the locally linear structure (by the sparseness) of the data, hence it is both generative and discriminative. Second, as good features are extremely important for constructing a good graph, we propose to learn the data embedding matrix and construct the graph simultaneously within one framework, which is termed as NNLRS with embedded features (referred to as NNLRS-EF). Extensive NNLRS experiments on three publicly available data sets demonstrate that the proposed method outperforms the state-of-the-art graph construction method by a large margin for both semisupervised classification and discriminative analysis, which verifies the effectiveness of our proposed method.

  20. Flexible sparse regularization

    NASA Astrophysics Data System (ADS)

    Lorenz, Dirk A.; Resmerita, Elena

    2017-01-01

    The seminal paper of Daubechies, Defrise, and De Mol made clear that ℓp spaces with p ∈ [1,2) and p-powers of the corresponding norms are appropriate settings for dealing with the reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1,2). An extensive literature also gives great credit to using ℓp spaces with p ∈ (0,1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization with varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ1 space, but they are strictly convex, while the ℓ1-norm is just convex.
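    For orientation, the regularization functionals under discussion have the following generic form; the variable-exponent variant written here is an illustrative formulation assumed for this note, not the authors' precise F-norm construction.

    ```latex
    % Classical l_p sparse regularization with a fixed exponent ...
    \[
      x_\alpha \in \operatorname*{arg\,min}_x \; \tfrac{1}{2}\,\|Ax - y\|^2
          + \alpha \sum_{k} |x_k|^{p}, \qquad p \in (0,2),
    \]
    % ... and a variable-exponent variant in the spirit of "flexible" sparse
    % regularization, where each coefficient may carry its own exponent.
    \[
      x_\alpha \in \operatorname*{arg\,min}_x \; \tfrac{1}{2}\,\|Ax - y\|^2
          + \alpha \sum_{k} |x_k|^{p_k}, \qquad p_k \in (0,2).
    \]
    ```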

  1. An empirical study on the matrix-based protein representations and their combination with sequence-based approaches.

    PubMed

    Nanni, Loris; Lumini, Alessandra; Brahnam, Sheryl

    2013-03-01

    Many domains have a stake in the development of reliable systems for automatic protein classification. Of particular interest in recent studies of automatic protein classification is the exploration of new methods for extracting features from a protein that enhance classification for specific problems. These methods have proven very useful in one or two domains, but they have failed to generalize well across several domains (i.e. classification problems). In this paper, we evaluate several feature extraction approaches for representing proteins with the aim of sequence-based protein classification. Several protein representations are evaluated, those starting from: the position specific scoring matrix (PSSM) of the proteins; the amino-acid sequence; a matrix representation of the protein, of dimension (length of the protein) ×20, obtained using the substitution matrices for representing each amino-acid as a vector. A valuable result is that a texture descriptor can be extracted from the PSSM protein representation which improves the performance of standard descriptors based on the PSSM representation. Experimentally, we develop our systems by comparing several protein descriptors on nine different datasets. Each descriptor is used to train a support vector machine (SVM) or an ensemble of SVM. Although different stand-alone descriptors work well on some datasets (but not on others), we have discovered that fusion among classifiers trained using different descriptors obtains a good performance across all the tested datasets. Matlab code/Datasets used in the proposed paper are available at http://www.bias.csr.unibo.it\

  2. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children.

  3. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering

    PubMed Central

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650
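    A two-stage approximation of the SDLC idea can be sketched with scikit-learn: learn an over-complete dictionary of time courses, sparse-code every voxel, and then cluster the codes with k-means. Note that SDLC solves these steps jointly in one optimization problem, so the pipeline below (with synthetic signals standing in for rs-fMRI voxel time courses) is only an illustrative approximation.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    n_voxels, n_timepoints, n_regions = 1500, 120, 3

    # Synthetic "rs-fMRI": each voxel follows one of a few latent time courses plus noise.
    latents = rng.standard_normal((n_regions, n_timepoints))
    labels_true = rng.integers(0, n_regions, n_voxels)
    X = latents[labels_true] + 0.3 * rng.standard_normal((n_voxels, n_timepoints))

    # Stage 1: learn an over-complete dictionary of time courses and sparse codes.
    dl = MiniBatchDictionaryLearning(n_components=20, alpha=1.0, random_state=0)
    codes = dl.fit_transform(X)              # sparse representation of each voxel

    # Stage 2: cluster voxels on their sparse codes (SDLC folds this into one problem).
    labels_pred = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(codes)
    print("cluster sizes:", np.bincount(labels_pred))
    ```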

  4. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.

  5. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.; Hamlin, Timothy D.; Light, Tess E.; Suszcynsky, David M.

    2013-05-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database, comprising five years of data recorded from its two RF payloads. While some classification work has been done previously on the FORTE RF database, application of modern pattern recognition techniques may advance lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification scenarios and future development.

  6. Biomedical time series clustering based on non-negative sparse coding and probabilistic topic model.

    PubMed

    Wang, Jin; Liu, Ping; F H She, Mary; Nahavandi, Saeid; Kouzani, Abbas

    2013-09-01

    Biomedical time series clustering that groups a set of unlabelled temporal signals according to their underlying similarity is very useful for biomedical records management and analysis such as biosignals archiving and diagnosis. In this paper, a new framework for clustering of long-term biomedical time series such as electrocardiography (ECG) and electroencephalography (EEG) signals is proposed. Specifically, local segments extracted from the time series are projected as a combination of a small number of basis elements in a trained dictionary by non-negative sparse coding. A Bag-of-Words (BoW) representation is then constructed by summing up all the sparse coefficients of local segments in a time series. Based on the BoW representation, a probabilistic topic model that was originally developed for text document analysis is extended to discover the underlying similarity of a collection of time series. The underlying similarity of biomedical time series is well captured owing to the statistical nature of the probabilistic topic model. Experiments on three datasets constructed from publicly available EEG and ECG signals demonstrate that the proposed approach achieves better accuracy than existing state-of-the-art methods, and is insensitive to model parameters such as the length of local segments and the dictionary size.

  7. Resistant multiple sparse canonical correlation.

    PubMed

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca.

  8. Passive microwave rainfall retrieval: A mathematical approach via sparse learning

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Lerman, G.; Foufoula-Georgiou, E.

    2013-12-01

    Detection and estimation of surface rainfall from spaceborne radiometric imaging is a challenging problem. The main challenges arise due to the nonlinear relationship of surface rainfall with its microwave multispectral signatures, the presence of noise, insufficient spatial resolution in observations, and the mixture of the earth surface and atmospheric radiations. A mathematical approach is presented for the detection and retrieval of surface rainfall from radiometric observations via supervised learning. In other words, we use a priori known libraries of high-resolution rainfall observations (e.g., obtained by an active radar) and their coincident spectral signatures (i.e., obtained by a radiometer) to design a mathematical model for rainfall retrieval. This model views the rainfall retrieval as a nonlinear inverse problem and relies on sparsity-promoting Bayesian inversion techniques. In this approach, we assume that small neighborhoods of the rainfall fields and their spectral signatures live on manifolds with similar local geometry and encode those neighborhoods in two joint libraries, the so-called rainfall and spectral dictionaries. We model rainfall passive microwave images by sparse linear combinations of the atoms of the spectral dictionary and then use the same representation coefficients to retrieve surface rain rates from the corresponding rainfall dictionary. The proposed methodology is examined by the use of spectral and rainfall dictionaries provided by the microwave imager (TMI) and precipitation radar (PR), aboard the Tropical Rainfall Measuring Mission (TRMM) satellite. Pros and cons of the presented approach are studied by extensive comparisons with the current operational rainfall algorithm of the TRMM satellite. Future extensions are also highlighted for potential application in the era of the Global Precipitation Measurement (GPM) mission. [Figure caption: comparison of retrieved rain rates for Hurricane Danielle, 08/29/2010 (UTC 09:48:00); top panel: PR-2A (caption truncated).]
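    A toy sketch of the coupled-dictionary retrieval step (the random matrices below are stand-ins for the TMI-derived spectral dictionary and the PR-derived rainfall dictionary, and the coder is a generic orthogonal matching pursuit): the radiometer observation is sparse-coded over the spectral dictionary, and the same coefficients are applied to the rainfall dictionary.

    ```python
    import numpy as np

    def omp_code(D, x, k=3):
        """Greedily sparse-code x over dictionary D using at most k atoms."""
        r, support = x.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            r = x - D[:, support] @ coef
        return support, coef

    rng = np.random.default_rng(5)
    n_channels, n_pixels, n_atoms = 9, 25, 50       # e.g. 9 radiometer channels, 5x5 rain patch
    D_spec = rng.standard_normal((n_channels, n_atoms))        # spectral dictionary (assumed)
    D_rain = np.abs(rng.standard_normal((n_pixels, n_atoms)))  # coupled rainfall dictionary

    # A synthetic radiometer observation built from a few joint atoms.
    idx, w = [3, 17, 42], np.array([1.5, 0.7, 0.4])
    obs_spec = D_spec[:, idx] @ w

    support, coef = omp_code(D_spec, obs_spec)      # code in the spectral space
    rain_patch = D_rain[:, support] @ coef          # transfer the code to rain-rate space
    print("selected atoms:", sorted(support),
          "mean rain estimate:", round(float(rain_patch.mean()), 3))
    ```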

  9. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  10. Percolation on Sparse Networks

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.; Zdeborová, Lenka

    2014-11-01

    We study percolation on networks, which is used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and demonstrate how the resulting equations can be used to calculate, among other things, the size of the percolating cluster and the average cluster size. The calculations are exact for sparse networks when the number of short loops in the network is small, but even on networks with many short loops we find them to be highly accurate when compared with direct numerical simulations. By considering the fixed points of the message passing process, we also show that the percolation threshold on a network with few loops is given by the inverse of the leading eigenvalue of the so-called nonbacktracking matrix.

  11. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines, e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  12. A comparison of methods for representing sparsely sampled random quantities.

    SciTech Connect

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between 0.025 and .975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
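    One of the simplest conservative representations in this setting is the classical distribution-free tolerance interval formed by the sample minimum and maximum; the sketch below (an illustration of that standard result, not of the report's five specific methods) evaluates the confidence that [x_(1), x_(n)] covers at least a proportion p of the underlying PDF, which equals 1 - n p^(n-1) + (n-1) p^n.

    ```python
    import numpy as np
    from scipy import stats

    def minmax_tolerance_confidence(n, p):
        """Confidence that [x_(1), x_(n)] from n i.i.d. samples covers >= p of the population."""
        closed_form = 1 - n * p ** (n - 1) + (n - 1) * p ** n
        # Equivalent statement: the coverage W = F(x_(n)) - F(x_(1)) is Beta(n-1, 2) distributed.
        via_beta = stats.beta.sf(p, n - 1, 2)
        assert np.isclose(closed_form, via_beta)
        return closed_form

    # How reliable is the min-max interval at bounding the 0.025-0.975 range (p = 0.95)?
    for n in (10, 30, 100):
        conf = minmax_tolerance_confidence(n, p=0.95)
        print(f"n = {n:3d}: P(interval covers 95% of the PDF) = {conf:.3f}")
    ```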

  13. Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization.

    PubMed

    Duarte-Carvajalino, Julio Martin; Sapiro, Guillermo

    2009-07-01

    Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. However, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.

  14. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

    a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  15. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.

  16. Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV)

    PubMed Central

    2014-01-01

    Background: Sparse CT (Computed Tomography), inspired by compressed sensing, introduces prior information of image sparsity into CT reconstruction to reduce the number of input projections and thereby reduce the potential threat of incremental X-ray dose to patients' health. Recently, many remarkable works have concentrated on sparse CT reconstruction from sparse (limited-angle or few-view style) projections. In this paper we would like to incorporate more prior information into sparse CT reconstruction to improve performance. It has been known for decades that the given projection directions can provide information about the directions of edges in the restored CT image. ATV (Anisotropic Total Variation), a TV (Total Variation) norm based regularization, can use the prior information of image sparsity and edge direction simultaneously. But ATV can only represent edge information in a few directions and loses much prior information about image edges in other directions. Methods: To sufficiently use the prior information of edge directions, a novel MDATV (Multi-Direction Anisotropic Total Variation) is proposed. In this paper we introduce the 2D-IGS (Two-Dimensional Image Gradient Space), and combine the coordinate rotation transform with 2D-IGS to represent edge information in multiple directions. Then, by incorporating this multi-direction representation into the ATV norm, we obtain the MDATV regularization. To solve the optimization problem based on the MDATV regularization, a novel ART (algebraic reconstruction technique) + MDATV scheme is outlined, and NESTA (NESTerov's Algorithm) is proposed to replace GD (Gradient Descent) for minimizing the TV-based regularization. Results: The numerical and real data experiments demonstrate that MDATV-based iterative reconstruction improves the quality of the restored image, and that NESTA is more suitable than GD for minimizing the TV-based regularization. Conclusions: MDATV regularization can sufficiently use the prior

  17. Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV).

    PubMed

    Li, Hongxiao; Chen, Xiaodong; Wang, Yi; Zhou, Zhongxing; Zhu, Qingzhen; Yu, Daoyin

    2014-07-04

    The sparse CT (Computed Tomography), inspired by compressed sensing, means to introduce a prior information of image sparsity into CT reconstruction to reduce the input projections so as to reduce the potential threat of incremental X-ray dose to patients' health. Recently, many remarkable works were concentrated on the sparse CT reconstruction from sparse (limited-angle or few-view style) projections. In this paper we would like to incorporate more prior information into the sparse CT reconstruction for improvement of performance. It is known decades ago that the given projection directions can provide information about the directions of edges in the restored CT image. ATV (Anisotropic Total Variation), a TV (Total Variation) norm based regularization, could use the prior information of image sparsity and edge direction simultaneously. But ATV can only represent the edge information in few directions and lose much prior information of image edges in other directions. To sufficiently use the prior information of edge directions, a novel MDATV (Multi-Direction Anisotropic Total Variation) is proposed. In this paper we introduce the 2D-IGS (Two Dimensional Image Gradient Space), and combined the coordinate rotation transform with 2D-IGS to represent edge information in multiple directions. Then by incorporating this multi-direction representation into ATV norm we get the MDATV regularization. To solve the optimization problem based on the MDATV regularization, a novel ART (algebraic reconstruction technique) + MDATV scheme is outlined. And NESTA (NESTerov's Algorithm) is proposed to replace GD (Gradient Descent) for minimizing the TV-based regularization. The numerical and real data experiments demonstrate that MDATV based iterative reconstruction improved the quality of restored image. NESTA is more suitable than GD for minimization of TV-based regularization. MDATV regularization can sufficiently use the prior information of image sparsity and edge information
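    A stripped-down sketch of the reconstruction loop described above, assuming a dense system matrix whose rows are ray sums and substituting ordinary isotropic TV denoising (scikit-image's denoise_tv_chambolle) for the MDATV/NESTA step, so this illustrates the ART-plus-regularization structure rather than the paper's actual algorithm:

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def art_tv(A, b, shape, n_sweeps=20, relax=0.5, tv_weight=0.05):
        """Alternate Kaczmarz (ART) sweeps with a TV denoising step."""
        x = np.zeros(A.shape[1])
        row_norms = np.sum(A ** 2, axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):                  # one full ART sweep
                if row_norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            img = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight)
            x = np.clip(img, 0, None).ravel()            # nonnegativity + edge-preserving smoothing
        return x.reshape(shape)

    rng = np.random.default_rng(6)
    shape = (16, 16)
    phantom = np.zeros(shape)
    phantom[4:12, 5:11] = 1.0                            # simple block phantom
    A = (rng.random((120, phantom.size)) < 0.1).astype(float)   # toy "projection" matrix
    b = A @ phantom.ravel()                              # few-view style measurements
    recon = art_tv(A, b, shape)
    print("relative reconstruction error:",
          round(float(np.linalg.norm(recon - phantom) / np.linalg.norm(phantom)), 3))
    ```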

  18. Group-constrained sparse fMRI connectivity modeling for mild cognitive impairment identification.

    PubMed

    Wee, Chong-Yaw; Yap, Pew-Thian; Zhang, Daoqiang; Wang, Lihong; Shen, Dinggang

    2014-03-01

    Emergence of advanced network analysis techniques utilizing resting-state functional magnetic resonance imaging (R-fMRI) has enabled a more comprehensive understanding of neurological disorders at a whole-brain level. However, inferring brain connectivity from R-fMRI is a challenging task, particularly when the ultimate goal is to achieve good control-patient classification performance, owing to perplexing noise effects, curse of dimensionality, and inter-subject variability. Incorporating sparsity into connectivity modeling may be a possible solution to partially remedy this problem since most biological networks are intrinsically sparse. Nevertheless, sparsity constraint, when applied at an individual level, will inevitably cause inter-subject variability and hence degrade classification performance. To this end, we formulate the R-fMRI time series of each region of interest (ROI) as a linear representation of time series of other ROIs to infer sparse connectivity networks that are topologically identical across individuals. This formulation allows simultaneous selection of a common set of ROIs across subjects so that their linear combination is best in estimating the time series of the considered ROI. Specifically, an l1-norm is imposed on each subject to filter out spurious or insignificant connections to produce sparse networks. A group-constraint is hence imposed via multi-task learning using an l2-norm to encourage consistent non-zero connections across subjects. This group-constraint is crucial since the network topology is identical for all subjects while still preserving individual information via different connectivity values. We validated the proposed modeling in mild cognitive impairment identification and promising results achieved demonstrate its superiority in disease characterization, particularly greater sensitivity to early stage brain pathologies. The inferred group-constrained sparse network is found to be biologically plausible and is highly

  19. Group-Constrained Sparse FMRI Connectivity Modeling for Mild Cognitive Impairment Identification

    PubMed Central

    Wee, Chong-Yaw; Yap, Pew-Thian; Zhang, Daoqiang; Wang, Lihong; Shen, Dinggang

    2013-01-01

    Emergence of advanced network analysis techniques utilizing resting-state functional Magnetic Resonance Imaging (R-fMRI) has enabled a more comprehensive understanding of neurological disorders at a whole-brain level. However, inferring brain connectivity from R-fMRI is a challenging task, particularly when the ultimate goal is to achieve good control-patient classification performance, owing to perplexing noise effects, curse of dimensionality, and inter-subject variability. Incorporating sparsity into connectivity modeling may be a possible solution to partially remedy this problem since most biological networks are intrinsically sparse. Nevertheless, a sparsity constraint, when applied at an individual level, will inevitably cause inter-subject variability and hence degrade classification performance. To this end, we formulate the R-fMRI time series of each region-of-interest (ROI) as a linear representation of time series of other ROIs to infer sparse connectivity networks that are topologically identical across individuals. This formulation allows simultaneous selection of a common set of ROIs across subjects so that their linear combination is best in estimating the time series of the considered ROI. Specifically, an l1-norm is imposed on each subject to filter out spurious or insignificant connections to produce sparse networks. A group-constraint is hence imposed via multi-task learning using an l2-norm to encourage consistent non-zero connections across subjects. This group-constraint is crucial since the network topology is identical for all subjects while still preserving individual information via different connectivity values. We validated the proposed modeling in mild cognitive impairment (MCI) identification and promising results achieved demonstrate its superiority in disease characterization, particularly greater sensitivity to early stage brain pathologies. The inferred group-constrained sparse network is found to be biologically plausible and is highly

  20. Sparse nonnegative matrix factorization with ℓ0-constraints

    PubMed Central

    Peharz, Robert; Pernkopf, Franz

    2012-01-01

    Although nonnegative matrix factorization (NMF) favors a sparse and part-based representation of nonnegative data, there is no guarantee for this behavior. Several authors proposed NMF methods which enforce sparseness by constraining or penalizing the ℓ1-norm of the factor matrices. On the other hand, little work has been done using a more natural sparseness measure, the ℓ0-pseudo-norm. In this paper, we propose a framework for approximate NMF which constrains the ℓ0-norm of the basis matrix, or the coefficient matrix, respectively. For this purpose, techniques for unconstrained NMF can be easily incorporated, such as multiplicative update rules, or the alternating nonnegative least-squares scheme. In experiments we demonstrate the benefits of our methods, which compare to, or outperform existing approaches. PMID:22505792
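
    As a rough illustration of ℓ0-constrained NMF, the sketch below alternates a greedy nonnegative coding step that limits each coefficient column to k nonzeros with a standard multiplicative update of the basis matrix. It is a simplified stand-in for the paper's framework (which builds on multiplicative updates and alternating nonnegative least squares); the greedy routine, k, and iteration counts are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def l0_sparse_code(V, W, k):
    """Greedy nonnegative sparse coding: each column of H gets at most k
    nonzero entries (an OMP-like sketch, not the paper's exact routine)."""
    n_comp = W.shape[1]
    H = np.zeros((n_comp, V.shape[1]))
    for j in range(V.shape[1]):
        support = []
        residual = V[:, j].copy()
        for _ in range(k):
            corr = W.T @ residual
            corr[support] = -np.inf                  # do not reselect atoms
            best = int(np.argmax(corr))
            if corr[best] <= 0:
                break
            support.append(best)
            coef, _ = nnls(W[:, support], V[:, j])   # refit on the current support
            residual = V[:, j] - W[:, support] @ coef
        if support:
            H[support, j] = coef
    return H

def l0_nmf(V, n_comp=10, k=3, n_iter=50, eps=1e-9):
    """Alternate l0-constrained coding of H with a multiplicative update of W."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], n_comp)) + eps
    for _ in range(n_iter):
        H = l0_sparse_code(V, W, k)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)       # Lee-Seung multiplicative step
        W /= np.maximum(W.sum(axis=0, keepdims=True), eps)  # normalise columns
    return W, H

if __name__ == "__main__":
    V = np.random.default_rng(1).random((50, 80))    # nonnegative data matrix
    W, H = l0_nmf(V, n_comp=10, k=3, n_iter=30)
    print("max nonzeros per column of H:", int((H > 0).sum(axis=0).max()))
    print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```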

  1. Path integral molecular dynamics combined with discrete-variable-representation approach: the effect of solvation structures on vibrational spectra of Cl2 in helium clusters

    NASA Astrophysics Data System (ADS)

    Takayanagi, Toshiyuki; Shiga, Motoyuki

    2002-08-01

    The structures and vibrational frequencies of Cl2-helium clusters have been studied using the path integral molecular dynamics method combined with the discrete-variable-representation approach. It is found that the Cl2-helium clusters form clear shell structures comprised of rings around the Cl2 bond. The vibrational frequencies calculated show a monotonically increasing red shift with an increase in cluster size. It can be concluded that the first solvation shell and its density around T-shaped configurations play the most important role in the observed frequency shifts.
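
    The discrete-variable-representation machinery itself is easy to demonstrate on a toy problem. The sketch below builds a Colbert-Miller sinc-DVR Hamiltonian for a 1D harmonic oscillator (not the Cl2-helium system, and without the path-integral part) and checks that the low eigenvalues approach (n + 1/2) in units where hbar = m = omega = 1; the grid range and size are illustrative assumptions.

```python
import numpy as np

def sinc_dvr_hamiltonian(xmin=-10.0, xmax=10.0, n=201, mass=1.0, omega=1.0):
    """Colbert-Miller sinc-DVR Hamiltonian for a 1D harmonic oscillator
    (hbar = 1).  A toy stand-in used only to illustrate the DVR machinery."""
    x = np.linspace(xmin, xmax, n)
    dx = x[1] - x[0]
    i = np.arange(n)
    diff = i[:, None] - i[None, :]
    # kinetic energy on an evenly spaced grid (Colbert-Miller sinc-DVR formula)
    with np.errstate(divide="ignore", invalid="ignore"):
        T = 2.0 * (-1.0) ** diff / diff.astype(float) ** 2
    np.fill_diagonal(T, np.pi ** 2 / 3.0)
    T /= 2.0 * mass * dx ** 2
    V = 0.5 * mass * omega ** 2 * x ** 2          # potential evaluated on the grid
    return T + np.diag(V), x

if __name__ == "__main__":
    H, x = sinc_dvr_hamiltonian()
    levels = np.linalg.eigvalsh(H)[:4]
    print(levels)                                  # expect values close to 0.5, 1.5, 2.5, 3.5
```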

  2. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
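
    The paper's estimator comes from the statistics literature it cites, but the general idea of exploiting sparsity of a precision matrix can be illustrated with an off-the-shelf l1-penalized estimator. The sketch below compares the plain inverse sample covariance with scikit-learn's GraphicalLasso on a synthetic ensemble whose true precision matrix is sparse; the dimensions, sample size, and penalty are illustrative, and this is not the authors' method.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Build a sparse ground-truth precision matrix (tridiagonal, hence sparse)
p = 20
prec_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov_true = np.linalg.inv(prec_true)

# Draw a modest ensemble of "simulations" from the corresponding Gaussian
samples = rng.multivariate_normal(np.zeros(p), cov_true, size=200)

# Sample precision (inverse of the empirical covariance) vs. a sparse estimate
prec_sample = np.linalg.inv(np.cov(samples, rowvar=False))
model = GraphicalLasso(alpha=0.05).fit(samples)   # l1-penalised precision estimate
prec_sparse = model.precision_

err = lambda P: np.linalg.norm(P - prec_true) / np.linalg.norm(prec_true)
print(f"relative error, sample precision: {err(prec_sample):.3f}")
print(f"relative error, sparse estimate : {err(prec_sparse):.3f}")
```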

  3. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
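
    A toy version of the idea can be written in a few lines: keep a per-block intensity pdf instead of a single filtered value, and evaluate any transfer function at run time as an expectation under that pdf. The sketch below uses dense per-block histograms on a random volume; the block size, bin count, and transfer function are illustrative assumptions, and none of the paper's sparsification, 4D Gaussian mixtures, or GPU machinery is reproduced.

```python
import numpy as np

def block_pdfs(volume, block=4, n_bins=32):
    """Histogram (pdf) of intensities for each non-overlapping block of a
    3D volume -- a dense toy stand-in for the paper's sparse pdf volumes."""
    nx, ny, nz = (s // block for s in volume.shape)
    v = volume[:nx * block, :ny * block, :nz * block]
    v = v.reshape(nx, block, ny, block, nz, block).transpose(0, 2, 4, 1, 3, 5)
    v = v.reshape(nx, ny, nz, -1)                     # voxels of each block
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    counts = np.apply_along_axis(lambda a: np.histogram(a, bins=edges)[0], -1, v)
    pdfs = counts / counts.sum(axis=-1, keepdims=True)
    return pdfs, edges

def apply_transfer_function(pdfs, edges, tf):
    """Expected transfer-function value under each block's pdf.  Because the
    pdf is kept, changing `tf` at run time needs no re-filtering of the data."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return pdfs @ tf(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vol = rng.random((32, 32, 32))
    pdfs, edges = block_pdfs(vol)
    opacity = apply_transfer_function(pdfs, edges, lambda s: (s > 0.7).astype(float))
    print(opacity.shape, opacity.mean())   # roughly the fraction of intensities above 0.7
```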

  4. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  5. Spatiotemporal System Identification With Continuous Spatial Maps and Sparse Estimation.

    PubMed

    Aram, Parham; Kadirkamanathan, Visakan; Anderson, Sean R

    2015-11-01

    We present a framework for the identification of spatiotemporal linear dynamical systems. We use a state-space model representation that has the following attributes: 1) the number of spatial observation locations is decoupled from the model order; 2) the model allows for spatial heterogeneity; 3) the model representation is continuous over space; and 4) the model parameters can be identified in a simple and sparse estimation procedure. The model identification procedure we propose has four steps: 1) decomposition of the continuous spatial field using a finite set of basis functions where spatial frequency analysis is used to determine basis function width and spacing, such that the main spatial frequency contents of the underlying field can be captured; 2) initialization of states in closed form; 3) initialization of state-transition and input matrix model parameters using sparse regression (the least absolute shrinkage and selection operator method); and 4) joint state and parameter estimation using an iterative Kalman-filter/sparse-regression algorithm. To investigate the performance of the proposed algorithm we use data generated by the Kuramoto model of spatiotemporal cortical dynamics. The identification algorithm performs successfully, predicting the spatiotemporal field with high accuracy, whilst the sparse regression leads to a compact model.
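
    Step 3 of the procedure, sparse-regression initialization of the state-transition parameters, can be illustrated in isolation: given a trajectory of basis-function states, each row of the transition matrix is estimated with a LASSO fit. The sketch below does exactly that on a synthetic sparse, stable transition matrix; the dimensions, noise level, and penalty are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_transition_matrix(states, alpha=0.001):
    """Sparse (LASSO) estimate of A in x[t+1] = A x[t] + noise, mirroring the
    sparse-regression initialisation step; `alpha` is an illustrative choice."""
    X, Y = states[:-1], states[1:]              # regressors and one-step targets
    n = states.shape[1]
    A = np.zeros((n, n))
    for i in range(n):
        A[i] = Lasso(alpha=alpha, max_iter=10000).fit(X, Y[:, i]).coef_
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 12
    # sparse, stable "true" transition matrix (bidiagonal)
    A_true = np.diag(0.9 * np.ones(n)) + np.diag(0.1 * np.ones(n - 1), k=1)
    x = np.zeros(n)
    traj = []
    for _ in range(1000):
        x = A_true @ x + 0.1 * rng.standard_normal(n)
        traj.append(x.copy())
    A_hat = estimate_transition_matrix(np.array(traj))
    print("max abs error:", np.abs(A_hat - A_true).max())
```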

  6. Blind source separation by sparse decomposition

    NASA Astrophysics Data System (ADS)

    Zibulevsky, Michael; Pearlmutter, Barak A.

    2000-04-01

    The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.
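
    For the equal-sources-and-mixtures case, a sparsity-driven separation objective of the kind discussed here can be written as J(W) = -log|det W| + (1/T) Σ h((WX)_it) with h a smooth absolute value, and minimized by plain gradient descent. The sketch below does this for two synthetic Laplacian sources; it is a generic illustration of the approach, not the authors' algorithm, and the mixing matrix, learning rate, and smoothing constant are assumptions.

```python
import numpy as np

def sparse_bss(X, n_iter=2000, lr=0.05, eps=1e-4):
    """Separate n mixtures of n sources by gradient descent on a quasi
    maximum-likelihood objective with a smooth sparsity (Laplacian-like) prior:
        J(W) = -log|det W| + (1/T) * sum h((W X)_it),  h(u) ~ |u|."""
    n, T = X.shape
    W = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        h_prime = Y / np.sqrt(Y ** 2 + eps)              # derivative of smooth |u|
        grad = -np.linalg.inv(W).T + (h_prime @ X.T) / T
        W -= lr * grad
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    T = 2000
    S = rng.laplace(size=(2, T))                         # sparse (Laplacian) sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])               # unknown mixing matrix
    X = A @ S
    W = sparse_bss(X)
    # W @ A should be close to a scaled permutation if separation succeeded
    print(np.round(W @ A, 2))
```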

  7. Integer sparse distributed memory: analysis and results.

    PubMed

    Snaider, Javier; Franklin, Stan; Strain, Steve; George, E Olusegun

    2013-10-01

    Sparse distributed memory is an auto-associative memory system that stores high dimensional Boolean vectors. Here we present an extension of the original SDM, the Integer SDM that uses modular arithmetic integer vectors rather than binary vectors. This extension preserves many of the desirable properties of the original SDM: auto-associativity, content addressability, distributed storage, and robustness over noisy inputs. In addition, it improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage. We performed several simulations that test the noise robustness property and capacity of the memory. Theoretical analyses of the memory's fidelity and capacity are also presented.
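
    A toy rendition of an integer SDM conveys the main moving parts: integer addresses over Z_r, activation of the hard locations nearest to a query under a circular (modular) distance, and recall by a per-dimension vote. The class below is a simplified sketch along those lines, not the authors' design; the memory sizes, activation rule (k nearest instead of a fixed radius), and counter scheme are assumptions.

```python
import numpy as np

class IntegerSDM:
    """Toy integer sparse distributed memory: addresses and data are vectors
    over Z_r.  Storage uses per-dimension value histograms at the activated
    hard locations and recall takes a majority vote."""

    def __init__(self, n_locations=2000, dim=100, r=16, n_active=50, seed=0):
        rng = np.random.default_rng(seed)
        self.r, self.dim, self.n_active = r, dim, n_active
        self.addresses = rng.integers(0, r, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim, r))   # value histograms

    def _activated(self, address):
        d = np.abs(self.addresses - address)
        d = np.minimum(d, self.r - d).sum(axis=1)          # circular distance
        return np.argsort(d)[: self.n_active]               # closest hard locations

    def write(self, address, data):
        for loc in self._activated(address):
            self.counters[loc, np.arange(self.dim), data] += 1

    def read(self, address):
        hist = self.counters[self._activated(address)].sum(axis=0)
        return hist.argmax(axis=1)                           # per-dimension vote

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    mem = IntegerSDM()
    word = rng.integers(0, 16, size=100)
    mem.write(word, word)                                    # auto-associative store
    noisy = word.copy()
    flip = rng.choice(100, size=10, replace=False)
    noisy[flip] = rng.integers(0, 16, size=10)               # corrupt 10 dimensions
    recalled = mem.read(noisy)
    print("dimensions recovered:", int((recalled == word).sum()), "/ 100")
```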

  8. Learning doubly sparse transforms for images.

    PubMed

    Ravishankar, Saiprasad; Bresler, Yoram

    2013-12-01

    The sparsity of images in a transform domain or dictionary has been exploited in many applications in image processing. For example, analytical sparsifying transforms, such as wavelets and discrete cosine transform (DCT), have been extensively used in compression standards. Recently, synthesis sparsifying dictionaries that are directly adapted to the data have become popular especially in applications such as image denoising. Following up on our recent research, where we introduced the idea of learning square sparsifying transforms, we propose here novel problem formulations for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learnt, stored, and implemented efficiently. We show the superior promise of our learnt transforms as compared with analytical sparsifying transforms such as the DCT for image representation. We also show promising performance in image denoising that compares favorably with approaches involving learnt synthesis dictionaries such as the K-SVD algorithm. The proposed approach is also much faster than K-SVD denoising.
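
    The structure of a doubly sparse transform, W = B·D with D a fast analytic transform (here a 2D DCT) and B a sparse matrix, can be sketched directly; only the learning of B is omitted. In the snippet below B is simply a sparse random perturbation of the identity, chosen purely for illustration, and a patch is sparse-coded by hard-thresholding its transform coefficients; this is not the paper's learning algorithm.

```python
import numpy as np
from scipy.fft import dct
from scipy.sparse import random as sparse_random

# 2D orthonormal DCT on 8x8 patches, written as a 64x64 matrix D = C (x) C
patch = 8
C = dct(np.eye(patch), norm="ortho", axis=0)          # 1D DCT-II matrix
D = np.kron(C, C)                                      # separable 2D transform

# Doubly sparse transform W = B @ D: D is fast/analytic, B is a sparse
# correction.  Here B is identity plus a sparse random matrix, purely for
# illustration -- in the paper B is learnt from data.
rng = np.random.default_rng(5)
B = np.eye(patch * patch) + 0.1 * sparse_random(patch * patch, patch * patch,
                                                density=0.05, random_state=5).toarray()
W = B @ D

# Sparse-code a random patch by hard-thresholding its transform coefficients
x = rng.random(patch * patch)
coeffs = W @ x
coeffs[np.abs(coeffs) < np.quantile(np.abs(coeffs), 0.8)] = 0.0   # keep ~20%
x_hat = np.linalg.solve(W, coeffs)                     # synthesis via the inverse transform
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```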

  9. Scene Classification Based on the Semantic-Feature Fusion Fully Sparse Topic Model for High Spatial Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Qiqi; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Topic modeling has become an increasingly mature method for bridging the semantic gap between low-level features and high-level semantic information. However, with more and more high spatial resolution (HSR) images to deal with, conventional probabilistic topic models (PTMs) usually present the images with a dense semantic representation, which consumes more time and requires more storage space. In addition, because of the complex spectral and spatial information, combining multiple complementary features has proved to be an effective strategy for improving HSR image scene classification, but how the distinct features are fused to fully describe the challenging HSR images is a critical factor for scene classification. In this paper, a semantic-feature fusion fully sparse topic model (SFF-FSTM) is proposed for HSR imagery scene classification. In SFF-FSTM, three heterogeneous features - the mean and standard deviation based spectral feature, the wavelet based texture feature, and the dense scale-invariant feature transform (SIFT) based structural feature - are effectively fused at the latent semantic level. The combination of the multiple semantic-feature fusion strategy and the sparsity-based FSTM is able to provide adequate feature representations and can achieve comparable performance with limited training samples. Experimental results on the UC Merced dataset and the Google dataset of SIRI-WHU demonstrate that the proposed method can improve the performance of scene classification compared with other scene classification methods for HSR imagery.

  10. Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways

    PubMed Central

    Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.

    2013-01-01

    Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101

  11. Sparse Solution of High-Dimensional Model Calibration Inverse Problems under Uncertainty in Prior Structural Connectivity

    NASA Astrophysics Data System (ADS)

    Mohammad khaninezhad, M.; Jafarpour, B.

    2012-12-01

    Data limitation and heterogeneity of the geologic formations introduce significant uncertainty in predicting the related flow and transport processes in these environments. Fluid flow and displacement behavior in subsurface systems is mainly controlled by the structural connectivity models that create preferential flow pathways (or barriers). The connectivity of extreme geologic features strongly constrains the evolution of the related flow and transport processes in subsurface formations. Therefore, characterization of the geologic continuity and facies connectivity is critical for reliable prediction of the flow and transport behavior. The goal of this study is to develop a robust and geologically consistent framework for solving large-scale nonlinear subsurface characterization inverse problems under uncertainty about geologic continuity and structural connectivity. We formulate a novel inverse modeling approach by adopting a sparse reconstruction perspective, which involves two major components: 1) sparse description of hydraulic property distribution under significant uncertainty in structural connectivity and 2) formulation of an effective sparsity-promoting inversion method that is robust against prior model uncertainty. To account for the significant variability in the structural connectivity, we use, as prior, multiple distinct connectivity models. For sparse/compact representation of high-dimensional hydraulic property maps, we investigate two methods. In one approach, we apply the principal component analysis (PCA) to each prior connectivity model individually and combine the resulting leading components from each model to form a diverse geologic dictionary. Alternatively, we combine many realizations of the hydraulic properties from different prior connectivity models and use them to generate a diverse training dataset. We use the training dataset with a sparsifying transform, such as K-SVD, to construct a sparse geologic dictionary that is robust to
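
    The first of the two dictionary constructions, leading PCA components from each prior connectivity model stacked into one diverse dictionary, combines naturally with a sparsity-promoting solver such as orthogonal matching pursuit. The sketch below does this for a 1D toy field with three "prior models" of different correlation length; the Gaussian-field priors, component counts, and observation setup are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
n_cells, n_real = 400, 200          # grid cells and realizations per prior model

def realizations(length_scale):
    """Toy Gaussian-field realizations standing in for one prior connectivity model."""
    x = np.arange(n_cells)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)
    return rng.multivariate_normal(np.zeros(n_cells), cov, size=n_real)

# Leading PCA components from each distinct prior model, stacked into one dictionary
priors = [realizations(5.0), realizations(25.0), realizations(60.0)]
components = [PCA(n_components=30).fit(R).components_ for R in priors]
dictionary = np.vstack(components).T          # columns are basis elements

# "True" field drawn from one prior; observe a handful of noisy point measurements
truth = realizations(25.0)[0]
obs_idx = rng.choice(n_cells, size=60, replace=False)
obs = truth[obs_idx] + 0.05 * rng.standard_normal(60)

# Sparse reconstruction: pick a few dictionary elements that explain the observations
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=15, fit_intercept=False)
omp.fit(dictionary[obs_idx], obs)
estimate = dictionary @ omp.coef_
print("relative error:", np.linalg.norm(estimate - truth) / np.linalg.norm(truth))
```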

  12. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level one and two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
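
    The report works in C with OpenMP threads; as a language-agnostic illustration of the same ingredients, the sketch below shows the three arrays of a CSR (compressed sparse row) matrix and uses repeated sparse matrix-vector products, the kernel the report parallelizes, to run the power method. SciPy is used here purely for convenience; the matrix size and density are arbitrary assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(7)

# A random sparse matrix stored in CSR format: three arrays hold the nonzero
# values, their column indices, and the row pointers.
A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=7)
A = (A + A.T).tocsr()                          # symmetric, so eigenvalues are real
print("stored values:", A.nnz, "of", 1000 * 1000)
print("CSR arrays:", A.data.shape, A.indices.shape, A.indptr.shape)

def power_method(A, n_iter=200):
    """Dominant eigenvalue via repeated sparse matrix-vector products -- the
    level-2 kernel that the report parallelises with OpenMP."""
    x = rng.standard_normal(A.shape[0])
    for _ in range(n_iter):
        x = A @ x                              # sparse mat-vec (the hot loop)
        x /= np.linalg.norm(x)
    return x @ (A @ x)                         # Rayleigh quotient

lam = power_method(A)
lam_dense = np.max(np.abs(np.linalg.eigvalsh(A.toarray())))   # dense cross-check
print(f"power method: {lam:.6f}   dense eigvalsh: {lam_dense:.6f}")
```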

  13. Learning Multiscale Sparse Representations for Image and Video Restoration

    DTIC Science & Technology

    2007-07-01

    25], and more recently to video denoising [35]. In this paper, we extend the basic K-SVD work, providing a framework for learning multiscale and...The original K-SVD denoising algorithm [1], the extensions to color image denoising, non-homogeneous noise, and inpainting [25], and the K-SVD for...section, we briefly review these algorithms. 2.1. The grayscale image denoising K-SVD algorithm. We now briefly review the main ideas of the K-SVD

  14. Sparse Representation Based Multiple Frame Video Super-Resolution.

    PubMed

    Dai, Qiqin; Yoo, Seunghwan; Kappeler, Armin; Katsaggelos, Aggelos K

    2016-11-22

    In this paper, we propose two multiple-frame super-resolution (SR) algorithms based on dictionary learning and motion estimation. First, we adopt video bilevel dictionary learning, which has been used for single-frame SR, and extend it to multiple frames by using motion estimation with subpixel accuracy. We propose a batch and a temporally recursive multi-frame SR algorithm, which improve over single-frame SR. Finally, we propose a novel dictionary learning algorithm utilizing consecutive video frames, rather than still images or individual video frames, which further improves the performance of the video SR algorithms. Extensive experimental comparisons with state-of-the-art SR algorithms verify the effectiveness of our proposed multiple-frame video SR approach.

  15. Sparse Data Representation: The Role of Redundancy in Data Processing

    DTIC Science & Technology

    2005-09-13

    directions The Error Diffusion Halftoning Algorithm: Some Recent Stability Results and Applications Beyond Halftoning Dr. Chai Wu Thomas J. Watson Research...digital and analog printers use some form of halftoning; just look at any picture in a newspaper or magazine under a magnifying glass. Error diffusion is...a popular technique for high quality digital halftoning. The purpose of this talk is to illustrate the versatility of error diffusion with

  16. Heart rate analysis by sparse representation for acute pain detection.

    PubMed

    Tejman-Yarden, Shai; Levi, Ofer; Beizerov, Alex; Parmet, Yisrael; Nguyen, Tu; Saunders, Michael; Rudich, Zvia; Perry, James C; Baker, Dewleen G; Moeller-Bertram, Tobias

    2016-04-01

    Objective pain assessment methods pose an advantage over the currently used subjective pain rating tools. Advanced signal processing methodologies, including the wavelet transform (WT) and the orthogonal matching pursuit algorithm (OMP), were developed in the past two decades. The aim of this study was to apply and compare these time-specific methods to heart rate samples of healthy subjects for acute pain detection. Fifteen adult volunteers participated in a study conducted in the pain clinic at a single center. Each subject's heart rate was sampled for 5-min baseline, followed by a cold pressor test (CPT). Analysis was done by the WT and the OMP algorithm with a Fourier/Wavelet dictionary separately. Data from 11 subjects were analyzed. Compared to baseline, the WT analysis showed a significant coefficients' density increase during the pain incline period (p < 0.01) and the entire CPT (p < 0.01), with significantly higher coefficient amplitudes. The OMP analysis showed a significant wavelet coefficients' density increase during pain incline and decline periods (p < 0.01, p < 0.05) and the entire CPT (p < 0.001), with suggestively higher amplitudes. Comparison of both methods showed that during the baseline there was a significant reduction in wavelet coefficient density using
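
    A coefficient-density measure of the kind used in the study can be illustrated on synthetic heart-rate data: decompose the series with a discrete wavelet transform and count the fraction of detail coefficients above a threshold. The sketch below does this with PyWavelets for a flat baseline segment and a crude synthetic cold-pressor response; the wavelet, decomposition level, threshold, and synthetic signals are all illustrative assumptions, and the OMP/Fourier-wavelet dictionary analysis is not reproduced.

```python
import numpy as np
import pywt

def coefficient_density(signal, wavelet="db4", level=4, thresh=2.0):
    """Fraction of detail coefficients whose magnitude exceeds `thresh`
    (a simple stand-in for the coefficient-density measure; the wavelet,
    level, and threshold are illustrative choices, not the study's)."""
    coeffs = pywt.wavedec(signal - np.mean(signal), wavelet, level=level)
    details = np.concatenate(coeffs[1:])          # skip the approximation band
    return np.mean(np.abs(details) > thresh)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    t = np.arange(300)                            # ~5 minutes of 1 Hz heart-rate samples
    baseline = 70 + 2.0 * rng.standard_normal(300)
    # crude synthetic cold-pressor response: transient rise plus extra variability
    pain = 70 + 8.0 * np.exp(-((t - 150) / 40.0) ** 2) + 4.0 * rng.standard_normal(300)
    print("baseline density:", coefficient_density(baseline))
    print("CPT density     :", coefficient_density(pain))
```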