Science.gov

Sample records for sparse representation combined

  1. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. Understanding facial expressions is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features always lie in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem constrained by a linear combination equation. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other methods compared.
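
    A minimal Python sketch of the sparse-representation-classifier idea described above, on synthetic toy data: scikit-learn's Lasso stands in for the constrained l1-norm minimization, and the class whose training columns give the smallest reconstruction residual wins. The sizes, labels, and alpha value are illustrative, not taken from the paper.

```python
# Sparse representation classifier (SRC) sketch on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def src_predict(A, labels, y, alpha=0.01):
    """Classify y by the class whose training columns best reconstruct it."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(A, y)                           # sparse code x with y ~ A @ x
    x = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ x_c)
    return min(residuals, key=residuals.get)

# toy data: 3 classes, 20 training samples each, 50-dimensional features
labels = np.repeat(np.arange(3), 20)
A = rng.normal(size=(50, 60))
A /= np.linalg.norm(A, axis=0)                # unit-norm training columns
y = A[:, labels == 1] @ rng.random(20)        # sample lying in the class-1 subspace
print(src_predict(A, labels, y))              # expected: 1
```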

  2. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation.

    PubMed

    Bao, L J; Zhu, Y M; Liu, W Y; Croisille, P; Pu, Z B; Robini, M; Magnin, I E

    2009-03-21

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on a nonstationary degree detector is then introduced so that the selection of atoms in the dictionary adapts to the image's features. The denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated images and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques by preserving image contrast and fine structures. PMID:19218737

  3. Fingerprint Compression Based on Sparse Representation.

    PubMed

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as sparse linear combinations of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a newly given fingerprint image, its patches are represented according to the dictionary by computing an l0-minimization, and the representation is then quantized and encoded. In this paper, we consider the effect of various factors on the compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
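
    A rough sketch of the compress/decompress loop under stated assumptions: a random dictionary and uniform scalar quantization stand in for the trained fingerprint dictionary and the entropy coder used in the paper, and orthogonal matching pursuit approximates the l0-minimization.

```python
# Patch compression via sparse coding + quantization of nonzero coefficients.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))                         # dictionary for 8x8 patches
D /= np.linalg.norm(D, axis=0)

def compress_patch(patch, n_nonzero=8, step=0.05):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, patch.ravel())
    q = np.round(omp.coef_ / step).astype(np.int16)    # uniform quantization
    idx = np.flatnonzero(q)
    return idx, q[idx], step                           # store only nonzero entries

def decompress_patch(idx, q, step):
    coeffs = np.zeros(D.shape[1])
    coeffs[idx] = q * step
    return (D @ coeffs).reshape(8, 8)

patch = rng.random((8, 8))
idx, q, step = compress_patch(patch)
rec = decompress_patch(idx, q, step)
print(len(idx), np.linalg.norm(patch - rec))           # stored atoms, reconstruction error
```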

  4. SAR Image Despeckling Via Structural Sparse Representation

    NASA Astrophysics Data System (ADS)

    Lu, Ting; Li, Shutao; Fang, Leyuan; Benediktsson, Jón Atli

    2016-12-01

    A novel synthetic aperture radar (SAR) image despeckling method based on structural sparse representation is introduced. The proposed method exploits the fact that different regions in SAR images correspond to varying terrain reflectivity, so SAR images can be split into a heterogeneous class (with varied terrain reflectivity) and a homogeneous class (with constant terrain reflectivity). In the proposed method, different sparse representation-based despeckling schemes are designed according to the characteristics of the different regions in SAR images. For heterogeneous regions with rich structure and texture information, structural dictionaries are learned to appropriately represent the varied structural characteristics. Specifically, each patch in these regions is sparsely coded with the best-fitting structural dictionary, so that good structure preservation is obtained. For homogeneous regions without rich structure and texture information, the highly redundant photometric self-similarity is exploited to suppress speckle noise without introducing artifacts. This is achieved by first learning a sub-dictionary and then simultaneously sparse coding each group of photometrically similar image patches. Visual and objective experimental results demonstrate the superiority of the proposed method over state-of-the-art methods.

  5. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than the Nyquist-Shannon sampling theorem requires, which increases the efficiency and decreases the computational cost of radar imaging.
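
    A toy compressed-sensing recovery in Python that illustrates the point made above: a sparse scene is recovered from far fewer random measurements than its ambient dimension, using an l1 (lasso) solver as a generic convex-optimization stand-in. The sizes and the measurement model are invented for illustration, not taken from the paper.

```python
# Recover a k-sparse "scene" from m << n random linear measurements.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                          # scene size, measurements, sparsity
scene = np.zeros(n)
scene[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k point scatterers

Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = Phi @ scene                               # compressed samples

lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
lasso.fit(Phi, y)
print(np.linalg.norm(lasso.coef_ - scene) / np.linalg.norm(scene))  # relative error
```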

  6. Saliency Detection Using Sparse and Nonlinear Feature Representation

    PubMed Central

    Zhao, Qingjie; Manzoor, Muhammad Farhan; Ishaq Khan, Saqib

    2014-01-01

    An important aspect of visual saliency detection is how the features that form an input image are represented. A popular theory supports sparse feature representation, in which an image is represented with a basis dictionary having sparse weighting coefficients. Another method uses a nonlinear combination of image features for representation. In our work, we combine the two methods and propose a scheme that takes advantage of both sparse and nonlinear feature representation. To this end, we use independent component analysis (ICA) and covariance matrices, respectively. To compute saliency, we use a biologically plausible center-surround difference (CSD) mechanism. Our sparse features are adaptive in nature; the ICA basis functions are learned for every image representation, rather than being fixed. We show that adaptive sparse features, when used with a CSD mechanism, yield better results than fixed sparse representations. We also show that covariance matrices consisting of a nonlinear integration of color information alone are sufficient to efficiently estimate saliency from an image. The proposed dual representation scheme is then evaluated against human eye fixation prediction, response to psychological patterns, and salient object detection on well-known datasets. We conclude that having two forms of representation complements one another and results in better saliency detection. PMID:24895644
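
    A sketch of the adaptive-sparse-feature step the abstract describes: ICA bases are learned from the patches of the image at hand rather than fixed in advance. scikit-learn's FastICA is used here and the input "image" is random noise, so this only illustrates the API flow, not the saliency pipeline or the covariance-based CSD stage.

```python
# Learn per-image ICA bases from patches and obtain their sparse responses.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
image = rng.random((64, 64))

# collect overlapping 8x8 patches as rows
patches = np.array([image[i:i + 8, j:j + 8].ravel()
                    for i in range(0, 56, 4) for j in range(0, 56, 4)])

ica = FastICA(n_components=16, random_state=0, max_iter=1000)
codes = ica.fit_transform(patches - patches.mean(axis=0))   # sparse responses
bases = ica.mixing_                                          # learned basis functions
print(codes.shape, bases.shape)
```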

  7. SAR Image despeckling via sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang

    2014-11-01

    SAR image despeckling is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, a new approach is proposed for removing multiplicative noise from SAR images, based on nonlocal sparse representation with dictionary learning and collaborative filtering. First, an image is divided into many patches, and clusters are formed by grouping log-domain similar image patches using fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. The experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.
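
    A rough sketch of the cluster-then-learn-dictionaries pipeline, with two substitutions labeled up front: KMeans replaces fuzzy C-means and MiniBatchDictionaryLearning replaces K-SVD, since neither FCM nor K-SVD ships with scikit-learn. Patches are random placeholders.

```python
# Cluster log-domain patches, learn one dictionary per cluster, sparse-code a patch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(4)
patches = rng.random((500, 64))                 # 8x8 patches as rows
log_patches = np.log1p(patches)                 # log domain: speckle becomes additive

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(log_patches)

dictionaries = {}
for c in range(4):
    members = log_patches[clusters == c]
    dl = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
    dl.fit(members)
    dictionaries[c] = dl                        # per-cluster dictionary + coder

# sparse approximation of one log-domain patch with its cluster's dictionary
code = dictionaries[clusters[0]].transform(log_patches[:1])
recon = code @ dictionaries[clusters[0]].components_
print(recon.shape)
```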

  8. Using Weighted Sparse Representation Model Combined with Discrete Cosine Transformation to Predict Protein-Protein Interactions from Protein Sequence

    PubMed Central

    Huang, Yu-An; You, Zhu-Hong; Gao, Xin; Wong, Leon; Wang, Lirong

    2015-01-01

    Increasing demand for knowledge about protein-protein interactions (PPIs) is promoting the development of methods for predicting the protein interaction network. Although high-throughput technologies have generated considerable PPI data for various organisms, they have inevitable drawbacks such as high cost, long experiment times, and inherently high false-positive rates. For this reason, computational methods are drawing more and more attention for predicting PPIs. In this study, we report a computational method for predicting PPIs using the information of protein sequences. The main improvements come from adopting a novel protein sequence representation that applies the discrete cosine transform (DCT) to a substitution matrix representation (SMR) and from using a weighted sparse representation-based classifier (WSRC). When applied to the yeast, human, and H. pylori PPI datasets, the method achieved excellent results, with average accuracies as high as 96.28%, 96.30%, and 86.74%, respectively, significantly better than previous methods. These promising results show that the proposed method is feasible, robust, and powerful. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier. Extensive experiments were also performed in which yeast PPI samples were used as the training set to predict PPIs in datasets from five other species. PMID:26634213
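
    A sketch of the feature-extraction idea (substitution-matrix representation followed by a DCT), under heavy simplification: the four-letter alphabet and the substitution scores below are made up for illustration, and the number of retained coefficients is arbitrary; the WSRC classification stage is not shown.

```python
# Protein sequence -> SMR matrix -> 2-D DCT -> fixed-length feature vector.
import numpy as np
from scipy.fft import dctn

alphabet = "ACDE"
sub = np.array([[ 4,  0, -2, -1],        # toy substitution scores
                [ 0,  9, -3, -4],
                [-2, -3,  6,  2],
                [-1, -4,  2,  5]], dtype=float)

def smr_dct_features(seq, n_coeffs=16):
    idx = [alphabet.index(a) for a in seq]
    smr = sub[idx]                       # L x 4 substitution-matrix representation
    coeffs = dctn(smr, norm="ortho")     # 2-D discrete cosine transform
    return coeffs.ravel()[:n_coeffs]     # keep the low-frequency coefficients

print(smr_dct_features("ACCEDADEAC"))
```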

  9. Using Weighted Sparse Representation Model Combined with Discrete Cosine Transformation to Predict Protein-Protein Interactions from Protein Sequence.

    PubMed

    Huang, Yu-An; You, Zhu-Hong; Gao, Xin; Wong, Leon; Wang, Lirong

    2015-01-01

    Increasing demand for knowledge about protein-protein interactions (PPIs) is promoting the development of methods for predicting the protein interaction network. Although high-throughput technologies have generated considerable PPI data for various organisms, they have inevitable drawbacks such as high cost, long experiment times, and inherently high false-positive rates. For this reason, computational methods are drawing more and more attention for predicting PPIs. In this study, we report a computational method for predicting PPIs using the information of protein sequences. The main improvements come from adopting a novel protein sequence representation that applies the discrete cosine transform (DCT) to a substitution matrix representation (SMR) and from using a weighted sparse representation-based classifier (WSRC). When applied to the yeast, human, and H. pylori PPI datasets, the method achieved excellent results, with average accuracies as high as 96.28%, 96.30%, and 86.74%, respectively, significantly better than previous methods. These promising results show that the proposed method is feasible, robust, and powerful. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier. Extensive experiments were also performed in which yeast PPI samples were used as the training set to predict PPIs in datasets from five other species. PMID:26634213

  10. Robust face recognition via sparse representation.

    PubMed

    Wright, John; Yang, Allen Y; Ganesh, Arvind; Sastry, S Shankar; Ma, Yi

    2009-02-01

    We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by l1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
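
    A sketch of the occlusion-handling idea mentioned above: the dictionary is extended with an identity block so that pixel-sparse corruption is absorbed by an explicit error term. Data are synthetic, and Lasso again stands in for exact l1-minimization; sizes and the regularization weight are illustrative.

```python
# Occlusion-robust sparse coding with an extended dictionary [A, I].
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
d, n = 100, 60                                    # feature dim, training samples
A = rng.normal(size=(d, n))
A /= np.linalg.norm(A, axis=0)
y = A[:, :20] @ rng.random(20)                    # clean sample from the first class
y_occ = y.copy()
y_occ[:25] = 1.0                                  # occlude 25 "pixels"

B = np.hstack([A, np.eye(d)])                     # extended dictionary [A, I]
lasso = Lasso(alpha=5e-3, fit_intercept=False, max_iter=50000).fit(B, y_occ)
x, e = lasso.coef_[:n], lasso.coef_[n:]           # code + sparse error estimate
print(np.count_nonzero(np.abs(e) > 1e-3), np.linalg.norm(y - A @ x))
```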

  11. Learning discriminative dictionary for group sparse representation.

    PubMed

    Sun, Yubao; Liu, Qingshan; Tang, Jinhui; Tao, Dacheng

    2014-09-01

    In recent years, sparse representation has been widely used in object recognition applications. How to learn the dictionary is a key issue in sparse representation. A popular method is to use the l1 norm as the sparsity measure of the representation coefficients for dictionary learning. However, the l1 norm treats each atom in the dictionary independently, so the learned dictionary cannot capture the multi-subspace structural information of the data well. In addition, the learned subdictionaries for different classes usually share some common atoms, which weakens the discriminative ability of the reconstruction error of each subdictionary. This paper presents a new dictionary learning model to improve sparse representation for image classification, which aims to learn a class-specific subdictionary for each class and a common subdictionary shared by all classes. The model is composed of a discriminative fidelity term, a weighted group sparse constraint, and a subdictionary incoherence term. The discriminative fidelity encourages each class-specific subdictionary to sparsely represent the samples in the corresponding class. The weighted group sparse constraint term aims at capturing the structural information of the data. The subdictionary incoherence term makes all subdictionaries as independent as possible. Because the common subdictionary represents features shared by all classes, we only use the reconstruction error of each class-specific subdictionary for classification. Extensive experiments are conducted on several public image databases, and the experimental results demonstrate the power of the proposed method compared with the state of the art.

  12. Visual tracking based on extreme learning machine and sparse representation.

    PubMed

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from being time consuming and lacking robustness. To address these issues, a novel tracking method is presented by combining sparse representation with an emerging learning technique, the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, the ELM is used to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine the ELM and sparse representation, the resulting confidence values (i.e., probabilities of being a target) of samples under the ELM classification function are used to construct a new manifold learning constraint term for the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be processed in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
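
    A compact extreme learning machine (ELM) sketch in Python: a random hidden layer followed by a ridge-regularized closed-form readout, used here as a binary target-versus-background scorer on synthetic samples. The architecture and data are illustrative only; the tracking pipeline around it is not shown.

```python
# Minimal ELM: random hidden weights, closed-form linear readout.
import numpy as np

rng = np.random.default_rng(6)

class ELM:
    def __init__(self, n_hidden=200, reg=1e-2):
        self.n_hidden, self.reg = n_hidden, reg

    def fit(self, X, y):
        d = X.shape[1]
        self.W = rng.normal(size=(d, self.n_hidden))     # random input weights
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # hidden activations
        # closed-form readout: (H^T H + reg * I)^{-1} H^T y
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ y)
        return self

    def decision(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

X = rng.normal(size=(400, 32))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=400))        # toy labels in {-1, +1}
elm = ELM().fit(X, y)
print((np.sign(elm.decision(X)) == y).mean())            # training accuracy
```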

  13. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from being time consuming and lacking robustness. To address these issues, a novel tracking method is presented by combining sparse representation with an emerging learning technique, the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, the ELM is used to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine the ELM and sparse representation, the resulting confidence values (i.e., probabilities of being a target) of samples under the ELM classification function are used to construct a new manifold learning constraint term for the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be processed in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359

  14. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which constructing reasonable and effective image classification techniques is of key importance. Sparse representation describes the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any ordinary classifier, it is not perfect in every respect. Ensemble learning is therefore introduced to address this issue: multiple different learners are trained and their outputs are combined to obtain more accurate and more reliable learning results. Accordingly, this paper presents a polarimetric SAR image classification method based on ensemble learning of sparse representations to achieve optimal classification.

  15. Maximum constrained sparse coding for image representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhao, Danpei; Jiang, Zhiguo

    2015-12-01

    Sparse coding exhibits good performance in many computer vision applications by finding bases that capture high-level semantics of the data and learning sparse coefficients in terms of those bases. However, because the bases are non-orthogonal, sparse coding can hardly preserve the similarity between samples, which is important for discrimination. In this paper, a new image representation method called maximum constrained sparse coding (MCSC) is proposed. A sparse representation with more active coefficients carries more similarity information, and the infinity norm is added to the solution for this purpose. We solve the optimization by constraining the maximum of the codes and releasing the residual to other dictionary atoms. Experimental results on image clustering show that our method can preserve the similarity of adjacent samples and maintain the sparsity of the codes simultaneously.

  16. Automatic landslide and mudflow detection method via multichannel sparse representation

    NASA Astrophysics Data System (ADS)

    Chao, Chen; Zhou, Jianjun; Hao, Zhuo; Sun, Bo; He, Jun; Ge, Fengxiang

    2015-10-01

    Landslide and mudflow detection is an important application of aerial images and high-resolution remote sensing images, and it is crucial for national security and disaster relief. Since high-resolution images are often large, it is necessary to develop an efficient algorithm for landslide and mudflow detection. Based on the theory of sparse representation, we propose a novel automatic landslide and mudflow detection method in this paper, which combines multi-channel sparse representation with an eight-neighbor judgment method. The whole detection process is fully automatic. We conducted an experiment on a high-resolution image of Zhouqu district, Gansu province, China, acquired in August 2010, and obtained a promising result that demonstrates the effectiveness of sparse representation for landslide and mudflow detection.

  17. Feature Selection and Pedestrian Detection Based on Sparse Representation.

    PubMed

    Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei

    2015-01-01

    Pedestrian detection research is currently devoted to extracting effective pedestrian features, which has become one of the obstacles in pedestrian detection applications owing to the variety of pedestrian features and their large dimension. Based on a theoretical analysis of six frequently used features, SIFT, SURF, Haar, HOG, LBP, and LSS, and their comparison through experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same descriptive abilities and which are the most stable features. When any two of the six features are fused, the fused feature is sparsely represented to obtain its important components. Sparse subsets of the fused features can be rapidly generated without computing the corresponding dimension indices of these feature descriptors; thus, the speed of feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that the sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same descriptive ability as, and consume less time than, their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, so these two features best describe the characteristics of pedestrians, and the sparse feature subsets of the combined HOG-LSS feature show better distinguishing ability and parsimony. PMID:26295480

  18. Feature Selection and Pedestrian Detection Based on Sparse Representation

    PubMed Central

    Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei

    2015-01-01

    Pedestrian detection research is currently devoted to extracting effective pedestrian features, which has become one of the obstacles in pedestrian detection applications owing to the variety of pedestrian features and their large dimension. Based on a theoretical analysis of six frequently used features, SIFT, SURF, Haar, HOG, LBP, and LSS, and their comparison through experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same descriptive abilities and which are the most stable features. When any two of the six features are fused, the fused feature is sparsely represented to obtain its important components. Sparse subsets of the fused features can be rapidly generated without computing the corresponding dimension indices of these feature descriptors; thus, the speed of feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that the sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same descriptive ability as, and consume less time than, their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, so these two features best describe the characteristics of pedestrians, and the sparse feature subsets of the combined HOG-LSS feature show better distinguishing ability and parsimony. PMID:26295480

  19. Remote sensing image fusion via wavelet transform and sparse representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Liu, Haijun; Liu, Ting; Wang, Feng; Li, Hongsheng

    2015-06-01

    In this paper, we propose a remote sensing image fusion method that combines the wavelet transform and sparse representation to obtain fused images with high spectral resolution and high spatial resolution. First, the intensity-hue-saturation (IHS) transform is applied to the multispectral (MS) images. Then, the wavelet transform is applied to the intensity component of the MS images and to the panchromatic (Pan) image to construct their multi-scale representations. With these multi-scale representations, different fusion strategies are applied to the low-frequency and high-frequency sub-images. Sparse representation with a trained dictionary is introduced into the low-frequency sub-image fusion, where the fusion rule for the sparse representation coefficients of the low-frequency sub-images is defined by the spatial frequency maximum. For the high-frequency sub-images with rich detail information, the fusion rule is established by an image information fusion measurement indicator. Finally, the fused results are obtained through the inverse wavelet transform and the inverse IHS transform. The wavelet transform can extract the spectral information and the global spatial details from the original image pair, while sparse representation can effectively extract the local structures of images. Therefore, the proposed fusion method preserves both the spectral information and the spatial detail information of the original images. Experimental results on remote sensing images demonstrate that the proposed method maintains the spectral characteristics of the fused images well while achieving high spatial resolution.
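
    A small sketch of one ingredient named above, the spatial-frequency-maximum rule: whichever source block has the larger spatial frequency contributes its (sparse) coefficients to the fused low-frequency sub-image. The IHS and wavelet stages, the trained dictionary, and the exact definition used in the paper are outside this snippet; the blocks and codes below are random placeholders.

```python
# Spatial-frequency-maximum fusion rule for two coefficient candidates.
import numpy as np

def spatial_frequency(block):
    rf = np.diff(block, axis=1)                # row-wise differences
    cf = np.diff(block, axis=0)                # column-wise differences
    return np.sqrt(np.mean(rf ** 2) + np.mean(cf ** 2))

def fuse_codes(block_ms, code_ms, block_pan, code_pan):
    """Pick the sparse code of the block with the larger spatial frequency."""
    if spatial_frequency(block_ms) >= spatial_frequency(block_pan):
        return code_ms
    return code_pan

rng = np.random.default_rng(9)
b1, b2 = rng.random((8, 8)), rng.random((8, 8)) * 0.1
c1, c2 = rng.random(32), rng.random(32)
print(fuse_codes(b1, c1, b2, c2) is c1)        # the higher-detail block wins
```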

  20. Learning Stable Multilevel Dictionaries for Sparse Representations.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2015-09-01

    Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.

  1. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key to developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance with linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft-threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods, yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection in wide-area high-resolution aerial imagery.
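
    A sketch of the encoding step described above: low-level feature vectors are projected onto learned basis vectors and passed through a soft-threshold activation to obtain sparse features. The basis here is random purely for illustration; in practice it would be learned from unlabeled data as the abstract describes.

```python
# Soft-threshold sparse feature encoding.
import numpy as np

rng = np.random.default_rng(7)

def soft_threshold_features(X, basis, alpha=0.5):
    """X: (n_samples, d) low-level features; basis: (d, k) basis vectors."""
    proj = X @ basis
    return np.sign(proj) * np.maximum(np.abs(proj) - alpha, 0.0)   # sparse codes

X = rng.normal(size=(10, 128))
basis = rng.normal(size=(128, 64))
basis /= np.linalg.norm(basis, axis=0)
codes = soft_threshold_features(X, basis)
print(codes.shape, (codes != 0).mean())        # fraction of active coefficients
```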

  2. Efficient visual tracking via low-complexity sparse representation

    NASA Astrophysics Data System (ADS)

    Lu, Weizhi; Zhang, Jinglin; Kpalma, Kidiyo; Ronsin, Joseph

    2015-12-01

    Thanks to its good performance in object recognition, sparse representation has recently been widely studied in the area of visual object tracking. Up to now, little attention has been paid to the complexity of sparse representation, as most works have focused on performance improvement. By reducing the computational load related to sparse representation by hundreds of times, this paper proposes by far the most computationally efficient tracking approach based on sparse representation. The proposal simply consists of two stages of sparse representation, one for object detection and the other for object validation. Experimentally, it achieves better performance than some state-of-the-art methods in both accuracy and speed.

  3. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

  4. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods. PMID:24231870

  5. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD algorithm has recently been proposed for this task and shown to perform very well on various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously proposed K-SVD-based grayscale image denoising algorithm. This work puts forward ways of handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  6. Neonatal Atlas Construction Using Sparse Representation

    PubMed Central

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, anatomical feature constraints on the group structure of the representations, as well as the overlap of neighboring patches, are imposed to ensure anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast to construct a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas shows superior performance when used to spatially normalize three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883

  7. Image Super-Resolution via Adaptive Regularization and Sparse Representation.

    PubMed

    Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei

    2016-07-01

    Previous studies have shown that image patches can be well represented as sparse linear combinations of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even when the image is of the usual size, so we propose a new approach for SISR recovery based on nonconvex regularized optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice of lp regularization for all p in (0, 1), for which we propose a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for estimating the best value of the regularization parameter λ adaptively, and we discuss an alternating iteration method for selecting p and λ. Experiments demonstrate that the proposed nonconvex regularized optimization method can outperform the convex optimization method and generate higher-quality images.

  8. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparsity constraints and become more separable. The K-SVD algorithm is employed to find the sparse representations and the corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries, and classification can be achieved by comparing the reconstruction errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.

  9. Group-based sparse representation for image restoration.

    PubMed

    Zhang, Jian; Zhao, Debin; Gao, Wen

    2014-08-01

    Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity for dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationships among patches and results in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of a group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation model of natural images, called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the domain of groups, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. In addition, an effective self-adaptive dictionary learning method with low complexity is designed for each group, rather than learning dictionaries from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to solve the proposed GSR-driven ℓ0 minimization problem for image restoration efficiently. Extensive experiments on image inpainting, image deblurring, and image compressive sensing recovery show that the proposed GSR model outperforms many current state-of-the-art schemes in both peak signal-to-noise ratio and visual perception.

  10. Automatic target recognition using group-structured sparse representation

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Wu, Xuewen; He, Jun; Zhu, Xiaoming; Chen, Chao

    2014-06-01

    The sparse representation classification method has been increasingly used in the fields of computer vision and pattern analysis due to its high recognition rate, little dependence on the features, and robustness to corruption and occlusion. However, most existing methods aim to find the sparsest representation of the test sample y in an overcomplete dictionary and do not particularly consider the structural relationships among the atoms in the dictionary. Moreover, sufficient training samples are always required by the sparse representation method for effective recognition. In this paper we formulate classification as a group-structured sparse representation problem using a sparsity-inducing norm minimization and propose a novel sparse representation-based automatic target recognition (ATR) framework for practical applications in which the training samples are drawn from simulation models of real targets. The experimental results show that the proposed approach improves the recognition rate of standard sparse models, and that our system can effectively and efficiently recognize targets in real environments while retaining the good characteristics of sparse representation-based classification.

  11. Accelerating Dynamic Cardiac MR Imaging Using Structured Sparse Representation

    PubMed Central

    Cai, Nian; Wang, Shengru; Zhu, Shasha

    2013-01-01

    Compressed sensing (CS) has produced promising results for dynamic cardiac MR imaging by exploiting the sparsity of the image series. In this paper, we propose a new method to improve CS reconstruction for dynamic cardiac MRI based on the theory of structured sparse representation. The proposed method uses PCA sub-dictionaries for adaptive sparse representation and suppresses the sparse coding noise to obtain good reconstructions. An accelerated iterative shrinkage algorithm is used to solve the optimization problem and achieve a fast convergence rate. Experimental results demonstrate that the proposed method improves the reconstruction quality of dynamic cardiac cine MRI over the state-of-the-art CS method. PMID:24454528
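
    A plain iterative shrinkage-thresholding (ISTA) sketch for the kind of l1-regularized reconstruction step mentioned above; the paper uses an accelerated variant, and the momentum term is omitted here for brevity. The measurement matrix and signal are synthetic.

```python
# Plain ISTA for min 0.5 * ||A x - y||^2 + lam * ||x||_1.
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage step
    return x

rng = np.random.default_rng(8)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 6, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))        # reconstruction error
```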

  12. Pseudo spectral Chebyshev representation of few-group cross sections on sparse grids

    SciTech Connect

    Bokov, P. M.; Botes, D.; Zimin, V. G.

    2012-07-01

    This paper presents a pseudo spectral method for representing few-group homogenised cross sections, based on hierarchical polynomial interpolation. The interpolation is performed on a multi-dimensional sparse grid built from Chebyshev nodes. The representation is assembled directly from the samples using basis functions that are constructed as tensor products of the classical one-dimensional Lagrangian interpolation functions. The advantage of this representation is that it combines the accuracy of Chebyshev interpolation with the efficiency of sparse grid methods. As an initial test, this interpolation method was used to construct a representation for the two-group macroscopic cross sections of a VVER pin cell. (authors)
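
    A one-dimensional sketch of the interpolation idea: sample a cross-section-like function at Chebyshev nodes and evaluate the resulting polynomial representation elsewhere. The function, parameter range, and node count are invented for illustration, and the sparse-grid machinery over several state parameters is not shown.

```python
# Chebyshev interpolation of a smooth one-parameter cross-section model.
import numpy as np
from numpy.polynomial import chebyshev as C

def xs(burnup):                               # stand-in for a homogenised cross section
    return 1.2 + 0.05 * np.tanh(burnup - 2.0) + 0.01 * burnup

n = 9
nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev nodes on [-1, 1]
burnup = 2.5 * (nodes + 1)                    # map to the parameter range [0, 5]
coeffs = C.chebfit(nodes, xs(burnup), deg=n - 1)

test = np.linspace(0, 5, 7)
approx = C.chebval(test / 2.5 - 1, coeffs)    # map back to [-1, 1] and evaluate
print(np.max(np.abs(approx - xs(test))))      # interpolation error
```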

  13. Inverse lithography using sparse mask representations

    NASA Astrophysics Data System (ADS)

    Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan

    2015-03-01

    We present a novel optimization algorithm for inverse lithography based on optimizing the mask derivative, a domain that is inherently sparse and, for rectilinear polygons, invertible. The method is first developed assuming a point light source and then extended to general incoherent sources. The result is a fast, flexible algorithm that produces manufacturable masks (the search space is constrained to rectilinear polygons) and allows specific constraints, such as minimal line widths, to be imposed. One key trick is to treat polygons as continuous entities, which makes aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated into the optimization without much added complexity. We also explain how to extend the scheme to phase-changing mask optimization.

  14. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents a color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) method (a generalized K-means clustering for QSVD). It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain. PMID:25643407

  15. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A method is defined for iterating over massively large arrays containing sparse data in a way that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory. This makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new approaches. What is novel here is a new approach for efficiently iterating over sparse arrays that is independent of the underlying memory layout of the array. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL and our current program are engaged. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA because of its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
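
    The decoupling idea, illustrated in Python rather than Chapel: client code iterates over (index, value) pairs through one small interface, and the same loop works for a dictionary-of-keys layout and a CSR-like parallel-array layout. The class and method names are invented for this sketch.

```python
# Layout-independent iteration over sparse vectors.
from typing import Iterator, Tuple

class DokVector:
    def __init__(self, data: dict):
        self.data = data                       # {index: value}
    def nonzeros(self) -> Iterator[Tuple[int, float]]:
        yield from sorted(self.data.items())

class CsrVector:
    def __init__(self, indices, values):
        self.indices, self.values = indices, values   # parallel arrays
    def nonzeros(self) -> Iterator[Tuple[int, float]]:
        yield from zip(self.indices, self.values)

def dot(sparse_vec, dense) -> float:
    """Works for any representation exposing nonzeros()."""
    return sum(v * dense[i] for i, v in sparse_vec.nonzeros())

dense = [1.0, 2.0, 3.0, 4.0]
print(dot(DokVector({0: 2.0, 3: -1.0}), dense))   # 2.0 - 4.0 = -2.0
print(dot(CsrVector([0, 3], [2.0, -1.0]), dense)) # same result, different layout
```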

  16. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  17. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness.

  18. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking.

    PubMed

    Yang, Honghong; Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  19. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-09-01

    Sparse representation (SR) and nonlocal techniques (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when they are used independently. To improve performance, a nonlocal supervised coding strategy-based NLT for image restoration is proposed in this paper. The novel method has three main contributions. First, to exploit useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as supervised weights among patches. Second, a novel objective function is proposed that integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.

  20. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-10-01

    Sparse representation (SR) and nonlocal techniques (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when they are used independently. To improve performance, a nonlocal supervised coding strategy-based NLT for image restoration is proposed in this paper. The novel method has three main contributions. First, to exploit useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as supervised weights among patches. Second, a novel objective function is proposed that integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.

  1. SAR target classification based on multiscale sparse representation

    NASA Astrophysics Data System (ADS)

    Ruan, Huaiyu; Zhang, Rong; Li, Jingge; Zhan, Yibing

    2016-03-01

    We propose a novel multiscale sparse representation approach for SAR target classification. It first extracts dense SIFT descriptors at multiple scales and then trains a global multiscale dictionary with a sparse coding algorithm. After obtaining the sparse representations, the method applies spatial pyramid matching (SPM) and max pooling to summarize the features for each image. The proposed method provides more information and descriptive ability than single-scale approaches. Moreover, it costs less extra computation than existing multiscale methods, which compute a dictionary for each scale. The MSTAR database and a ship database collected from TerraSAR-X images are used in the classification experiments. Results show that the best overall classification rate of the proposed approach reaches 98.83% on the MSTAR database and 92.67% on the TerraSAR-X ship database.
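
    A sketch of the pooling stage named above: sparse codes of densely sampled descriptors are max-pooled over a two-level spatial pyramid (1x1 and 2x2 cells) to form one feature vector per image. The grid size, code size, and pyramid levels are illustrative, and the codes here are random rather than produced by sparse coding of SIFT descriptors.

```python
# Spatial pyramid matching with max pooling over a grid of sparse codes.
import numpy as np

rng = np.random.default_rng(10)
grid, k = 16, 128                                   # 16x16 descriptor grid, code length
codes = rng.random((grid, grid, k)) * (rng.random((grid, grid, k)) < 0.1)

def spm_max_pool(codes, levels=(1, 2)):
    feats = []
    for level in levels:
        step = codes.shape[0] // level
        for i in range(level):
            for j in range(level):
                cell = codes[i * step:(i + 1) * step, j * step:(j + 1) * step]
                feats.append(cell.reshape(-1, cell.shape[-1]).max(axis=0))
    return np.concatenate(feats)                    # (1 + 4) * k features

print(spm_max_pool(codes).shape)                    # -> (640,)
```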

  2. Distributed dictionary learning for sparse representation in sensor networks.

    PubMed

    Liang, Junli; Zhang, Miaohua; Zeng, Xianyu; Yu, Guoyang

    2014-06-01

    This paper develops a distributed dictionary learning algorithm for sparse representation of data distributed across the nodes of sensor networks, where sensitive or private data are stored, no fusion center exists, or a big data application is involved. The main contributions of this paper are: 1) we decouple the combined dictionary atom update and nonzero coefficient revision procedure into two-stage operations to facilitate distributed computations, first updating the dictionary atom in terms of the eigenvalue decomposition of the sum of the residual (correlation) matrices across the nodes, and then implementing a local projection operation to obtain the related representation coefficients for each node; 2) we cast the aforementioned atom update problem as a set of decentralized optimization subproblems with consensus constraints. Then, we simplify the multiplier update for symmetric undirected graphs in sensor networks and minimize the separable subproblems to attain consistent estimates iteratively; and 3) dictionary atoms are typically constrained to be of unit norm in order to avoid the scaling ambiguity. We efficiently solve the resultant hidden convex subproblems by determining the optimal Lagrange multiplier. Some experiments are given to show that the proposed algorithm is an alternative distributed dictionary learning approach, and is suitable for the sensor network environment. PMID:24733009

  3. Distributed dictionary learning for sparse representation in sensor networks.

    PubMed

    Liang, Junli; Zhang, Miaohua; Zeng, Xianyu; Yu, Guoyang

    2014-06-01

    This paper develops a distributed dictionary learning algorithm for sparse representation of data distributed across the nodes of sensor networks, where sensitive or private data are stored, no fusion center exists, or a big data application is involved. The main contributions of this paper are: 1) we decouple the combined dictionary atom update and nonzero coefficient revision procedure into two-stage operations to facilitate distributed computations, first updating the dictionary atom in terms of the eigenvalue decomposition of the sum of the residual (correlation) matrices across the nodes, and then implementing a local projection operation to obtain the related representation coefficients for each node; 2) we cast the aforementioned atom update problem as a set of decentralized optimization subproblems with consensus constraints. Then, we simplify the multiplier update for symmetric undirected graphs in sensor networks and minimize the separable subproblems to attain consistent estimates iteratively; and 3) dictionary atoms are typically constrained to be of unit norm in order to avoid the scaling ambiguity. We efficiently solve the resultant hidden convex subproblems by determining the optimal Lagrange multiplier. Some experiments are given to show that the proposed algorithm is an alternative distributed dictionary learning approach, and is suitable for the sensor network environment.

  4. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226

  5. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
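
    Group sparse coding of this kind typically relies on the proximal operator of the mixed l2,1 penalty, which shrinks whole coefficient blocks at once. Below is a minimal illustrative sketch of that operator under the assumption of non-overlapping groups; it is not the DL-GSGR solver itself.

```python
import numpy as np

def block_soft_threshold(x, groups, lam):
    """Proximal operator of the group-sparsity penalty lam * sum_g ||x_g||_2.

    groups: list of index arrays, one per (non-overlapping) group.
    """
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * x[g]   # shrink the whole block
    return out                                     # blocks below lam are zeroed
```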

  6. Color Sparse Representations for Image Processing: Review, Models, and Prospects.

    PubMed

    Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I

    2015-11-01

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework that is based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which yields an efficient color representation.

  7. Half-quadratic-based iterative minimization for robust sparse representation.

    PubMed

    He, Ran; Zheng, Wei-Shi; Tan, Tieniu; Sun, Zhenan

    2014-02-01

    Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they are either for error correction or for error detection, and learning a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework is applicable to performing both error correction and error detection. More specifically, by using the additive form of HQ, we propose an ℓ1-regularized error correction method by iteratively recovering corrupted data from errors incurred by noises and outliers; by using the multiplicative form of HQ, we propose an ℓ1-regularized error detection method by learning from uncorrupted data iteratively. We also show that the ℓ1-regularization solved by soft-thresholding function has a dual relationship to Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
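
    The abstract points to the dual relationship between l1-regularization solved by soft-thresholding and the Huber M-estimator. The two functions are compact enough to write down directly; the snippet below only illustrates those textbook definitions and is not the authors' half-quadratic framework.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam*||.||_1 (the soft-thresholding function)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def huber_loss(r, delta):
    """Huber M-estimator loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))
```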

  8. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even small noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse encoding based classification algorithms have given promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology by modeling the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with the results clearly superior to common state-of-the-art methodologies in different scenarios.

  9. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements on remote sensing data in a vast amount of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data with other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse the spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with spatial resolution of 30m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250m ~ 1km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details well from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat

  10. Supervised Discriminative Group Sparse Representation for Mild Cognitive Impairment Diagnosis.

    PubMed

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2015-07-01

    Research on the early detection of Mild Cognitive Impairment (MCI), a prodromal stage of Alzheimer's Disease (AD), with resting-state functional Magnetic Resonance Imaging (rs-fMRI) has been of great interest for the last decade. Witnessed by recent studies, functional connectivity is a useful concept in extracting brain network features and finding biomarkers for brain disease diagnosis. However, estimating functional connectivity from rs-fMRI remains challenging due to the inherently high dimensionality of the problem. In order to tackle this problem, we utilize a group sparse representation along with a structural equation model. Unlike the conventional group sparse representation method, which does not explicitly consider class-label information that can help enhance the diagnostic performance, in this paper, we propose a novel supervised discriminative group sparse representation method by penalizing a large within-class variance and a small between-class variance of connectivity coefficients. Thanks to the newly devised penalization terms, we can learn connectivity coefficients that are similar within the same class and distinct between classes, thus helping enhance the diagnostic accuracy. The proposed method also allows the learned common network structure to preserve the network-specific and label-related characteristics. In our experiments on the rs-fMRI data of 37 subjects (12 MCI; 25 healthy normal control) with a cross-validation technique, we demonstrated the validity and effectiveness of the proposed method, showing a diagnostic accuracy of 89.19% and a sensitivity of 0.9167.

  11. Discriminative object tracking via sparse representation and online dictionary learning.

    PubMed

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: the local sparse coding with an online updated discriminative dictionary for tracking (SOD part), and the keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during the tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD. With the help of sparse representation and the online updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.

  12. Inpainting with sparse linear combinations of exemplars

    SciTech Connect

    Wohlberg, Brendt

    2008-01-01

    We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling in of missing regions of other exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
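
    The core of the approach above is a sparse linear combination of exemplar blocks fitted only on the known pixels. The snippet below is a minimal sketch of that idea using an l1-penalized least-squares fit (scikit-learn's Lasso); the function and parameter names are placeholders, and the original work may use a different sparse solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

def inpaint_block(block, mask, exemplars, lam=0.01):
    """Fill a block from a sparse combination of exemplar blocks.

    block     : flattened patch with missing pixels
    mask      : boolean array, True where pixels are known
    exemplars : (n_pixels, n_exemplars) matrix of candidate blocks
    """
    block = np.asarray(block, dtype=float).copy()
    lasso = Lasso(alpha=lam, max_iter=5000)
    lasso.fit(exemplars[mask], block[mask])        # fit on the known pixels only
    estimate = exemplars @ lasso.coef_ + lasso.intercept_
    block[~mask] = estimate[~mask]                 # keep known pixels untouched
    return block
```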

  13. Online Signature Verification Based on DCT and Sparse Representation.

    PubMed

    Liu, Yishu; Yang, Zhihua; Yang, Lihua

    2015-11-01

    In this paper, a novel online signature verification technique based on discrete cosine transform (DCT) and sparse representation is proposed. We find a new property of DCT, which can be used to obtain a compact representation of an online signature using a fixed number of coefficients, leading to simple matching procedures and providing an effective alternative to deal with time series of different lengths. The property is also used to extract energy features. Furthermore, a new attempt to apply sparse representation to online signature verification is made, and a novel task-specific method for building overcomplete dictionaries is proposed, then sparsity features are extracted. Finally, energy features and sparsity features are concatenated to form a feature vector. Experiments are conducted on the Sabancı University's Signature Database (SUSIG)-Visual and SVC2004 databases, and the results show that our proposed method authenticates persons very reliably with a verification performance which is better than those of state-of-the-art methods on the same databases.
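
    The fixed-length DCT representation described above can be illustrated in a few lines: truncate the orthonormal DCT of a variable-length pen trajectory to a fixed number of coefficients. This is a hedged sketch of the general idea, not the authors' exact feature pipeline; the function name and coefficient count are assumptions.

```python
import numpy as np
from scipy.fft import dct

def dct_signature_features(x, n_coeffs=64):
    """Compact fixed-length representation: keep the first DCT coefficients.

    x : 1-D trajectory (e.g. pen x-coordinate over time), any length.
    """
    c = dct(np.asarray(x, dtype=float), norm='ortho')
    out = np.zeros(n_coeffs)
    k = min(n_coeffs, len(c))
    out[:k] = c[:k]                 # zero-pad if the signature is very short
    return out
```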

  14. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.
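
    Sparse representation based classification (SRC) of the kind used here codes a query over a dictionary of training samples and assigns the class whose atoms give the smallest reconstruction residual. Below is a generic, hedged sketch of that decision rule, with an l1-penalized solver standing in for the exact l1-minimization; names and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, lam=0.01):
    """Sparse-representation classification by per-class reconstruction residual.

    D      : (dim, n_atoms) dictionary of training feature vectors
    labels : (n_atoms,) class label of each atom
    y      : (dim,) query feature vector
    """
    coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    x = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)        # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)      # class with smallest residual
```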

  15. MR image super-resolution reconstruction using sparse representation, nonlocal similarity and sparse derivative prior.

    PubMed

    Zhang, Di; He, Jiazhong; Zhao, Yun; Du, Minghui

    2015-03-01

    In magnetic resonance (MR) imaging, image spatial resolution is determined by various instrumental limitations and physical considerations. This paper presents a new algorithm for producing a high-resolution version of a low-resolution MR image. The proposed method consists of two consecutive steps: (1) reconstructs a high-resolution MR image from a given low-resolution observation via solving a joint sparse representation and nonlocal similarity L1-norm minimization problem; and (2) applies a sparse derivative prior based post-processing to suppress blurring effects. Extensive experiments on simulated brain MR images and two real clinical MR image datasets validate that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both quantitative measures and visual perception.

  16. A MRI-CT prostate registration using sparse representation technique

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    Purpose: To develop a new MRI-CT prostate registration using a patch-based deformation prediction framework to improve MRI-guided prostate radiotherapy by incorporating multiparametric MRI into planning CT images. Methods: The main contribution is to estimate the deformation between prostate MRI and CT images in a patch-wise fashion by using the sparse representation technique. We assume that two image patches should follow the same deformation if their patch-wise appearance patterns are similar. Specifically, there are two stages in our proposed framework, i.e., the training stage and the application stage. In the training stage, each prostate MR image is carefully registered to the corresponding CT image, and all training MR and CT images are carefully registered to a selected CT template. Thus, we obtain the dense deformation field for each training MR and CT image. In the application stage, for registering a new subject MR image with the same subject's CT image, we first select a small number of key points at the distinctive regions of this subject CT image. Then, for each key point in the subject CT image, we extract the image patch centered at the underlying key point. Then, we adaptively construct the coupled dictionary for the underlying point, where each atom in the dictionary consists of image patches and the respective deformations obtained from training pair-wise MRI-CT images. Next, the subject image patch can be sparsely represented by a linear combination of training image patches in the dictionary, and we apply the same sparse coefficients to the respective deformations in the dictionary to predict the deformation for the subject MR image patch. After we repeat the same procedure for each subject CT key point, we use B-splines to interpolate a dense deformation field, which is used as the initialization to allow the registration algorithm to estimate the remaining small segment of deformations from MRI to CT image

  17. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  18. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  19. Magnetic resonance brain tissue segmentation based on sparse representations

    NASA Astrophysics Data System (ADS)

    Rueda, Andrea

    2015-12-01

    Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of main tissues or specific structures is challenging due to the anatomic variability and complexity, and the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are singly related to tissue labels at the level of small patches, and this information is gathered in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to the original segmentations).
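
    The coupled-dictionary step can be sketched generically: code an intensity patch on the intensity dictionary and reuse the same coefficients on the aligned segmentation dictionary. The snippet below is only an illustration under that assumption; orthogonal matching pursuit stands in for whatever sparse solver the paper actually uses, and D_int/D_seg are hypothetical names.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def predict_label_patch(D_int, D_seg, patch, n_nonzero=5):
    """Coupled-dictionary sketch: code `patch` on D_int, decode with D_seg.

    D_int, D_seg : (n_pixels, n_atoms) paired intensity/segmentation dictionaries
    patch        : (n_pixels,) flattened intensity patch
    """
    alpha = orthogonal_mp(D_int, patch, n_nonzero_coefs=n_nonzero)
    return D_seg @ alpha            # estimated segmentation patch
```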

  20. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  1. Robust image analysis with sparse representation on quantized visual features.

    PubMed

    Bao, Bing-Kun; Zhu, Guangyu; Shen, Jialie; Yan, Shuicheng

    2013-03-01

    Recent techniques based on sparse representation (SR) have demonstrated promising performance in high-level visual recognition, exemplified by the highly accurate face recognition under occlusion and other sparse corruptions. Most research in this area has focused on classification algorithms using raw image pixels, and very few have been proposed to utilize quantized visual features, such as the popular bag-of-words feature abstraction. In such cases, besides the inherent quantization errors, ambiguity associated with visual word assignment and mis-detection of feature points, due to factors such as visual occlusions and noise, constitute the major causes of dense corruptions of the quantized representation. The dense corruptions can jeopardize the decision process by distorting the patterns of the sparse reconstruction coefficients. In this paper, we aim to eliminate the corruptions and achieve robust image analysis with SR. Toward this goal, we introduce two transfer processes (ambiguity transfer and mis-detection transfer) to account for the two major sources of corruption as discussed. By reasonably assuming the rarity of the two kinds of distortion processes, we augment the original SR-based reconstruction objective with l(0)-norm regularization on the transfer terms to encourage sparsity and, hence, discourage dense distortion/transfer. Computationally, we relax the nonconvex l(0)-norm optimization into a convex l(1)-norm optimization problem, and employ the accelerated proximal gradient method, whose updating procedure has provable convergence. Extensive experiments on four benchmark datasets, Caltech-101, Caltech-256, Corel-5k, and CMU pose, illumination, and expression, manifest the necessity of removing the quantization corruptions and the various advantages of the proposed framework.

  2. Classification of transient signals using sparse representations over adaptive dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Myers, Kary L.; Pawley, Norma H.

    2011-06-01

    Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same function class as the dictionary atoms. The pursuit search however has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in background content and noise levels. The target classification decision is obtained in almost real-time via a parallel, vectorized implementation.

  3. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    PubMed

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
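
    The sketch construction described above reduces to keeping only the strongest time-frequency peaks of a spectrogram at a target density. The snippet below is a minimal illustration of that peak-picking idea, not the authors' exact algorithm; the function name and arguments are assumptions.

```python
import numpy as np

def sketch_spectrogram(S, peaks_per_second, duration_s):
    """Keep only the largest time-frequency peaks of a magnitude spectrogram S."""
    n_keep = max(1, int(round(peaks_per_second * duration_s)))
    flat = S.ravel()
    keep = np.argsort(flat)[-n_keep:]          # indices of the strongest peaks
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]                  # everything else is discarded
    return sparse.reshape(S.shape)
```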

  4. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  5. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable

    PubMed Central

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition. PMID:26950589

  6. 3D Ear Identification Based on Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the stage of classification, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247

  7. Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation.

    PubMed

    Guo, Yimo; Zhao, Guoying; Pietikainen, Matti

    2016-05-01

    In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in spatial domain and topological evolution information in temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison. PMID:26955032

  8. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting the dictionary from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with the classic SVM and RVM and analyze the recognition effect and time efficiency. The results of the simulation experiments show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result. PMID:26880878

  9. Inpainting of historical seismograms using sparse representation method

    NASA Astrophysics Data System (ADS)

    Wang, Lifu; Sun, Yi; Cai, Xiaogang

    2015-01-01

    This paper presents a method of inpainting historical seismograms recorded by a pen-and-paper drum-type seismograph. In a seismogram, some portions of the wave may be lost or distorted owing to time marks or violent shaking. In this study, the seismic waveform is divided into several frames of equal length, and the lost or distorted portions are restored frame by frame. Because a seismogram contains several repetitive patterns across the entire waveform, each frame can be sparsely represented on the basis of these patterns. Therefore, the sparse representation model is employed to represent historical seismograms. In addition, an inpainting model that employs sparsity as a prior is formulated, and it is used to restore the lost portions by solving an L0-norm minimization problem. However, this minimization problem may be ill posed and result in an incorrect outcome if the missing interval of the wave is very long. Therefore, to solve this ill-posed problem, a prior based on the Fourier spectrum of the waveform is added to the inpainting method. Simulation results show that the proposed inpainting method can restore the missing wave well.

  10. Pedestrian detection from thermal images: A sparse representation based approach

    NASA Astrophysics Data System (ADS)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex background, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in a unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and individual dictionaries, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.

  11. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding, and the periodic variation of load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and the impulse-missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theories, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are directly identified from the fault signal by a correlation filtering method. This leads to a high similarity between atoms and defect-induced impulses, and also a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and calculation speed of the sparse coefficient solution, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
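
    A dictionary built from damped second-order unit impulse responses, as described above, can be sketched in a few lines. The snippet below is illustrative only; it assumes the natural frequency and relative damping ratio have already been identified (e.g., by correlation filtering), and the function name is a placeholder.

```python
import numpy as np

def impulse_atom(fn, zeta, fs, length):
    """Unit-norm impulse response of a damped second-order system.

    fn    : natural frequency (Hz), assumed identified from the fault signal
    zeta  : relative damping ratio (0 < zeta < 1)
    fs    : sampling frequency (Hz)
    """
    t = np.arange(length) / fs
    wd = 2 * np.pi * fn * np.sqrt(1 - zeta ** 2)     # damped natural frequency
    atom = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(wd * t)
    return atom / (np.linalg.norm(atom) + 1e-12)     # normalize to unit norm
```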

  12. Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries

    PubMed Central

    Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.

    2015-01-01

    Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slowly varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494

  13. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  14. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations.

    PubMed

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper, we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method that converts the MMV problem into an SMV problem is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. The MMV problem can be converted into a single measurement vector (SMV) problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for sparse solution calculation. Thus, the complexity of the proposed algorithm is lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of the conventional sparsity-based DOA estimation approaches that the unknown directions must lie on a predefined discrete angular grid, so it can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on the SRBWEV algorithm. In this proposed algorithm, the largest and second-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further refine the DOA estimate according to the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521

  15. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations

    PubMed Central

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper, we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method that converts the MMV problem into an SMV problem is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. The MMV problem can be converted into a single measurement vector (SMV) problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for sparse solution calculation. Thus, the complexity of the proposed algorithm is lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of the conventional sparsity-based DOA estimation approaches that the unknown directions must lie on a predefined discrete angular grid, so it can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on the SRBWEV algorithm. In this proposed algorithm, the largest and second-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further refine the DOA estimate according to the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521

  16. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) with a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-Ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient. PMID:23286160
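
    A block-coordinate dictionary update of the kind mentioned above sweeps over the atoms one at a time with the sparse codes held fixed. The sketch below is a simplified, first-order version of such an atom sweep, given only as an illustration under that assumption; it is not the exact update rule used in the paper.

```python
import numpy as np

def update_dictionary(D, X, A):
    """One block-coordinate sweep over dictionary atoms with fixed codes.

    D : (dim, k) current dictionary    X : (dim, n) training signals/shapes
    A : (k, n) fixed sparse codes of X over D
    """
    R = X - D @ A                            # current reconstruction residual
    for j in range(D.shape[1]):
        aj = A[j, :]
        if np.allclose(aj, 0):
            continue                         # atom unused by any signal
        Rj = R + np.outer(D[:, j], aj)       # residual with atom j removed
        dj = Rj @ aj                         # direction that best explains Rj
        dj /= (np.linalg.norm(dj) + 1e-12)   # keep the atom unit-norm
        R = Rj - np.outer(dj, aj)
        D[:, j] = dj
    return D
```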

  17. Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Girard, J. N.; Garsden, H.; Starck, J. L.; Corbel, S.; Woiselle, A.; Tasse, C.; McKean, J. P.; Bobin, J.

    2015-08-01

    Compressed sensing theory is slowly making its way into solving more and more astronomical inverse problems. We address here the application of sparse representations, convex optimization and proximal theory to radio interferometric imaging. First, we expose the theory behind interferometric imaging, sparse representations and convex optimization, and second, we illustrate their application with numerical tests with SASIR, an implementation of FISTA, a Forward-Backward splitting algorithm, hosted in a LOFAR imager. Various tests have been conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution of a factor ≈ 2) with point sources as compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range, and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five times fewer residuals) of the extended emission as compared to CLEAN. With the advent of large radio telescopes, there is scope for improving classical imaging methods with convex optimization methods combined with sparse representations.
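
    FISTA, the accelerated forward-backward scheme named above, alternates a gradient step on the smooth data-fidelity term with a proximal step and a momentum update. A generic, hedged sketch of that loop follows; the callables grad and prox are placeholders for the problem-specific gradient and proximal operator (e.g., a soft threshold in a sparsifying transform), and this is not the SASIR code.

```python
import numpy as np

def fista(grad, prox, x0, L, n_iter=200):
    """Generic FISTA loop for problems of the form min_x f(x) + g(x).

    grad : callable returning the gradient of the smooth term f
    prox : callable prox(z, step) returning the proximal step of g
    L    : Lipschitz constant of grad
    """
    x = x0.copy()
    z = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = prox(z - grad(z) / L, 1.0 / L)          # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```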

  18. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.

    PubMed

    Peng, Yong; Lu, Bao-Liang; Wang, Suhang

    2015-05-01

    Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among the existing graph-based learning models, low-rank representation (LRR) is a very competitive one, which has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated based on the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR explicitly takes the local manifold structure of the data into consideration, which is identified through the geometric sparsity idea; specifically, the local tangent space of each data point is sought by solving a sparse representation objective. The graph depicting the relationships among data points can then be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines both the global information emphasized by the low-rank property and the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches. PMID:25634552
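
    A simplified sketch of the geometric-sparsity step: code each sample over its nearest neighbors with an l1 penalty and reuse the coefficient magnitudes as graph edge weights. This only approximates the MLRR formulation (no low-rank term, hypothetical data, assumed neighborhood size and penalty):

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.neighbors import NearestNeighbors

      def sparse_manifold_graph(X, n_neighbors=10, alpha=0.01):
          # Sparse-code each sample over its k nearest neighbors and use the
          # absolute coefficients as (symmetrized) graph edge weights.
          n = X.shape[0]
          W = np.zeros((n, n))
          nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
          _, idx = nn.kneighbors(X)
          for i in range(n):
              nbrs = idx[i, 1:]                       # drop the point itself
              coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
              coder.fit(X[nbrs].T, X[i])              # neighbors act as atoms
              W[i, nbrs] = np.abs(coder.coef_)
          return 0.5 * (W + W.T)

      X = np.random.default_rng(2).standard_normal((100, 20))
      W = sparse_manifold_graph(X)
      print("average degree:", (W > 0).sum(axis=1).mean())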

  19. Sparse representation utilizing tight frame for phase retrieval

    NASA Astrophysics Data System (ADS)

    Shi, Baoshun; Lian, Qiusheng; Chen, Shuzhen

    2015-12-01

    We treat the phase retrieval (PR) problem of reconstructing the signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques have been used to address this problem by utilizing various priors such as non-negativity, support, and Fourier magnitude constraints. Recent methods exploiting sparsity have been developed to improve the reconstruction quality. However, previous algorithms utilizing a sparsity prior suffer either from low reconstruction quality at low oversampling factors or from sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate a sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the corresponding non-convex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that our proposed algorithm obtains better reconstruction quality than the conventional alternating projection methods, and even outperforms recent sparsity-based algorithms in terms of reconstruction quality.
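
    A toy 1-D illustration of alternating between the Fourier-magnitude constraint and wavelet-domain soft-thresholding. It uses an ordinary Haar decomposition from PyWavelets rather than the TIHP tight frame, and a fixed threshold, so it is only a simplified stand-in for the proposed algorithm:

      import numpy as np
      import pywt

      def pr_sparse_haar(mag, n_iter=300, thresh=0.05, wavelet="haar", level=3):
          # Alternate: (1) enforce the measured Fourier magnitude,
          #            (2) soft-threshold Haar wavelet coefficients (sparsity prior).
          x = np.random.default_rng(3).standard_normal(mag.shape[0])
          for _ in range(n_iter):
              X = np.fft.fft(x)
              X = mag * np.exp(1j * np.angle(X))            # magnitude constraint
              x = np.real(np.fft.ifft(X))
              coeffs = pywt.wavedec(x, wavelet, level=level)
              coeffs = [pywt.threshold(c, thresh, mode="soft") for c in coeffs]
              x = pywt.waverec(coeffs, wavelet)[: mag.shape[0]]
          return x

      # Synthetic 1-D test signal that is sparse in the Haar domain.
      x_true = np.zeros(128); x_true[30:50] = 1.0; x_true[80:90] = -0.5
      mag = np.abs(np.fft.fft(x_true))
      x_hat = pr_sparse_haar(mag)
      print("Fourier-magnitude misfit:", np.linalg.norm(np.abs(np.fft.fft(x_hat)) - mag))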

  1. Extreme learning machine and adaptive sparse representation for image classification.

    PubMed

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency. PMID:27389571
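
    A compact sketch of the two-stage decision rule: a basic ELM (random hidden layer plus ridge output weights), a score-margin reliability test, and a fallback residual rule over an adaptive sub-dictionary restricted to the ELM's top classes. The margin threshold, the least-squares class residual (used in place of full sparse coding) and the random data are assumptions for illustration only:

      import numpy as np

      rng = np.random.default_rng(0)

      def train_elm(X, y, n_classes, n_hidden=200, reg=1e-2):
          # Random hidden layer + ridge-regression output weights (a basic ELM).
          W = rng.standard_normal((X.shape[1], n_hidden))
          b = rng.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)
          T = np.eye(n_classes)[y]                                  # one-hot targets
          beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
          return W, b, beta

      def class_residual(x, D_c):
          # Least-squares reconstruction residual over one class's training columns,
          # a simplified stand-in for the sparse-coding residual used by SRC.
          coef, *_ = np.linalg.lstsq(D_c, x, rcond=None)
          return np.linalg.norm(x - D_c @ coef)

      def hybrid_classify(x, elm, D, labels, margin=0.2, top=3):
          W, b, beta = elm
          scores = np.tanh(x @ W + b) @ beta
          order = np.argsort(scores)[::-1]
          if scores[order[0]] - scores[order[1]] > margin:          # reliability test
              return int(order[0])                                  # trust the ELM
          # Otherwise fall back to the residual rule on an adaptive sub-dictionary
          # restricted to the ELM's top candidate classes.
          res = {int(c): class_residual(x, D[:, labels == c]) for c in order[:top]}
          return min(res, key=res.get)

      # Toy usage with random data standing in for image features.
      X = rng.standard_normal((300, 64)); y = rng.integers(0, 5, 300)
      elm = train_elm(X, y, n_classes=5)
      print(hybrid_classify(rng.standard_normal(64), elm, X.T, y))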

  2. Robust Ear Recognition via Nonnegative Sparse Representation of Gabor Orientation Information

    PubMed Central

    Zhang, Baoqing; Mu, Zhichun; Zeng, Hui; Luo, Shuang

    2014-01-01

    Orientation information is critical to the accuracy of ear recognition systems. In this paper, a new feature extraction approach is investigated for ear recognition by using orientation information of Gabor wavelets. The proposed Gabor orientation feature can not only avoid too much redundancy in conventional Gabor feature but also tend to extract more precise orientation information of the ear shape contours. Then, Gabor orientation feature based nonnegative sparse representation classification (Gabor orientation + NSRC) is proposed for ear recognition. Compared with SRC in which the sparse coding coefficients can be negative, the nonnegativity of NSRC conforms to the intuitive notion of combining parts to form a whole and therefore is more consistent with the biological modeling of visual data. Additionally, the use of Gabor orientation features increases the discriminative power of NSRC. Extensive experimental results show that the proposed Gabor orientation feature based nonnegative sparse representation classification paradigm achieves much better recognition performance and is found to be more robust to challenging problems such as pose changes, illumination variations, and ear partial occlusion in real-world applications. PMID:24723792

  4. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    PubMed

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. The seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with a false detection rate of 0.23/h and an average latency of -5.14 s have been achieved with the proposed method.
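
    A hedged sketch of the residual-comparison step: learn one dictionary per class, elastic-net code a test sample over each, and label it by the smaller reconstruction residual. The feature vectors, dictionary sizes and elastic-net parameters below are placeholders, and the paper's preprocessing and kernel steps are omitted:

      import numpy as np
      from sklearn.linear_model import ElasticNet
      from sklearn.decomposition import MiniBatchDictionaryLearning

      def fit_dictionary(X, n_atoms=32):
          # Stand-in for the paper's online dictionary learning on EEG features.
          return MiniBatchDictionaryLearning(n_components=n_atoms,
                                             random_state=0).fit(X).components_

      def residual(x, D, alpha=0.01, l1_ratio=0.5):
          # Elastic-net sparse coding of one test sample over a dictionary,
          # followed by the reconstruction residual.
          coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                             fit_intercept=False, max_iter=10000)
          coder.fit(D.T, x)
          return np.linalg.norm(x - D.T @ coder.coef_)

      rng = np.random.default_rng(0)
      ictal_train = rng.standard_normal((200, 64))      # placeholder feature vectors
      interictal_train = rng.standard_normal((200, 64))
      D_sz, D_ns = fit_dictionary(ictal_train), fit_dictionary(interictal_train)

      x_test = rng.standard_normal(64)
      label = "seizure" if residual(x_test, D_sz) < residual(x_test, D_ns) else "nonseizure"
      print(label)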

  5. Robust ellipse fitting based on sparse combination of data points.

    PubMed

    Liang, Junli; Zhang, Miaohua; Liu, Ding; Zeng, Xianju; Ojowu, Ode; Zhao, Kexin; Li, Zhan; Liu, Han

    2013-06-01

    Ellipse fitting is widely applied in the fields of computer vision and automatic industrial control, where the ellipse fitting procedure often follows the preprocessing step of edge detection in the original image. The ellipse fitting method therefore also depends on the accuracy of edge detection besides its own performance, especially because the outliers and edge point errors introduced by edge detection can cause severe performance degradation. In this paper, we develop a robust ellipse fitting method to alleviate the influence of outliers. The proposed algorithm solves for the ellipse parameters by linearly combining a subset of ("more accurate") data points (formed from edge points) rather than all data points (which contain possible outliers). In addition, considering that squaring the fitting residuals can magnify the contributions of these extreme data points, our algorithm replaces it with the absolute residuals to reduce this influence. Moreover, the norm of the data point errors is bounded, and a worst-case performance optimization is formulated to be robust against data point errors. The resulting mixed l1-l2 optimization problem is further derived as a second-order cone programming one and solved by computationally efficient interior-point methods. Note that the fitting approach developed in this paper specifically deals with the overdetermined system, whereas the current sparse representation theory is only applied to underdetermined systems. Therefore, the proposed algorithm can be looked upon as an extended application and development of the sparse representation theory. Simulated and experimental examples are presented to illustrate the effectiveness of the proposed ellipse fitting approach. PMID:23412616
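
    A small illustration of the l1-residual part of the formulation only (not the full bounded-error, worst-case second-order cone program): fit the conic coefficients by minimizing the sum of absolute algebraic residuals under a simple normalization, here with CVXPY on synthetic points plus a few outliers:

      import numpy as np
      import cvxpy as cp

      # Noisy points on an ellipse plus a few outliers.
      rng = np.random.default_rng(0)
      t = rng.uniform(0, 2 * np.pi, 80)
      x = 3 * np.cos(t) + 0.05 * rng.standard_normal(80)
      y = 2 * np.sin(t) + 0.05 * rng.standard_normal(80)
      x[:5] += 2.0; y[:5] -= 2.0                         # outliers

      # Conic model a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 with the
      # normalization a + c = 1; minimize the sum of absolute algebraic
      # residuals (the l1 part of the paper's mixed l1-l2 formulation).
      D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
      p = cp.Variable(6)
      prob = cp.Problem(cp.Minimize(cp.norm1(D @ p)), [p[0] + p[2] == 1])
      prob.solve()
      print("conic coefficients:", np.round(p.value, 3))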

  7. Temporal Super Resolution Enhancement of Echocardiographic Images Based on Sparse Representation.

    PubMed

    Gifani, Parisa; Behnam, Hamid; Haddadi, Farzan; Sani, Zahra Alizadeh; Shojaeifard, Maryam

    2016-01-01

    A challenging issue for echocardiographic image interpretation is the accurate analysis of small transient motions of myocardium and valves during real-time visualization. A higher frame rate video may reduce this difficulty, and temporal super resolution (TSR) is useful for illustrating the fast-moving structures. In this paper, we introduce a novel framework that optimizes TSR enhancement of echocardiographic images by utilizing temporal information and sparse representation. The goal of this method is to increase the frame rate of echocardiographic videos, and therefore enable more accurate analyses of moving structures. For the proposed method, we first derived temporal information by extracting intensity variation time curves (IVTCs) assessed for each pixel. We then designed both low-resolution and high-resolution overcomplete dictionaries based on prior knowledge of the temporal signals and a set of prespecified known functions. The IVTCs can then be described as linear combinations of a few prototype atoms in the low-resolution dictionary. We used the Bayesian compressive sensing (BCS) sparse recovery algorithm to find the sparse coefficients of the signals. We extracted the sparse coefficients and the corresponding active atoms in the low-resolution dictionary to construct new sparse coefficients corresponding to the high-resolution dictionary. Using the estimated atoms and the high-resolution dictionary, a new IVTC with more samples was constructed. Finally, by placing the new IVTC signals in the original IVTC positions, we were able to reconstruct the original echocardiography video with more frames. The proposed method does not require training of low-resolution and high-resolution dictionaries, nor does it require motion estimation; it does not blur fast-moving objects, and does not have blocking artifacts.

  8. Weighted sparse representation for human ear recognition based on local descriptor

    NASA Astrophysics Data System (ADS)

    Mawloud, Guermoui; Djamel, Melaab

    2016-01-01

    A two-stage ear recognition framework is presented in which two local descriptors and a sparse representation algorithm are combined. In the first stage, the algorithm deduces a subset of the training neighbors closest to the test ear sample; the selection is based on a K-nearest neighbors classifier in the pattern of oriented edge magnitude feature space. In the second stage, the co-occurrence of adjacent local binary pattern features is extracted from the preselected subset and combined to form a dictionary. Afterward, a sparse representation classifier is employed on the resulting dictionary in order to infer the element closest to the test sample. By splitting the ear image into a number of segments and applying the described recognition routine to each of them, the algorithm assigns a final class label based on majority voting over the labels produced by the individual segments. Experimental results demonstrate the effectiveness as well as the robustness of the proposed scheme over leading state-of-the-art methods. Especially when the ear image is occluded, the proposed algorithm exhibits great robustness and matches the recognition performance reported in the state of the art.
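
    A minimal sketch of the two-stage pipeline on generic feature vectors: K-nearest-neighbor preselection followed by sparse coding over the preselected sub-dictionary and a minimum class-residual decision. The specific POEM/LBP descriptors and the per-segment majority vote of the paper are not reproduced, and the data are random placeholders:

      import numpy as np
      from sklearn.neighbors import NearestNeighbors
      from sklearn.linear_model import orthogonal_mp

      def two_stage_classify(x, train_feats, train_labels, n_candidates=20, k=5):
          # Stage 1: keep only the closest training samples to the test sample.
          nn = NearestNeighbors(n_neighbors=n_candidates).fit(train_feats)
          _, idx = nn.kneighbors(x[None, :])
          cand = idx[0]
          # Stage 2: sparse-code the test sample over the preselected sub-dictionary
          # and pick the class with the smallest reconstruction residual.
          D = train_feats[cand].T                               # atoms as columns
          code = orthogonal_mp(D, x, n_nonzero_coefs=k)
          best, best_res = None, np.inf
          for c in np.unique(train_labels[cand]):
              mask = train_labels[cand] == c
              res = np.linalg.norm(x - D[:, mask] @ code[mask])
              if res < best_res:
                  best, best_res = c, res
          return best

      rng = np.random.default_rng(0)
      feats = rng.standard_normal((500, 128)); labels = rng.integers(0, 50, 500)
      print(two_stage_classify(rng.standard_normal(128), feats, labels))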

  9. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning, stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class object feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition. PMID:26906591

  11. Multi-source adaptation joint kernel sparse representation for visual classification.

    PubMed

    Tao, JianWen; Hu, Wenjun; Wen, Shiting

    2016-04-01

    Most of the existing domain adaptation learning (DAL) methods relies on a single source domain to learn a classifier with well-generalized performance for the target domain of interest, which may lead to the so-called negative transfer problem. To this end, many multi-source adaptation methods have been proposed. While the advantages of using multi-source domains of information for establishing an adaptation model have been widely recognized, how to boost the robustness of the computational model for multi-source adaptation learning has only recently received attention. To address this issue for achieving enhanced performance, we propose in this paper a novel algorithm called multi-source Adaptation Regularization Joint Kernel Sparse Representation (ARJKSR) for robust visual classification problems. Specifically, ARJKSR jointly represents target dataset by a sparse linear combination of training data of each source domain in some optimal Reproduced Kernel Hilbert Space (RKHS), recovered by simultaneously minimizing the inter-domain distribution discrepancy and maximizing the local consistency, whilst constraining the observations from both target and source domains to share their sparse representations. The optimization problem of ARJKSR can be solved using an efficient alternative direction method. Under the framework ARJKSR, we further learn a robust label prediction matrix for the unlabeled instances of target domain based on the classical graph-based semi-supervised learning (GSSL) diagram, into which multiple Laplacian graphs constructed with the ARJKSR are incorporated. The validity of our method is examined by several visual classification problems. Results demonstrate the superiority of our method in comparison to several state-of-the-arts. PMID:26894961

  12. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    PubMed

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in the R-D performance due to the side information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 in the Common Test Condition.
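
    An illustrative two-layer approximation of a residual block: a greedy sparse layer over a (here random) trained dictionary, followed by keeping the largest DCT coefficients of whatever the sparse layer could not capture. Rate-distortion optimization, quantization and entropy coding of the actual codec are omitted; dictionary and data are placeholders:

      import numpy as np
      from scipy.fft import dct, idct

      def two_layer_code(residual, D, n_atoms=4, n_dct=8):
          # Layer 1: greedy sparse approximation over a dictionary D whose
          # columns hold featured residual patterns.
          r, support, coef = residual.copy(), [], None
          for _ in range(n_atoms):
              support.append(int(np.argmax(np.abs(D.T @ r))))
              coef, *_ = np.linalg.lstsq(D[:, support], residual, rcond=None)
              r = residual - D[:, support] @ coef
          layer1 = D[:, support] @ coef
          # Layer 2: keep only the largest DCT coefficients of the remaining
          # signal (a crude stand-in for the cascaded DCT stage).
          c = dct(r, norm="ortho")
          keep = np.argsort(np.abs(c))[::-1][:n_dct]
          c_sparse = np.zeros_like(c); c_sparse[keep] = c[keep]
          layer2 = idct(c_sparse, norm="ortho")
          return layer1 + layer2

      rng = np.random.default_rng(0)
      D = rng.standard_normal((64, 256)); D /= np.linalg.norm(D, axis=0)
      res_block = rng.standard_normal(64)
      approx = two_layer_code(res_block, D)
      print("relative error:", np.linalg.norm(res_block - approx) / np.linalg.norm(res_block))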

  13. Image Super-Resolution Based on Structure-Modulated Sparse Representation.

    PubMed

    Zhang, Yongqin; Liu, Jiaying; Yang, Wenhan; Guo, Zongming

    2015-09-01

    Sparse representation has recently attracted enormous interests in the field of image restoration. The conventional sparsity-based methods enforce sparse coding on small image patches with certain constraints. However, they neglected the characteristics of image structures both within the same scale and across the different scales for the image sparse representation. This drawback limits the modeling capability of sparsity-based super-resolution methods, especially for the recovery of the observed low-resolution images. In this paper, we propose a joint super-resolution framework of structure-modulated sparse representations to improve the performance of sparsity-based image super-resolution. The proposed algorithm formulates the constrained optimization problem for high-resolution image recovery. The multistep magnification scheme with the ridge regression is first used to exploit the multiscale redundancy for the initial estimation of the high-resolution image. Then, the gradient histogram preservation is incorporated as a regularization term in sparse modeling of the image super-resolution problem. Finally, the numerical solution is provided to solve the super-resolution problem of model parameter estimation and sparse representation. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed algorithm. Experimental results demonstrate that our proposed algorithm, which can recover more fine structures and details from an input low-resolution image, outperforms the state-of-the-art methods both subjectively and objectively in most cases.

  14. Deformable segmentation via sparse representation and dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. PMID:22959839

  15. New Asteroid Models Based on Combined Dense and Sparse Photometry

    NASA Astrophysics Data System (ADS)

    Hanuš, Josef; Durech, J.

    2010-10-01

    For thousands of asteroids we investigated tens of thousands of sparse photometric measurements from astrometric projects. These data are available on the AstDyS server (Asteroids -- Dynamic Site, http://hamilton.dm.unipi.it). We picked 7 astrometric surveys and used their calibrated photometry in the lightcurve inversion method to determine asteroids' convex shapes and rotational states. We present nearly 100 new asteroid models derived from combined dense and sparse data sets, where the sparse photometry is taken from the AstDyS server and the dense lightcurves are from the Uppsala Asteroid Photometric Catalogue (UAPC) and from several individual observers.

  16. Optimized sparse-particle aerosol representations for modeling cloud-aerosol interactions

    NASA Astrophysics Data System (ADS)

    Fierce, Laura; McGraw, Robert

    2016-04-01

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the method of moments. Given a set of moment constraints, we show how linear programming can be used to identify collections of sparse particles that approximately maximize distributional entropy. The collections of sparse particles derived from this approach reproduce CCN activity of the exact model aerosol distributions with high accuracy. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy moment-based approach is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a new aerosol simulation scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
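
    A hedged illustration of the linear-programming machinery: basic feasible solutions of an LP with a handful of moment constraints place weight on at most that many particles, i.e. a sparse-particle representation. The entropy-maximizing objective of the paper is replaced here by an arbitrary linear cost, and the reference size distribution is synthetic:

      import numpy as np
      from scipy.optimize import linprog
      from scipy.stats import lognorm

      # Candidate particle diameters (arbitrary units) and the moments to preserve
      # (powers 0..3: number, mean-diameter, surface-area and volume proxies).
      diameters = np.linspace(0.01, 10.0, 400)
      powers = np.arange(4)
      A_eq = np.vstack([diameters**p for p in powers])

      # Reference moments from a discretized lognormal size distribution
      # (placeholder for the exact model aerosol distribution).
      w_ref = lognorm(s=0.7).pdf(diameters)
      w_ref /= w_ref.sum()
      b_eq = A_eq @ w_ref

      # Basic feasible solutions of this LP put weight on at most four particles
      # (one per moment constraint), i.e. a sparse-particle representation.
      cost = np.ones_like(diameters)
      sol = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
      print("nonzero particles:", int(np.count_nonzero(sol.x > 1e-12)))
      print("reproduced moments:", A_eq @ sol.x, "vs", b_eq)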

  17. Blind deconvolution of images using optimal sparse representations.

    PubMed

    Bronstein, Michael M; Bronstein, Alexander M; Zibulevsky, Michael; Zeevi, Yehoshua Y

    2005-06-01

    The relative Newton algorithm, previously proposed for quasi-maximum likelihood blind source separation and blind deconvolution of one-dimensional signals is generalized for blind deconvolution of images. Smooth approximation of the absolute value is used as the nonlinear term for sparse sources. In addition, we propose a method of sparsification, which allows blind deconvolution of arbitrary sources, and show how to find optimal sparsifying transformations by supervised learning.

  18. An algorithm for inverse synthetic aperture imaging lidar based on sparse signal representation

    NASA Astrophysics Data System (ADS)

    Ren, X. Z.; Sun, X. M.

    2014-12-01

    In actual applications of inverse synthetic aperture imaging lidar, the issue of sparse aperture data arises when continuous measurements are impossible or the collected data during some periods are not valid. Hence, the imaging results obtained by traditional methods are limited by high sidelobes. Considering the sparse structure of actual target space in high frequency radar application, a novel imaging method based on sparse signal representation is proposed in this paper. Firstly, the range image is acquired by traditional pulse compression of the optical heterodyne process. Then, the redundant dictionary is constructed through the sparse azimuth sampling positions and the signal form after the range compression. Finally, the imaging results are obtained by solving an ill-posed problem based on sparse regularization. Simulation results confirm the effectiveness of the proposed method.

  19. Two-stage sparse representation-based face recognition with reconstructed images

    NASA Astrophysics Data System (ADS)

    Cheng, Guangtao; Song, Zhanjie; Lei, Yang; Han, Xiuning

    2014-09-01

    In order to address the challenges that both the training and testing images are contaminated by random pixels corruption, occlusion, and disguise, a robust face recognition algorithm based on two-stage sparse representation is proposed. Specifically, noises in the training images are first eliminated by low-rank matrix recovery. Then, by exploiting the first-stage sparse representation computed by solving a new extended ℓ1-minimization problem, noises in the testing image can be successfully removed. After the elimination, feature extraction techniques that are more discriminative but are sensitive to noise can be effectively performed on the reconstructed clean images, and the final classification is accomplished by utilizing the second-stage sparse representation obtained by solving the reduced ℓ1-minimization problem in a low-dimensional feature space. Extensive experiments are conducted on publicly available databases to verify the superiority and robustness of our algorithm.

  20. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and the perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results, and it delivers consistently good performance when tested on different image quality databases.

  1. Dynamic time warping and sparse representation classification for birdsong phrase classification using limited training data.

    PubMed

    Tan, Lee N; Alwan, Abeer; Kossan, George; Cody, Martin L; Taylor, Charles E

    2015-03-01

    Annotation of phrases in birdsongs can be helpful to behavioral and population studies. To reduce the need for manual annotation, an automated birdsong phrase classification algorithm for limited data is developed. Limited data occur because of limited recordings or the existence of rare phrases. In this paper, classification of up to 81 phrase classes of Cassin's Vireo is performed using one to five training samples per class. The algorithm involves dynamic time warping (DTW) and two passes of sparse representation (SR) classification. DTW improves the similarity between training and test phrases from the same class in the presence of individual bird differences and phrase segmentation inconsistencies. The SR classifier works by finding a sparse linear combination of training feature vectors from all classes that best approximates the test feature vector. When the class decisions from DTW and the first pass SR classification are different, SR classification is repeated using training samples from these two conflicting classes. Compared to DTW, support vector machines, and an SR classifier without DTW, the proposed classifier achieves the highest classification accuracies of 94% and 89% on manually segmented and automatically segmented phrases, respectively, from unseen Cassin's Vireo individuals, using five training samples per class. PMID:25786922
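
    A sketch of the dynamic time warping building block only (the two-pass SR classification stage of the paper is not reproduced); the sequences below are random placeholders for frame-level acoustic features:

      import numpy as np

      def dtw_distance(a, b):
          # Classic dynamic-programming DTW between two feature sequences
          # (rows = frames); absorbs timing differences between phrases.
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def classify(test_seq, train_seqs, train_labels):
          # Nearest-template decision; the paper instead warps training phrases
          # toward the test phrase and then runs two passes of SR classification,
          # which this sketch does not reproduce.
          d = [dtw_distance(test_seq, t) for t in train_seqs]
          return train_labels[int(np.argmin(d))]

      rng = np.random.default_rng(0)
      train = [rng.standard_normal((rng.integers(20, 40), 12)) for _ in range(10)]
      labels = np.arange(10)
      print(classify(rng.standard_normal((30, 12)), train, labels))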

  3. Non-negative structural sparse representation for high resolution hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Meng, Guiyu; Li, Guangyu; Dong, Weisheng; Shi, Guangming

    2014-11-01

    High resolution hyperspectral images have important applications in many areas, such as anomaly detection, target recognition and image classification. Due to the limitation of the sensors, it is challenging to obtain high spatial resolution hyperspectral images. Recently, the methods that reconstruct high spatial resolution hyperspectral images from the pair of low resolution hyperspectral images and high resolution RGB image of the same scene have shown promising results. In these methods, sparse non-negative matrix factorization (SNNMF) technique was proposed to exploit the spectral correlations among the RGB and spectral images. However, only the spectral correlations were exploited in these methods, ignoring the abundant spatial structural correlations of the hyperspectral images. In this paper, we propose a novel algorithm combining the structural sparse representation and non-negative matrix factorization technique to exploit the spectral-spatial structure correlations and nonlocal similarity of the hyperspectral images. Compared with SNNMF, our method makes use of both the spectral and spatial redundancies of hyperspectral images, leading to better reconstruction performance. The proposed optimization problem is efficiently solved by using the alternating direction method of multipliers (ADMM) technique. Experiments on a public database show that our approach performs better than other state-of-the-art methods on the visual effect and in the quantitative assessment.

  4. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed by employing a local sparse appearance model and covariance pooling. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows the recognition algorithm to adapt the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, as well as in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, the proposed method also consistently demonstrates a high recognition rate.

  5. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  6. Automated identification of crystallographic ligands using sparse-density representations

    SciTech Connect

    Carolan, C. G.; Lamzin, V. S.

    2014-07-01

    A novel procedure for the automatic identification of ligands in macromolecular crystallographic electron-density maps is introduced: density clusters in such maps can be rapidly attributed to one of 82 different ligands in an automated manner. The method is based on the sparse parameterization of density clusters and the matching of the pseudo-atomic grids thus created to conformationally variant ligands using mathematical descriptors of molecular shape, size and topology. In large-scale tests on experimental data derived from the Protein Data Bank, the procedure could quickly identify the deposited ligand within the top-ranked compounds from a database of candidates. This indicates the suitability of the method for the identification of binding entities in fragment-based drug screening and in model completion in macromolecular structure determination.

  7. Sparse overcomplete Gabor wavelet representation based on local competitions.

    PubMed

    Fischer, Sylvain; Cristóbal, Gabriel; Redondo, Rafael

    2006-02-01

    Gabor representations present a number of interesting properties despite the fact that the basis functions are nonorthogonal and provide an overcomplete representation or a nonexact reconstruction. Overcompleteness involves an expansion of the number of coefficients in the transform domain and induces a redundancy that can be further reduced through computationally costly iterative algorithms like Matching Pursuit. Here, a biologically plausible algorithm based on competitions between neighboring coefficients is employed for adaptively representing any source image by a selected subset of Gabor functions. This scheme involves sharper edge localization and a significant reduction of information redundancy, while, at the same time, the reconstruction quality is preserved. The method is characterized by its biological plausibility and promising results, but it still requires a more in-depth theoretical analysis to complete its validation.

  8. Single-Trial Sparse Representation-Based Approach for VEP Extraction

    PubMed Central

    Yu, Nannan; Hu, Funian; Zou, Dexuan; Ding, Qisheng

    2016-01-01

    Sparse representation is a powerful tool in signal denoising, and visual evoked potentials (VEPs) have been proven to have strong sparsity over an appropriate dictionary. Inspired by this idea, we present in this paper a novel sparse representation-based approach to solving the VEP extraction problem. The extraction process is performed in three stages. First, instead of using the mixed signals containing the electroencephalogram (EEG) and VEPs, we utilise an EEG from a previous trial, which did not contain VEPs, to identify the parameters of the EEG autoregressive (AR) model. Second, instead of the moving average (MA) model, sparse representation is used to model the VEPs in the autoregressive-moving average (ARMA) model. Finally, we calculate the sparse coefficients and derive VEPs by using the AR model. Next, we tested the performance of the proposed algorithm with synthetic and real data, after which we compared the results with that of an AR model with exogenous input modelling and a mixed overcomplete dictionary-based sparse component decomposition method. Utilising the synthetic data, the algorithms are then employed to estimate the latencies of P100 of the VEPs corrupted by added simulated EEG at different signal-to-noise ratio (SNR) values. The validations demonstrate that our method can well preserve the details of the VEPs for latency estimation, even in low SNR environments. PMID:27807541

  9. A joint sparse representation-based method for double-trial evoked potentials estimation.

    PubMed

    Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing

    2013-12-01

    In this paper, we present a novel approach to solving an evoked potentials estimating problem. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimuli of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potentials estimation method, taking full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of evoked potentials and the randomness of a spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered as superpositions of the common component and the unique components; second, making use of their characteristics, the two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method in order to extract the common component of double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method.

  10. Robust infrared small target detection via non-negativity constraint-based sparse representation.

    PubMed

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian

    2016-09-20

    Infrared (IR) small target detection is one of the vital techniques in many military applications, including IR remote sensing, early warning, and IR precise guidance. Over-complete dictionary based sparse representation is an effective image representation method that can capture the geometrical features of IR small targets through the redundancy of the dictionary. In this paper, we concentrate on solving the problem of robust infrared small target detection under various scenes via sparse representation theory. First, a frequency saliency detection based preprocessing is developed to extract suspected regions that may possibly contain the target so that the subsequent computing load is reduced. Second, a target over-complete dictionary is constructed by a variant of the two-dimensional Gaussian model with an extent feature constraint and a background term. Third, a sparse representation model with a non-negativity constraint is proposed for the suspected regions to calculate the corresponding coefficient vectors. Fourth, the detection problem is skillfully converted to an l1-regularized optimization through an accelerated proximal gradient (APG) method. Finally, based on the distinct sparsity difference, an evaluation index called the sparse rate (SR) is presented to extract the real target directly by an adaptive segmentation. A large number of experiments demonstrate both the effectiveness and robustness of this method. PMID:27661588

  11. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  13. Sparse-representation algorithms for blind estimation of acoustic-multipath channels.

    PubMed

    Zeng, Wen-Jun; Jiang, Xue; So, Hing Cheung

    2013-04-01

    Acoustic channel estimation is an important problem in various applications. Unlike many existing channel estimation techniques that need known probe or training signals, this paper develops a blind multipath channel identification algorithm. The proposed approach is based on the single-input multiple-output model and exploits the sparse multichannel structure. Three sparse representation algorithms, namely, matching pursuit, orthogonal matching pursuit, and basis pursuit, are applied to the blind sparse identification problem. Compared with the classical least squares approach to blind multichannel estimation, the proposed scheme does not require that the channel order be exactly determined and it is robust to channel order selection. Moreover, the ill-conditioning induced by the large delay spread is overcome by the sparse constraint. Simulation results for deconvolution of both underwater and room acoustic channels confirm the effectiveness of the proposed approach.

  14. Multitask joint spatial pyramid matching using sparse representation with dynamic coefficients for object recognition

    NASA Astrophysics Data System (ADS)

    Hajigholam, Mohammad-Hossein; Raie, Abolghasem-Asadollah; Faez, Karim

    2016-03-01

    Object recognition is considered a necessary part in many computer vision applications. Recently, sparse coding methods, based on representing a sparse feature from an image, show remarkable results on several object recognition benchmarks, but the precision obtained by these methods is not yet sufficient. Such a problem arises where there are few training images available. As such, using multiple features and multitask dictionaries appears to be crucial to achieving better results. We use multitask joint sparse representation, using dynamic coefficients to connect these sparse features. In other words, we calculate the importance of each feature for each class separately. This causes the features to be used efficiently and appropriately for each class. Thus, we use variance of features and particle swarm optimization algorithms to obtain these dynamic coefficients. Experimental results of our work on Caltech-101 and Caltech-256 databases show more accuracy compared with state-of-the art ones on the same databases.

  15. Low-Rank and Eigenface Based Sparse Representation for Face Recognition

    PubMed Central

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation-based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank approximations. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method. PMID:25334027
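
    A minimal sketch of the dictionary-building and classification idea is given below, with plain SVD eigenfaces standing in for the Robust PCA stage and random arrays as placeholder face data; it illustrates the general recipe, not the authors' implementation.

```python
# Hedged sketch: per-class eigenface dictionaries plus residual-based SRC-style
# classification. SVD stands in for Robust PCA; all face data are placeholders.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def class_eigenfaces(X, k):
    """X: (n_pixels, n_images) for one class; return its k leading eigenfaces."""
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
    return U[:, :k]

def classify(y, class_dicts, n_nonzero=5):
    residuals = []
    for D in class_dicts:                       # one compact sub-dictionary per class
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=min(n_nonzero, D.shape[1]))
        omp.fit(D, y)
        residuals.append(np.linalg.norm(y - D @ omp.coef_))
    return int(np.argmin(residuals))            # class with the smallest residual

rng = np.random.default_rng(0)
faces = [rng.standard_normal((400, 10)) for _ in range(5)]   # 5 subjects, 10 images each
dicts = [class_eigenfaces(X, k=6) for X in faces]
y = faces[2][:, 0]                                           # a face of subject 2
print(classify(y, dicts))                                    # expected: 2
```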

  16. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    PubMed

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms. PMID:26353296

  17. Segmentation of Hyperacute Cerebral Infarcts Based on Sparse Representation of Diffusion Weighted Imaging

    PubMed Central

    Zhang, Xiaodong; Jing, Shasha; Gao, Peiyi; Xue, Jing; Su, Lu; Li, Weiping; Ren, Lijie

    2016-01-01

    Segmentation of infarcts at the hyperacute stage is challenging because they exhibit substantial variability and may be hard even for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items, including three volumes of diffusion weighted imaging and a computed asymmetry map, are employed to extract patch features, which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation in order to stabilize the sparse code. To decrease computation cost and reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. The proposed method handles infarcts with intensity variability and ill-defined edges well, yielding a significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions obtained by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a tool to quantify infarcts from diffusion weighted imaging at the hyperacute stage with the accuracy and speed needed to assist decision making, especially for thrombolytic therapy. PMID:27746825
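
    The elastic-net coding step mentioned above can be sketched as follows; the dictionary, patch feature and regularization weights are illustrative assumptions rather than values from the study.

```python
# Hedged sketch: elastic-net sparse coding of one patch feature over a given
# dictionary, replacing a plain L0/L1 constraint. D, x and the weights are
# placeholders, not the paper's learned dictionary or settings.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
D = rng.standard_normal((50, 200))            # 200 dictionary atoms of dimension 50
x = 0.8 * D[:, 3] + 0.5 * D[:, 10]            # a patch feature built from two atoms

enet = ElasticNet(alpha=0.1, l1_ratio=0.7, fit_intercept=False, max_iter=5000)
enet.fit(D, x)                                # mixed L1/L2 penalty stabilizes the code
code = enet.coef_
print("active atoms:", np.count_nonzero(np.abs(code) > 1e-3))
```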

  18. Detection of dual-band infrared small target based on joint dynamic sparse representation

    NASA Astrophysics Data System (ADS)

    Zhou, Jinwei; Li, Jicheng; Shi, Zhiguang; Lu, Xiaowei; Ren, Dongwei

    2015-10-01

    Infrared small target detection is a crucial yet still difficult issue in aeronautic and astronautic applications. Sparse representation is an important mathematical tool and has been used extensively in image processing in recent years. In this paper, joint sparse representation is applied to dual-band infrared dim target detection. Firstly, according to the characteristics of dim targets in dual-band infrared images, a two-dimensional Gaussian intensity model is used to construct the target dictionary, which is then classified into different sub-classes according to the position of the Gaussian function's center point within the image block. Exploiting the fact that dual-band small target detection can use the same dictionary and that the sparsity lies at the sub-class level rather than the atom level, the detection of targets in dual-band infrared images is converted into a joint dynamic sparse representation problem, with dynamic active sets describing the sparsity constraint on the coefficients. Two modified sparsity concentration index (SCI) criteria are proposed to evaluate whether targets exist in the images. Experiments show that the proposed algorithm achieves better detection performance and that dual-band detection is much more robust to noise than single-band detection. Moreover, the proposed method can be extended to multi-spectral small target detection.

  19. Gyrator transform based double random phase encoding with sparse representation for information authentication

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-07-01

    Optical information security systems have long attracted attention. In this paper, an optical information authentication approach using gyrator transform based double random phase encoding with sparse representation is proposed. Unlike traditional optical encryption schemes, only a sparse version of the ciphertext is preserved, and hence the decrypted result is completely unrecognizable and shows no similarity to the plaintext. However, we demonstrate that the noise-like decipher result can be effectively authenticated by means of an optical correlation approach. Simulations prove that the proposed method is feasible and effective, and can provide additional protection for optical security systems.

  20. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
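
    A minimal sketch of a DCT-domain cosine similarity of the kind the evaluation step relies on is shown below; the image sizes and inputs are placeholders, not the databases used in the paper.

```python
# Hedged sketch: cosine similarity between two images computed on their 2-D DCT
# coefficients, as a stand-in for the local DCT evaluation described above.
import numpy as np
from scipy.fft import dctn

def dct_cosine_similarity(img_a, img_b):
    a = dctn(img_a, norm='ortho').ravel()
    b = dctn(img_b, norm='ortho').ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
test = rng.random((32, 32))
train = test + 0.05 * rng.random((32, 32))    # a slightly perturbed "training" image
print(round(dct_cosine_similarity(test, train), 3))
```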

  1. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate. PMID:27386281

  2. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

    Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.

  3. Enhancement of snow cover change detection with sparse representation and dictionary learning

    NASA Astrophysics Data System (ADS)

    Varade, D.; Dikshit, O.

    2014-11-01

    Sparse representation and decoding are often used for denoising images and compressing images with respect to inherent features. In this paper, we adopt a methodology incorporating sparse representation of a snow cover change map using a K-SVD trained dictionary and sparse decoding to enhance the change map. Pixels falsely characterized as "changes" are eliminated using this approach. The preliminary change map was generated using differenced NDSI or S3 maps for Resourcesat-2 and Landsat 8 OLI imagery, respectively. These maps are extracted into patches for compressed sensing using the Discrete Cosine Transform (DCT) to generate an initial dictionary, which is trained by the K-SVD approach. The trained dictionary is used for sparse coding of the change map using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map incorporates a greater degree of smoothing and represents the features (snow cover changes) with better accuracy. The enhanced change map is segmented using k-means to discriminate between changed and non-changed pixels. The segmented enhanced change map is compared, firstly, with the difference of Support Vector Machine (SVM) classified NDSI maps and, secondly, with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8. The k-hat statistic is computed to determine the accuracy of the proposed approach.
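
    The patch-level dictionary learning and sparse reconstruction step can be sketched as follows, with scikit-learn's MiniBatchDictionaryLearning standing in for K-SVD and random patches as placeholder data.

```python
# Hedged sketch: learn a patch dictionary from change-map patches and
# reconstruct each patch from its OMP sparse code, which smooths isolated
# false "change" pixels. MiniBatchDictionaryLearning stands in for K-SVD.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((500, 64))               # 500 flattened 8x8 change-map patches

dico = MiniBatchDictionaryLearning(n_components=100,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)  # sparse codes of all patches
smoothed = codes @ dico.components_           # reconstructed (smoothed) patches
print(smoothed.shape)
```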

  4. Multimodal image data fusion for Alzheimer's Disease diagnosis by sparse representation.

    PubMed

    Ortiz, Andrés; Fajardo, Daniel; Górriz, Juan M; Ramírez, Javier; Martínez-Murcia, Francisco J

    2014-01-01

    Alzheimer's Disease (AD) diagnosis can be carried out by analysing functional or structural changes in the brain. Functional changes associated with neurological disorders can be detected by positron emission tomography (PET), as it allows studying the activation of certain areas of the brain during specific task development. On the other hand, neurological disorders can also be discovered by analysing structural changes in the brain, which are usually assessed by Magnetic Resonance Imaging (MRI). In fact, computer-aided diagnosis (CAD) tools that have recently been devised for the diagnosis of neurological disorders use functional or structural data. However, functional and structural data can be fused in order to improve the accuracy and to diminish the false positive rate of CAD tools. In this paper we present a method for the diagnosis of AD which fuses multimodal image (PET and MRI) data by combining Sparse Representation Classifiers (SRC). The method presented in this work shows accuracy values of up to 95% and clearly outperforms the classification outcomes obtained using single-modality images.

  5. Super-resolution of hyperspectral images using sparse representation and Gabor prior

    NASA Astrophysics Data System (ADS)

    Patel, Rakesh C.; Joshi, Manjunath V.

    2016-04-01

    Super-resolution (SR) as a postprocessing technique is quite useful for enhancing the spatial resolution of hyperspectral (HS) images without affecting their spectral resolution. We present an approach to increase the spatial resolution of HS images by making use of sparse representation and a Gabor prior. The low-resolution HS observations, consisting of a large number of bands, are represented as a linear combination of a small number of basis images using principal component analysis (PCA), and the significant components are used in our work. We first obtain initial estimates of SR on this reduced dimension by using a compressive sensing-based method. Since SR is an ill-posed problem, the final solution is obtained within a regularization framework. The novelty of our approach lies in: (1) estimation of the optimal point spread function in the form of a decimation matrix, and (2) using a new prior called the "Gabor prior" to super-resolve the significant PCA components. Experiments are conducted on two different HS datasets, namely, a 31-band natural HS image set collected under a controlled laboratory environment and a set of 224-band real HS images collected by the airborne visible/infrared imaging spectrometer remote sensing sensor. Visual inspection and quantitative comparison confirm that our method enhances spatial information without introducing significant spectral distortion. Our conclusions are that: (1) incorporating the sensor characteristics in the form of the estimated decimation matrix benefits SR, and (2) the Gabor prior preserves various frequencies in the super-resolved image.

  6. Sparse representation based latent components analysis for machinery weak fault detection

    NASA Astrophysics Data System (ADS)

    Tang, Haifeng; Chen, Jin; Dong, Guangming

    2014-06-01

    Weak machinery fault detection is a difficult task for two main reasons: (1) at the early stage of fault development, the signature of the fault-related component is incomplete and quite different from that at the apparent failure stage; in most instances, it appears almost identical to the normal operating state. (2) The fault feature is always submerged and distorted by relatively strong background noise and macro-structural vibrations even when the fault component is fully developed, especially when the structures of the fault components and the interference are similar. To solve these problems, we must penetrate the underlying structure of the signal. Sparse representation provides a class of algorithms for finding succinct representations of signals that capture higher-level features in the data. With the purpose of extracting incomplete or severely overwhelmed fault components, a sparse representation based latent component decomposition method is proposed in this paper. As a special case of sparse representation, the shift-invariant sparse coding algorithm provides an effective basis function learning scheme for capturing the underlying structure of machinery fault signals by iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. Among these basis functions, the fault feature can probably be contained and extracted if the optimal latent component is filtered. The proposed scheme is applied to analyze vibration signals of both rolling bearings and gears. An accelerated lifetime test of bearings validates the proposed method's ability to detect early faults. In addition, experiments on faulty bearings and gears with heavy noise and interference show that the approach can effectively distinguish subtle differences between defects and interference. All the experimental data are analyzed by wavelet shrinkage and the basis pursuit de-noising (BPDN) method for comparison.
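
    The L1-regularized least-squares subproblem at the heart of the sparse coding step can be sketched as follows; the "basis functions" and vibration segment below are random placeholders rather than learned bases.

```python
# Hedged sketch: the L1-regularized least-squares step of sparse coding,
# solved here with scikit-learn's Lasso. D and x are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
D = rng.standard_normal((256, 512))           # columns: candidate basis functions
x = D[:, 7] + 0.05 * rng.standard_normal(256) # noisy segment dominated by one atom

lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000)
lasso.fit(D, x)
code = lasso.coef_                            # sparse activation of the basis functions
print("dominant latent component:", int(np.argmax(np.abs(code))))  # expected: 7
```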

  7. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.

  8. Face Recognition Using Sparse Representation-Based Classification on K-Nearest Subspace

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing

    2013-01-01

    The sparse representation-based classification (SRC) has been proven to be a robust face recognition method. However, its computational complexity is very high because it requires solving a complex ℓ1-minimization problem. To improve the calculation efficiency, we propose a novel face recognition method, called sparse representation-based classification on k-nearest subspace (SRC-KNS). Our method first exploits the distance between the test image and the subspace of each individual class to determine the nearest subspaces and then performs SRC on the selected classes. SRC-KNS is able to greatly reduce the scale of the sparse representation problem, and the computation to determine the nearest subspaces is quite simple. Therefore, SRC-KNS has a much lower computational complexity than the original SRC. In order to recognize occluded face images well, we propose the modular SRC-KNS. For this modular method, face images are first partitioned into a number of blocks, and then we propose an indicator to remove the contaminated blocks and choose the nearest subspaces. Finally, SRC is used to classify the occluded test sample in the new feature space. Compared to the approach used in the original SRC work, our modular SRC-KNS can greatly reduce the computational load. A number of face recognition experiments show that our methods achieve at least a five-fold speed-up compared to the original SRC, while achieving comparable or even better recognition rates. PMID:23555671
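
    A minimal sketch of the SRC-KNS idea, under the assumption that each class is modeled by the span of its training images, is given below; all data are random placeholders, not the original experimental setup.

```python
# Hedged sketch of SRC-KNS: keep the k classes with the smallest subspace
# residual, then run SRC-style classification on that reduced dictionary.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def subspace_residual(y, X):
    """Distance from y to the column space of X (least-squares projection)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.linalg.norm(y - X @ coef)

def src_kns(y, class_mats, k=3, n_nonzero=5):
    candidates = np.argsort([subspace_residual(y, X) for X in class_mats])[:k]
    D = np.hstack([class_mats[c] for c in candidates])    # reduced dictionary
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, y)
    best, best_res, start = -1, np.inf, 0
    for c in candidates:                                  # class-wise residuals
        X = class_mats[c]
        res = np.linalg.norm(y - X @ omp.coef_[start:start + X.shape[1]])
        if res < best_res:
            best, best_res = int(c), res
        start += X.shape[1]
    return best

rng = np.random.default_rng(0)
classes = [rng.standard_normal((100, 8)) for _ in range(10)]  # 10 classes, 8 images each
y = classes[4] @ rng.standard_normal(8)                       # test image from class 4
print(src_kns(y, classes))                                    # expected: 4
```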

  9. Robust brain parcellation using sparse representation on resting-state fMRI.

    PubMed

    Zhang, Yu; Caspers, Svenja; Fan, Lingzhong; Fan, Yong; Song, Ming; Liu, Cirong; Mo, Yin; Roski, Christian; Eickhoff, Simon; Amunts, Katrin; Jiang, Tianzi

    2015-11-01

    Resting-state fMRI (rs-fMRI) has been widely used to segregate the brain into individual modules based on the presence of distinct connectivity patterns. Many parcellation methods have been proposed for brain parcellation using rs-fMRI, but their results have been somewhat inconsistent, potentially due to various types of noise. In this study, we provide a robust method for rs-fMRI-based brain parcellation, which constructs a sparse similarity graph based on the sparse representation coefficients of each seed voxel and then uses spectral clustering to identify distinct modules. Both the local time-varying BOLD signals and the whole-brain connectivity patterns may be used as features and yield similar parcellation results. The robustness of our method was tested on both simulated and real rs-fMRI datasets. In particular, on simulated rs-fMRI data, sparse representation achieved good performance across different noise levels, with high parcellation accuracy and high robustness to noise. On real rs-fMRI data, stable parcellations of the medial frontal cortex (MFC) and parietal operculum (OP) were achieved on three different datasets, with high reproducibility within each dataset and high consistency across these results. Moreover, the parcellation of the MFC was little affected by the degree of spatial smoothing, and the consistent parcellation of the OP corresponded well to cytoarchitectonic subdivisions and known somatotopic organizations. Our results demonstrate a promising new approach to robust brain parcellation using resting-state fMRI and sparse representation.
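
    The overall pipeline can be sketched as follows: sparse-code each seed voxel over the remaining voxels, build a similarity graph from the coefficient magnitudes, and cluster it spectrally; the simulated time series below (three latent sources plus noise) are illustrative assumptions.

```python
# Hedged sketch of sparse-representation-based parcellation on toy data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_modules = 60, 120, 3
sources = rng.standard_normal((n_timepoints, n_modules))
labels_true = np.repeat(np.arange(n_modules), n_voxels // n_modules)
X = sources[:, labels_true] + 0.3 * rng.standard_normal((n_timepoints, n_voxels))

C = np.zeros((n_voxels, n_voxels))
for i in range(n_voxels):
    others = np.delete(np.arange(n_voxels), i)
    lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=5000)
    lasso.fit(X[:, others], X[:, i])
    C[i, others] = lasso.coef_                 # sparse representation coefficients

W = np.abs(C) + np.abs(C).T                    # symmetric sparse similarity graph
labels = SpectralClustering(n_clusters=n_modules, affinity='precomputed',
                            random_state=0).fit_predict(W)
print(np.bincount(labels))                     # roughly equal-sized modules
```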

  10. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online

    PubMed Central

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    It is difficult for structural over-complete dictionaries such as the Gabor function, and for discriminative over-complete dictionaries learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, using the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, but not over the opposite over-complete dictionary, so their residuals after reconstruction with the prescribed number of target and background atoms differ markedly. Experiments show that the proposed approach not only improves sparsity more efficiently but also enhances the performance of small target detection more effectively. PMID:24871988

  11. Sparse representation for infrared Dim target detection via a discriminative over-complete dictionary learned online.

    PubMed

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    It is difficult for structural over-complete dictionaries such as the Gabor function, and for discriminative over-complete dictionaries learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, using the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, but not over the opposite over-complete dictionary, so their residuals after reconstruction with the prescribed number of target and background atoms differ markedly. Experiments show that the proposed approach not only improves sparsity more efficiently but also enhances the performance of small target detection more effectively.

  12. A Sparse Hierarchical Map Representation for Mars Science Laboratory Science Operations

    NASA Astrophysics Data System (ADS)

    Nefian, A. V.; Edwards, L. J.; Keely, L.; Lees, D. S.; Fluckinger, L.; Malin, M. C.; Parker, T. J.

    2015-12-01

    We describe a solution for multi-scale Mars terrain modeling and mapping with Digital Elevation Models (DEMs) and co-registered orthogonally projected imagery (ortho-images). High resolution DEMs and ortho-images derived from Mars Science Laboratory (MSL) rover science and navigation cameras are represented in context with lower resolution, wide coverage DEMs and ortho-images derived from Mars Reconnaissance Orbiter (MRO) HiRISE and CTX camera images and Mars Express (MEX) mission HRSC images. Merging MSL rover image derived terrain models with those from orbital images at a uniform high resolution would require super-sampling of the orbital data across a large area to maintain significant context. This solution is not practical, and would result in a mapping product of enormous size. Instead, we choose a sparse hierarchical map representation. Each level in this hierarchical representation is a map described by a set of tiles with fixed number of samples and fixed resolution. The number of samples in a tile is fixed for all levels and each level is associated with a specific resolution. In this work, the resolution ratio between two adjacent levels is set to two. The map at each level is sparse and it contains only the tiles for which data is available at the resolution of the given level. For example, at the highest resolution level only MSL science camera models are available and only a small set of tiles are generated in a sparse map. At the lowest resolution, the map contains the complete set of tiles. The reference level of the representation is chosen to be the HiRISE terrain model and CTX, HRSC and MSL data are projected onto this model before being mapped. While our terrain representation was developed for use in "Antares", a visual planning and sequencing tool for MSL science cameras developed at NASA Ames Research Center, it is general purpose and has a number of potential geo-science visualization applications.

  13. Asteroid models from combined sparse and dense photometric data

    NASA Astrophysics Data System (ADS)

    Durech, J.; Kaasalainen, M.; Warner, B. D.; Fauerbach, M.; Marks, S. A.; Fauvaud, S.; Fauvaud, M.; Vugnon, J.-M.; Pilcher, F.; Bernasconi, L.; Behrend, R.

    2009-01-01

    Aims: Shape and spin state are basic physical characteristics of an asteroid. They can be derived from disc-integrated photometry by the lightcurve inversion method. Increasing the number of asteroids with known basic physical properties is necessary to better understand the nature of individual objects as well as for studies of the whole asteroid population. Methods: We use the lightcurve inversion method to obtain rotation parameters and coarse shape models of selected asteroids. We combine sparse photometric data from the US Naval Observatory with ordinary lightcurves from the Uppsala Asteroid Photometric Catalogue and the Palmer Divide Observatory archive, and show that such combined data sets are in many cases sufficient to derive a model even if neither sparse photometry nor lightcurves can be used alone. Our approach is tested on multiple-apparition lightcurve inversion models and we show that the method produces consistent results. Results: We present new shape models and spin parameters for 24 asteroids. The shape models are only coarse but describe the global shape characteristics well. The typical error in the pole direction is ~10-20°. For a further 18 asteroids, inversion led to a unique determination of the rotation period but the pole direction was not well constrained. In these cases we give only an estimate of the ecliptic latitude of the pole.

  14. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation.

    PubMed

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-01-01

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is firstly introduced in PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278
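
    A minimal sketch of the FFT-based PSD feature extraction is shown below, using a Welch estimate as a stand-in for the plain FFT PSD; the sampling rate, epoch and band edges are assumptions.

```python
# Hedged sketch: an FFT-based (Welch) power spectral density estimate and an
# alpha-band power feature from one EEG channel epoch. All values are
# illustrative assumptions, not the paper's acquisition settings.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sampling rate in Hz
rng = np.random.default_rng(0)
epoch = rng.standard_normal(int(4 * fs))     # one 4-second single-channel EEG epoch

f, psd = welch(epoch, fs=fs, nperseg=256)
band = (f >= 8) & (f <= 13)                  # alpha band, often linked to vigilance
alpha_power = np.sum(psd[band]) * (f[1] - f[0])
print(round(float(alpha_power), 4))
```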

  15. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation

    PubMed Central

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-01-01

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver’s EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is firstly introduced in PSD to estimate the driver’s vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278

  16. Sparse representation using multiple dictionaries for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Lin, Yih-Lon; Sung, Chung-Ming; Chiang, Yu-Min

    2015-03-01

    New algorithms are proposed in this paper for single image super-resolution using multiple dictionaries based on sparse representation. In the proposed algorithms, a classifier is constructed based on the edge properties of image patches, as captured by the two lowest discrete cosine transform (DCT) coefficients. The classifier partitions all training patches into three classes. Training patches from each of the three classes are then used to train the corresponding dictionary via the K-SVD (K-singular value decomposition) algorithm. Experimental results show that the high-resolution image quality obtained with the proposed algorithms is better than that of traditional bi-cubic interpolation and Yang's method.
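
    The DCT-based patch classification can be sketched as follows; the threshold and routing rule are illustrative assumptions rather than the values used in the paper.

```python
# Hedged sketch of the patch classifier described above: the two lowest
# non-DC 2-D DCT coefficients route a patch to one of three dictionaries.
import numpy as np
from scipy.fft import dctn

def classify_patch(patch, thresh=0.5):
    c = dctn(patch, norm='ortho')
    h, v = c[0, 1], c[1, 0]                   # lowest horizontal / vertical terms
    if abs(h) < thresh and abs(v) < thresh:
        return 0                              # smooth patch
    return 1 if abs(h) >= abs(v) else 2       # dominant horizontal vs. vertical edge

patch = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))   # horizontal intensity ramp
print(classify_patch(patch))                  # expected: class 1
```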

  17. SPARSE REPRESENTATIONS WITH DATA FIDELITY TERM VIA AN ITERATIVELY REWEIGHTED LEAST SQUARES ALGORITHM

    SciTech Connect

    WOHLBERG, BRENDT; RODRIGUEZ, PAUL

    2007-01-08

    Basis Pursuit and Basis Pursuit Denoising, well established techniques for computing sparse representations, minimize an ℓ2 data fidelity term subject to an ℓ1 sparsity constraint or regularization term on the solution by mapping the problem to a linear or quadratic program. Basis Pursuit Denoising with an ℓ1 data fidelity term has recently been proposed, also implemented via a mapping to a linear program. An alternative approach via an Iteratively Reweighted Least Squares algorithm is introduced, providing greater flexibility in the choice of the data fidelity term norm, and computational advantages in certain circumstances.
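
    A toy version of the reweighting idea is sketched below for the combined ℓ1 data fidelity and ℓ1 regularization problem; it illustrates the mechanism only and is not the authors' exact scheme, and the problem data are random placeholders.

```python
# Hedged sketch: a toy IRLS loop for min ||Ax - b||_1 + lam * ||x||_1.
import numpy as np

def irls_l1(A, b, lam=0.1, n_iter=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares initialization
    for _ in range(n_iter):
        r = A @ x - b
        Wr = 1.0 / np.maximum(np.abs(r), eps)         # reweights the l1 data-fidelity term
        Wx = 1.0 / np.maximum(np.abs(x), eps)         # reweights the l1 penalty term
        H = A.T @ (Wr[:, None] * A) + lam * np.diag(Wx)
        x = np.linalg.solve(H, A.T @ (Wr * b))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[2, 11, 30]] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.round(irls_l1(A, b)[[2, 11, 30]], 2))        # close to [1., -2., 0.5]
```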

  18. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    In many materials, the incident light is scattered by inhomogeneities of the refractive index, which greatly reduces the imaging depth and degrades the imaging quality. Many methods have been presented in recent years for solving this problem and realizing imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. The imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern for reconstruction. One of the key premises of this method is that the object is sparse or can be sparsely represented; however, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. In order to verify the performance of this method, a complete optical system is simulated. Various projection matrices are introduced to make the object sparse, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis and the discrete wavelet transform (DWT) basis, and the imaging performance of each is compared comprehensively. Simulation results show that, for most targets, applying the discrete wavelet transform basis yields an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
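
    The compressed-sensing view can be sketched as follows, with a random Gaussian matrix standing in for the medium's transmission matrix and a DCT sparsity basis; sizes and sparsity level are illustrative assumptions.

```python
# Hedged sketch: CS recovery of an object that is sparse in a DCT basis from
# few "speckle" measurements taken through a Gaussian stand-in for the TM.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 5                          # object length, measurements, sparsity
Psi = idct(np.eye(n), norm='ortho', axis=0)   # columns: DCT synthesis basis vectors

s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = Psi @ s                                   # object, sparse in the DCT domain

A = rng.standard_normal((m, n))               # stand-in for the transmission matrix
y = A @ x                                     # simulated speckle measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(A @ Psi, y)                           # sense through the combined matrix
x_rec = Psi @ omp.coef_
print("relative error:", float(np.linalg.norm(x - x_rec) / np.linalg.norm(x)))
```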

  19. Estimating patient-specific and anatomically correct reference model for craniomaxillofacial deformity via sparse representation

    PubMed Central

    Wang, Li; Ren, Yi; Gao, Yaozong; Tang, Zhen; Chen, Ken-Chung; Li, Jianfu; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Xia, James J.; Shen, Dinggang

    2015-01-01

    Purpose: A significant number of patients suffer from craniomaxillofacial (CMF) deformity and require CMF surgery in the United States. The success of CMF surgery depends on not only the surgical techniques but also an accurate surgical planning. However, surgical planning for CMF surgery is challenging due to the absence of a patient-specific reference model. Currently, the outcome of the surgery is often subjective and highly dependent on surgeon’s experience. In this paper, the authors present an automatic method to estimate an anatomically correct reference shape of jaws for orthognathic surgery, a common type of CMF surgery. Methods: To estimate a patient-specific jaw reference model, the authors use a data-driven method based on sparse shape composition. Given a dictionary of normal subjects, the authors first use the sparse representation to represent the midface of a patient by the midfaces of the normal subjects in the dictionary. Then, the derived sparse coefficients are used to reconstruct a patient-specific reference jaw shape. Results: The authors have validated the proposed method on both synthetic and real patient data. Experimental results show that the authors’ method can effectively reconstruct the normal shape of jaw for patients. Conclusions: The authors have presented a novel method to automatically estimate a patient-specific reference model for the patient suffering from CMF deformity. PMID:26429255

  20. Sparse Representation Based Biomarker Selection for Schizophrenia with Integrated Analysis of fMRI and SNPs

    PubMed Central

    Cao, Hongbao; Duan, Junbo; Lin, Dongdong; Shugart, Yin Yao; Calhoun, Vince; Wang, Yu-Ping

    2014-01-01

    Integrative analysis of multiple data types can take advantage of their complementary information and therefore may provide higher power to identify potential biomarkers that would be missed using individual data analysis. Due to the different natures of diverse data modalities, data integration is challenging. Here we address the data integration problem by developing a generalized sparse model (GSM) using weighting factors to integrate multi-modality data for biomarker selection. As an example, we applied the GSM model to a joint analysis of two types of schizophrenia data sets: 759,075 SNPs and 153,594 functional magnetic resonance imaging (fMRI) voxels in 208 subjects (92 cases/116 controls). To solve this small-sample-large-variable problem, we developed a novel sparse representation based variable selection (SRVS) algorithm, with the primary aim of identifying biomarkers associated with schizophrenia. To validate the effectiveness of the selected variables, we performed multivariate classification followed by ten-fold cross validation. We compared our proposed SRVS algorithm with an earlier sparse model based variable selection algorithm for integrated analysis. In addition, we compared with traditional statistical methods for univariate data analysis (the Chi-squared test for SNP data and ANOVA for fMRI data). Results showed that our proposed SRVS method can identify novel biomarkers that show stronger capability in distinguishing schizophrenia patients from healthy controls. Moreover, better classification ratios were achieved using biomarkers from both types of data, suggesting the importance of integrative analysis. PMID:24530838

  1. Joint detection and segmentation of vertebral bodies in CT images by sparse representation error minimization

    NASA Astrophysics Data System (ADS)

    Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2016-03-01

    Automated detection and segmentation of vertebral bodies from spinal computed tomography (CT) images is usually a prerequisite step for numerous spine-related medical applications, such as diagnosis, surgical planning and follow-up assessment of spinal pathologies. However, automated detection and segmentation are challenging tasks due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other. In this paper, we describe a sparse representation error minimization (SEM) framework for joint detection and segmentation of vertebral bodies in CT images. By minimizing the sparse representation error of sampled intensity values, we are able to recover the oriented bounding box (OBB) and segmentation binary mask for each vertebral body in the CT image. The performance of the proposed SEM framework was evaluated on five CT images of the thoracolumbar spine. The resulting Euclidean distance of 1.75 ± 1.02 mm, computed between the center points of recovered and corresponding reference OBBs, and Dice coefficient of 92.3 ± 2.7%, computed between the resulting and corresponding reference segmentation binary masks, indicate that the proposed framework can successfully detect and segment vertebral bodies in CT images of the thoracolumbar spine.

  2. Automatic classification of intracardiac tumor and thrombi in echocardiography based on sparse representation.

    PubMed

    Guo, Yi; Wang, Yuanyuan; Kong, Dehong; Shu, Xianhong

    2015-03-01

    Identification of intracardiac masses in echocardiograms is an important task in cardiac disease diagnosis. To improve diagnosis accuracy, a novel fully automatic classification method based on sparse representation is proposed to distinguish intracardiac tumors and thrombi in echocardiography. First, a region of interest is cropped to define the mass area. Then, a novel global denoising method is employed to remove the speckle and preserve the anatomical structure. Subsequently, the contour of the mass and its connected atrial wall are described by K-singular value decomposition and a modified active contour model. Finally, the motion, boundary and texture features are processed by a sparse representation classifier to distinguish the two masses. Ninety-seven clinical echocardiogram sequences were collected to assess the effectiveness. Compared with other state-of-the-art classifiers, our proposed method demonstrates the best performance, achieving an accuracy of 96.91%, a sensitivity of 100%, and a specificity of 93.02%. This indicates that our method is capable of classifying intracardiac tumors and thrombi in echocardiography and could potentially assist cardiologists in clinical practice.

  3. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification.

    PubMed

    Zhang, Xinzheng; Yang, Qiuyue; Liu, Miaomiao; Jia, Yunjian; Liu, Shujun; Li, Guojun

    2016-01-01

    Classification of target microwave images is an important application in many areas such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach is able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance. PMID:27598172

  4. Sparse component analysis using time-frequency representations for operational modal analysis.

    PubMed

    Qin, Shaoqian; Guo, Jie; Zhu, Changan

    2015-01-01

    Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of sources' time-frequency (TF) representations and proposes a new TF-domain SCA under the OMA framework. First, the measurements from the sensors are transformed to the TF domain to obtain a sparse representation. Then, single-source-points (SSPs) are detected to better reveal the hyperlines which correspond to the columns of the mixing matrix. The K-hyperline clustering algorithm is used to identify the direction vectors of the hyperlines, and the mixing matrix is then calculated. Finally, the basis pursuit de-noising technique is used to recover the modal responses, from which the modal parameters are computed. The proposed method is valid even if the number of active modes exceeds the number of sensors. Numerical simulation and experimental verification demonstrate the good performance of the proposed method. PMID:25789492

  5. Human gait recognition using patch distribution feature and locality-constrained group sparse representation.

    PubMed

    Xu, Dong; Huang, Yi; Zeng, Zinan; Xu, Xinxing

    2012-01-01

    In this paper, we propose a new patch distribution feature (PDF) (i.e., referred to as Gabor-PDF) for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted from different scales and different orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM) (i.e., referred to as the universal background model) with the local augmented Gabor features from all the gallery GEIs; then, each gallery or probe GEI is further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted ℓ1,2 mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method that is a special case of LGSR, the group sparsity and local smooth sparsity constraints are both enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature Gabor-PDF achieves the best average Rank-1 and Rank-5 recognition rates on this database among all gait recognition algorithms proposed to date.

  6. A high-resolution technique for ultrasound harmonic imaging using sparse representations in Gabor frames.

    PubMed

    Michailovich, Oleg; Adam, Dan

    2002-12-01

    Over the last few decades there have been dramatic improvements in ultrasound imaging quality through the utilization of harmonic frequencies induced by both tissue and echo-contrast agents. The advantages of harmonic imaging have led to the rapid adoption of this modality in diverse clinical uses, among which myocardial perfusion determination seems to be the most important application. In order to effectively employ the information contained in the higher harmonics of the received signals, this information must be properly extracted. A commonly used method of harmonic separation is linear filtering. One of its main shortcomings is the inverse relationship between the detectability of the contrast agent and the axial resolution. In this paper, a novel, nonlinear technique is proposed for separating the harmonic components contained in the received radio-frequency images. It is demonstrated that the harmonic separation can be performed efficiently by means of convex optimization, without affecting the image resolution. The procedure is based on the concepts of sparse signal representation in overcomplete signal bases. A special type of sparse signal representation that is especially suitable for the problem at hand is explicitly described. The ability of the novel technique to acquire "un-masked" second (or higher) harmonic images is demonstrated in a series of computer and phantom experiments.

  7. Blind image deblurring based on trained dictionary and curvelet using sparse representation

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao

    2015-04-01

    Motion blur, which can result from many factors, is one of the most significant and common artifacts causing poor image quality in digital photography. During imaging, if objects move quickly in the scene or the camera moves during the exposure interval (e.g., camera shake or atmospheric turbulence), the image of the scene blurs along the direction of relative motion between the camera and the scene. Recently, the sparse representation model, an effective way to describe natural images, has been widely used in signal and image processing. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary learned from training image samples via the K-SVD algorithm is designed to represent the latent image. The motion-blur kernel can be treated as a piecewise smooth function in the image domain whose support is approximately a thin smooth curve, so we employ curvelets to represent the blur kernel. Both the overcomplete dictionary and the curvelet system yield highly sparse representations, which improves robustness to noise and better satisfies the observer's visual demands. With these two priors, we construct a restoration model for blurred images and solve the optimization problem with an alternating minimization technique. Experimental results show that the method preserves the texture of the original images and effectively suppresses ringing artifacts.

  8. A Fast Algorithm for Learning Overcomplete Dictionary for Sparse Representation Based on Proximal Operators.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie

    2015-09-01

    We present a fast, efficient algorithm for learning an overcomplete dictionary for sparse representation of signals. The whole problem is considered as a minimization of the approximation error function with a coherence penalty for the dictionary atoms and with the sparsity regularization of the coefficient matrix. Because the problem is nonconvex and nonsmooth, this minimization problem cannot be solved efficiently by an ordinary optimization method. We propose a decomposition scheme and an alternating optimization that can turn the problem into a set of minimizations of piecewise quadratic and univariate subproblems, each of which is a single variable vector problem, of either one dictionary atom or one coefficient vector. Although the subproblems are still nonsmooth, remarkably they become much simpler so that we can find a closed-form solution by introducing a proximal operator. This leads to an efficient algorithm for sparse representation. To our knowledge, applying the proximal operator to the problem with an incoherence term and obtaining the optimal dictionary atoms in closed form with a proximal operator technique have not previously been studied. The main advantages of the proposed algorithm are that, as suggested by our analysis and simulation study, it has lower computational complexity and a higher convergence rate than state-of-the-art algorithms. In addition, for real applications, it shows good performance and significant reductions in computational time.
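
    The proximal-operator ingredient can be sketched with the ℓ1 prox (elementwise soft-thresholding) inside an ISTA-style coefficient update; this is an illustrative stand-in, not the authors' closed-form atom updates, and the dictionary, signal and penalty weight are placeholders.

```python
# Hedged sketch: soft-thresholding as the prox of the l1 norm, used in one
# proximal-gradient (ISTA-style) sparse coding update.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(D, x, z, lam):
    """One proximal-gradient update for min 0.5*||x - D z||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the gradient
    grad = D.T @ (D @ z - x)
    return soft_threshold(z - step * grad, lam * step)

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
x = 2.0 * D[:, 5]                              # signal generated by a single atom
z = np.zeros(60)
for _ in range(200):
    z = ista_step(D, x, z, lam=0.1)
print("largest coefficient index:", int(np.argmax(np.abs(z))))  # expected: 5
```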

  9. A Novel Method of Automatic Plant Species Identification Using Sparse Representation of Leaf Tooth Features

    PubMed Central

    Jin, Taisong; Hou, Xueliang; Li, Pifan; Zhou, Feifei

    2015-01-01

    Automatic species identification has many advantages over traditional species identification. Currently, most automatic plant identification methods focus on features of leaf shape, venation and texture, which are promising for the identification of some plant species. However, leaf teeth, a feature commonly used in traditional species identification, are ignored. In this paper, a novel automatic species identification method using sparse representation of leaf tooth features is proposed. In this method, image corners are detected first, and abnormal image corners are removed by the PauTa criterion. Next, the top and bottom leaf tooth edges are discriminated so that they correspond effectively to the extracted image corners; then, four leaf tooth features (Leaf-num, Leaf-rate, Leaf-sharpness and Leaf-obliqueness) are extracted and concatenated into a feature vector. Finally, a sparse representation-based classifier is used to identify a plant species sample. Tests on a real-world leaf image dataset show that our proposed method is feasible for species identification. PMID:26440281

  10. Sparse Representation of Deformable 3D Organs with Spherical Harmonics and Structured Dictionary

    PubMed Central

    Wang, Dan; Tewfik, Ahmed H.; Zhang, Yingchun; Shen, Yunhe

    2011-01-01

    This paper proposes a novel algorithm to sparsely represent a deformable surface (SRDS) with low dimensionality based on spherical harmonic decomposition (SHD) and orthogonal subspace pursuit (OSP). The key idea of the SRDS method is to identify the subspaces from a training data set in the transformed spherical harmonic domain and then cluster each deformation into the best-fit subspace for fast and accurate representation. The algorithm is also generalized to organs with both interior and exterior surfaces. To test its feasibility, we first use computer models to demonstrate that the proposed approach matches the accuracy of complex mathematical modeling techniques, and then conduct both ex vivo and in vivo experiments using 3D magnetic resonance imaging (MRI) scans for verification in practical settings. All results demonstrate that the proposed algorithm features sparse representation of deformable surfaces with low dimensionality and high accuracy. Specifically, the precision, evaluated as the maximum error distance between the reconstructed surface and the MRI ground truth, is better than 3 mm in real MRI experiments. PMID:21941524

  11. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    PubMed Central

    Zhang, Xinzheng; Yang, Qiuyue; Liu, Miaomiao; Jia, Yunjian; Liu, Shujun; Li, Guojun

    2016-01-01

    Classification of target microwave images is an important application in many areas, such as security and surveillance. With respect to the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach is able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance. PMID:27598172

  12. A reconstruction algorithm based on sparse representation for Raman signal processing under high background noise

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wang, X.; Wang, X.; Xu, Y.; Que, J.; He, H.; Wang, X.; Tang, M.

    2016-02-01

    Background noise is one of the main interference sources of the Raman spectroscopy measurement and imaging technique. In this paper, a sparse representation based algorithm is presented to process Raman signals under high background noise. In contrast with existing de-noising methods, the proposed method reconstructs the pure Raman signals by estimating the Raman peak information. The advantages of the proposed algorithm are its high anti-noise capacity and the low attenuation of the pure Raman signal, both of which follow from its reconstruction principle. Meanwhile, the Batch-OMP algorithm is applied to accelerate the training of the sparse representation. The method is therefore well suited to Raman measurement or imaging instruments observing fast dynamic processes, where the scanning time has to be shortened and the signal-to-noise ratio (SNR) of the raw signal is reduced. In the simulation and experiment, the de-noising results obtained by the proposed algorithm were better than those of the traditional Savitzky-Golay (S-G) filter and the fixed-threshold wavelet de-noising algorithm.
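
    A compact way to picture the reconstruction idea is to sparse-code the noisy spectrum over a dictionary of peak-shaped atoms and keep only the fitted peaks. The sketch below uses unit-norm Gaussian atoms and scikit-learn's orthogonal matching pursuit in place of Batch-OMP; the Gaussian peak model, the width grid and the peak budget are all assumptions, since the record does not specify the dictionary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def gaussian_peak_dictionary(n_points, widths, center_step=2):
    """Unit-norm Gaussian peak atoms on a grid of centers and widths."""
    x = np.arange(n_points)
    atoms = []
    for w in widths:
        for c in range(0, n_points, center_step):
            a = np.exp(-0.5 * ((x - c) / w) ** 2)
            atoms.append(a / np.linalg.norm(a))
    return np.column_stack(atoms)

def reconstruct_raman(noisy_spectrum, widths=(2.0, 4.0, 8.0), n_peaks=20):
    """Greedy sparse fit of peak atoms; the reconstruction keeps only the fitted peaks."""
    D = gaussian_peak_dictionary(len(noisy_spectrum), widths)
    coef = orthogonal_mp(D, noisy_spectrum, n_nonzero_coefs=n_peaks)
    return D @ coef
```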

  13. Sparse Component Analysis Using Time-Frequency Representations for Operational Modal Analysis

    PubMed Central

    Qin, Shaoqian; Guo, Jie; Zhu, Changan

    2015-01-01

    Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of the sources' time-frequency (TF) representation and proposes a new TF-domain SCA under the OMA framework. First, the measurements from the sensors are transformed to the TF domain to obtain a sparse representation. Then, single-source-points (SSPs) are detected to better reveal the hyperlines which correspond to the columns of the mixing matrix. The K-hyperline clustering algorithm is used to identify the direction vectors of the hyperlines, and the mixing matrix is then calculated. Finally, the basis pursuit de-noising technique is used to recover the modal responses, from which the modal parameters are computed. The proposed method is valid even if the number of active modes exceeds the number of sensors. Numerical simulation and experimental verification demonstrate the good performance of the proposed method. PMID:25789492
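
    The first half of that pipeline (TF transform, selection of points dominated by one source, clustering of their directions) can be illustrated with standard tools, as sketched below. This is a simplified stand-in: an energy quantile replaces the paper's single-source-point test and KMeans replaces K-hyperline clustering, so the result should only be read as a rough approximation of the mixing-matrix estimate.

```python
import numpy as np
from scipy.signal import stft
from sklearn.cluster import KMeans

def estimate_mixing_matrix(X, n_modes, fs=1.0, nperseg=256, energy_quantile=0.9):
    """Rough mixing-matrix estimate from multichannel measurements X (channels x samples).

    Pipeline: STFT -> keep high-energy TF points (crude surrogate for single-source-point
    detection) -> normalize the per-point sensor vectors -> KMeans in place of K-hyperline
    clustering. Columns of the returned matrix approximate the mixing-matrix columns.
    """
    _, _, Z = stft(X, fs=fs, nperseg=nperseg)              # Z: (channels, freqs, frames)
    V = np.abs(Z).reshape(Z.shape[0], -1)                  # TF magnitude vectors per sensor
    energy = np.linalg.norm(V, axis=0)
    keep = energy > np.quantile(energy, energy_quantile)   # retain the strongest TF points
    U = V[:, keep] / np.linalg.norm(V[:, keep], axis=0, keepdims=True)
    centers = KMeans(n_clusters=n_modes, n_init=10).fit(U.T).cluster_centers_
    A_hat = centers.T
    return A_hat / np.linalg.norm(A_hat, axis=0, keepdims=True)
```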

  14. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    PubMed Central

    Wang, Li; Chen, Ken Chung; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  15. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    SciTech Connect

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  16. Fusion of sparse representation and dictionary matching for identification of humans in uncontrolled environment.

    PubMed

    Fernandes, Steven Lawrence; Bala, G Josemin

    2016-09-01

    gait recognition are developed. Then, a novel biomechanics based gait recognition is developed using Sparse Representation to generate what we term "score 1." Further, another novel technique for composite sketch matching is developed using Dictionary Matching to generate what we term "score 2." Finally, score level fusion using Dempster-Shafer and Proportional Conflict Distribution Rule Number 5 is performed. The proposed fusion approach is validated using a database containing biomechanics based gait sequences and biometric based composite sketches. From our analysis, we find that a fusion of gait recognition and composite sketch matching provides excellent results for real-time human identification. PMID:27498411

  17. Fusion of sparse representation and dictionary matching for identification of humans in uncontrolled environment.

    PubMed

    Fernandes, Steven Lawrence; Bala, G Josemin

    2016-09-01

    gait recognition are developed. Then, a novel biomechanics based gait recognition is developed using Sparse Representation to generate what we term "score 1." Further, another novel technique for composite sketch matching is developed using Dictionary Matching to generate what we term "score 2." Finally, score level fusion using Dempster-Shafer and Proportional Conflict Distribution Rule Number 5 is performed. The proposed fusion approach is validated using a database containing biomechanics based gait sequences and biometric based composite sketches. From our analysis, we find that a fusion of gait recognition and composite sketch matching provides excellent results for real-time human identification.

  18. Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.

    PubMed

    Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K

    2011-09-01

    Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach. PMID:21339529

  19. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  20. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. Deployment aims to cover the whole scene, which may contain obstacles that occlude the line of sight, with the expected observation quality while using the fewest cameras. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
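
    The ℓ0-to-ℓ1 relaxation described above can be made concrete with a tiny linear program: given a precomputed 0/1 coverage matrix (the anisotropic sensing model and occlusion handling are assumed to be baked into it), the relaxed selection variables live in [0, 1] and their sum is the ℓ1 objective. The coverage requirement, the rounding threshold and the use of scipy's linprog are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def select_cameras(coverage, keep_threshold=0.5):
    """l1 relaxation of the l0 camera-selection problem.

    coverage: (n_points, n_candidates) 0/1 matrix; coverage[i, j] = 1 when candidate
    camera j observes scene point i with acceptable quality (occlusions and the
    sensing model are assumed to be encoded when building this matrix).
    Every point must be covered at least once; the relaxed objective sum(x)
    equals ||x||_1 because x >= 0.
    """
    n_points, n_cams = coverage.shape
    res = linprog(c=np.ones(n_cams),
                  A_ub=-coverage, b_ub=-np.ones(n_points),  # coverage @ x >= 1
                  bounds=[(0.0, 1.0)] * n_cams, method="highs")
    if not res.success:
        raise ValueError("no feasible deployment for this coverage matrix")
    return np.flatnonzero(res.x > keep_threshold)            # indices of selected cameras
```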

  1. Sparse representation-based classification scheme for motor imagery-based brain-computer interface systems

    NASA Astrophysics Data System (ADS)

    Shin, Younghak; Lee, Seungchan; Lee, Junho; Lee, Heung-No

    2012-10-01

    Motor imagery (MI)-based brain-computer interface systems (BCIs) normally use a powerful spatial filtering and classification method to maximize their performance. The common spatial pattern (CSP) algorithm is a widely used spatial filtering method for MI-based BCIs. In this work, we propose a new sparse representation-based classification (SRC) scheme for MI-based BCI applications. Sensorimotor rhythms are extracted from electroencephalograms and used for classification. The proposed SRC method utilizes the frequency band power and CSP algorithm to extract features for classification. We analyzed the performance of the new method using experimental datasets. The results showed that the SRC scheme provides highly accurate classification results, which were better than those obtained using the well-known linear discriminant analysis classification method. The enhancement of the proposed method in terms of the classification accuracy was verified using cross-validation and a statistical paired t-test (p < 0.001).
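
    The record above combines band-power/CSP features with an SRC classifier. As a reminder of what the CSP stage computes, here is a minimal numpy/scipy sketch of the spatial-filter estimation and the log band-power features; the SRC step would then apply the residual rule sketched earlier in this listing. The number of filter pairs, the prior band-pass filtering (e.g. 8-30 Hz for sensorimotor rhythms) and the normalization are common choices, not details taken from the record.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial pattern filters from two classes of band-pass filtered EEG trials.

    trials_*: sequences of trials, each of shape (channels, samples).
    Returns 2*n_pairs spatial filters as rows.
    """
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    vals, vecs = eigh(cov_a, cov_a + cov_b)      # generalized symmetric eigenproblem
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T

def log_bandpower_features(trial, W):
    """Log-variance of the CSP-filtered trial: the feature vector fed to the classifier."""
    Z = W @ trial
    var = np.var(Z, axis=1)
    return np.log(var / var.sum())
```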

  2. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large because of the skylight background and the detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the corresponding coefficients of each block are computed over the overcomplete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be well extracted and that the deviation, RMS and PV of the centroid are all smaller than with the method of subtracting a threshold.

  3. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  4. Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.

    PubMed

    Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K

    2011-09-01

    Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach.

  5. On complex-valued deautoconvolution of compactly supported functions with sparse Fourier representation

    NASA Astrophysics Data System (ADS)

    Bürger, Steven; Flemming, Jens; Hofmann, Bernd

    2016-10-01

    Convergence rate results for the Tikhonov regularization of nonlinear ill-posed operator equations are missing, even in a Hilbert space setting, if a range-type source condition fails and if, moreover, nonlinearity conditions of tangential cone type cannot be shown. This situation applies to a deautoconvolution problem in complex-valued L2-spaces over finite real intervals, occurring in a slightly generalized version in laser optics. For this problem we show that the lack of applicable convergence rate results can be overcome under the assumption that the solution of the operator equation has a sparse Fourier representation. Precisely, we derive a variational source condition for that case, which immediately implies a convergence rate. The surprising observation is that a sparsity assumption imposed on the solution leads to success, although the norm square used is not known to be a sparsity-promoting penalty in the Tikhonov functional.

  6. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    NASA Astrophysics Data System (ADS)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images acquired at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second method tries to learn the correspondence relationship through PCA in the heterogeneous patch space. Both methods learn the basic atoms in iris textures across different image sensors and build connections between them. After such connections are built, at the test stage it is possible to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results are satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the EER is decreased by 39.4% in relative terms by the proposed method.

  7. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which makes it faster and more efficient than traditional seizure detection methods.

  8. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    PubMed

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.

  9. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    PubMed

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks. PMID:26496370

  10. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection

    PubMed Central

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks. PMID:26496370
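
    The variable-selection idea in these traffic records can be illustrated with an ordinary l1-penalized regression: the lag-1 readings of all sensors are candidate predictors for one target sensor, and the nonzero coefficients are the learned spatial-temporal context. The first finding above motivates the single lag; the penalty weight `alpha` below is a hypothetical tuning parameter, and the sketch is not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_spatiotemporal_context(flows, target_idx, lag=1, alpha=0.01):
    """Sparse variable selection for one target sensor.

    flows: (n_timesteps, n_sensors) traffic-flow matrix. The lag-1 readings of all
    sensors are the candidate predictors; the l1 penalty keeps only the sensors that
    matter, and their indices form the learned spatial-temporal context.
    """
    X = flows[:-lag, :]                    # all sensors at time t - lag
    y = flows[lag:, target_idx]            # target sensor at time t
    model = Lasso(alpha=alpha).fit(X, y)
    context = np.flatnonzero(model.coef_)  # sensors with nonzero influence
    return model, context
```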

  11. Improving low-dose cardiac CT images using 3D sparse representation based processing

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in diagnoses of coronary artery diseases due to the continuously improving temporal and spatial resolution. When helical CT with a lower pitch scanning mode is used, the effective radiation dose can be significant when compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams, including high pitch scans using dual source CT scanners and the step-and-shoot scanning mode for both single source and dual source CT scanners. Additionally, software methods have also been proposed to reduce noise in the reconstructed CT images, thus offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance of a certain imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, in this paper, a 3D dictionary representation based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in the sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.

  12. Sparse representation of MER signals for localizing the Subthalamic Nucleus in Parkinson's disease surgery.

    PubMed

    Vargas Cardona, Hernán Darío; Álvarez, Mauricio A; Orozco, Álvaro A

    2014-01-01

    Deep brain stimulation (DBS) of the Subthalamic Nucleus (STN) is the best method for treating advanced Parkinson's disease (PD), leading to striking improvements in motor function and quality of life of PD patients. During DBS, online analysis of microelectrode recording (MER) signals is a powerful tool to locate the STN. Therapeutic outcomes depend on a precise positioning of the stimulator device in the target area. In this paper, we show how a sparse representation of MER signals allows discriminant features to be extracted, improving the accuracy of STN identification. We apply three techniques for over-complete representation of signals: Method of Frames (MOF), Best Orthogonal Basis (BOB) and Basis Pursuit (BP). All the techniques are compared to classical methods for signal processing such as the Wavelet Transform (WT), and a more sophisticated method known as adaptive Wavelet with lifting schemes (AW-LS). We apply each processing method to two real databases and evaluate its performance with simple supervised classifiers. Classification outcomes for MOF, BOB and BP clearly outperform WT and AW-LS in all classifiers for both databases, reaching accuracy values over 98%.

  13. Automated Variability Selection in Time-domain Imaging Surveys Using Sparse Representations with Learned Dictionaries

    NASA Astrophysics Data System (ADS)

    Wozniak, Przemyslaw R.; Moody, D. I.; Ji, Z.; Brumby, S. P.; Brink, H.; Richards, J.; Bloom, J. S.

    2013-01-01

    Exponential growth in data streams and discovery power delivered by modern time-domain imaging surveys creates a pressing need for variability extraction algorithms that are both fully automated and highly reliable. The current state-of-the-art methods based on image differencing are limited by the fact that for every real variable source the algorithm returns a large number of bogus "detections" caused by atmospheric effects and instrumental signatures coupled with imperfect image processing. Here we present a new approach to this problem inspired by recent advances in computer vision and train the machine directly on pixel data. The training data set comes from the Palomar Transient Factory survey and consists of small images centered around transient candidates with known real/bogus classification. This set of 441-dimensional vectors (21x21 pixel images) is then transformed to a linear representation using the so-called dictionary, an overcomplete basis constructed separately for each class. The learning algorithm captures the fact that the intrinsic dimensionality of the input images is typically much lower than the size of the dictionary, and therefore the data vectors are well approximated with a small number of dictionary elements. This sparse representation can be used to construct informative features for any suitable machine learning classifier. In our preliminary analysis, automatically extracted features approach the performance of features constructed by humans using subject domain knowledge.
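
    A compressed version of that pipeline, dictionary learning on flattened 21x21 cutouts followed by a classifier trained on the sparse codes, is sketched below. It departs from the record in two labeled ways: a single shared dictionary is learned instead of one per class, and a random forest stands in for the unspecified "suitable machine learning classifier"; the atom count and sparsity level are also assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import RandomForestClassifier

def train_real_bogus(patches, labels, n_atoms=128, sparsity=5):
    """Sparse codes of flattened 21x21 candidate cutouts as features for real/bogus screening.

    patches: (n_samples, 441) array of postage stamps; labels: 0 = bogus, 1 = real.
    """
    dico = DictionaryLearning(n_components=n_atoms,
                              transform_algorithm="omp",
                              transform_n_nonzero_coefs=sparsity)
    codes = dico.fit_transform(patches)                    # sparse representation of every stamp
    clf = RandomForestClassifier(n_estimators=200).fit(codes, labels)
    return dico, clf
```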

  14. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  15. Sparse Distributed Representation of Odors in a Large-scale Olfactory Bulb Circuit

    PubMed Central

    Yu, Yuguo; McTavish, Thomas S.; Hines, Michael L.; Shepherd, Gordon M.; Valenti, Cesare; Migliore, Michele

    2013-01-01

    In the olfactory bulb, lateral inhibition mediated by granule cells has been suggested to modulate the timing of mitral cell firing, thereby shaping the representation of input odorants. Current experimental techniques, however, do not enable a clear study of how the mitral-granule cell network sculpts odor inputs to represent odor information spatially and temporally. To address this critical step in the neural basis of odor recognition, we built a biophysical network model of mitral and granule cells, corresponding to 1/100th of the real system in the rat, and used direct experimental imaging data of glomeruli activated by various odors. The model allows the systematic investigation and generation of testable hypotheses of the functional mechanisms underlying odor representation in the olfactory bulb circuit. Specifically, we demonstrate that lateral inhibition emerges within the olfactory bulb network through recurrent dendrodendritic synapses when constrained by a range of balanced excitatory and inhibitory conductances. We find that the spatio-temporal dynamics of lateral inhibition plays a critical role in building the glomerular-related cell clusters observed in experiments, through the modulation of synaptic weights during odor training. Lateral inhibition also mediates the development of sparse and synchronized spiking patterns of mitral cells related to odor inputs within the network, with the frequency of these synchronized spiking patterns also modulated by the sniff cycle. PMID:23555237

  16. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  17. From molecular model to sparse representation of chromatographic signals with an unknown number of peaks.

    PubMed

    Bertholon, F; Harant, O; Foan, L; Vignoud, S; Jutten, C; Grangeat, P

    2015-08-01

    Analysis of a fluid mixture using a chromatographic system is a standard technique for many biomedical applications such as in-vitro diagnostics of body fluids or air and water quality assessment. The analysis is often targeted at a set of molecules or biomarkers. However, due to the fluid complexity, the number of mixture components is often larger than the list of targeted molecules. In order to get an analysis that is as exhaustive as possible and to take into account possible interferences, it is important to identify and quantify all the components included in the chromatographic signal. Thus the signal processing aims to reconstruct a list of an unknown number of components and their relative concentrations. We address this question as a problem of sparse representation of a chromatographic signal. The innovative representation is based on a stochastic forward model describing the transport of elementary molecules in the chromatography column as a molecular random walk. We investigate three methods: two probabilistic Bayesian approaches, one parametric and one non-parametric, and a deterministic approach based on a parsimonious decomposition over a dictionary basis. We examine the performance of these three approaches on an experimental case dedicated to the analysis of mixtures of the micro-pollutants Polycyclic Aromatic Hydrocarbons (PAH) in a methanol solution in two cases of high and low signal-to-noise ratio (SNR).

  18. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation.

    PubMed

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-16

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  19. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

    Background Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we propose novel image features for image categorization purposes, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 JPG images for use in our experiments, where 310 images were used as training samples and the rest were used as testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are

  20. Protein structure prediction: combining de novo modeling with sparse experimental data.

    PubMed

    Latek, Dorota; Ekonomiuk, Dariusz; Kolinski, Andrzej

    2007-07-30

    Routine structure prediction of new folds is still a challenging task for computational biology. The challenge lies not only in the proper determination of the overall fold but also in building models of acceptable resolution, useful for modeling drug interactions and protein-protein complexes. In this work we propose and test a comprehensive approach to protein structure modeling supported by sparse, and relatively easy to obtain, experimental data. We focus on chemical shift-based restraints from NMR, although other sparse restraints could be easily included. In particular, we demonstrate that combining typical NMR software with artificial intelligence-based prediction of secondary structure significantly enhances the accuracy of the restraints for molecular modeling. The computational procedure is based on the reduced representation approach implemented in the CABS modeling software, which proved to be a versatile tool for protein structure prediction during the CASP (critical assessment of techniques for protein structure prediction) experiments (see http://predictioncenter/CASP6/org). The method is successfully tested on a small set of representative globular proteins of different size and topology, including two CASP6 targets, for which the required NMR data already exist. The method is implemented in a semi-automated pipeline applicable to large-scale structural annotation of genomic data. Here, we limit the computations to a relatively small set, which enabled, without loss of generality, a detailed discussion of the various factors determining the accuracy of the proposed approach to protein structure prediction.

  1. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.

  2. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html. PMID:26701675

  3. Generalization of spectral fidelity with flexible measures for the sparse representation classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhu, Yong; Huang, Xin; Li, Jiayi

    2016-10-01

    Sparse representation classification (SRC) is becoming a promising tool for hyperspectral image (HSI) classification, where the Euclidean spectral distance (ESD) is widely used to reflect the fidelity between the original and reconstructed signals. In this paper, a generalized model is proposed to extend SRC by characterizing the spectral fidelity with flexible similarity measures. To validate this flexibility, several typical similarity measures, namely the spectral angle similarity (SAS), the spectral information divergence (SID), the structural similarity index measure (SSIM), and the ESD, are included in the generalized model. Furthermore, a general solution based on a gradient descent technique is used to solve the nonlinear optimization problem formulated by the flexible similarity measures. To test the generalized model, two actual HSIs were used, and the experimental results confirm the ability of the proposed model to accommodate the various spectral similarity measures. Performance comparisons with the ESD, SAS, SID, and SSIM criteria were also conducted, and the results consistently show the advantages of the generalized model for HSI classification in terms of overall accuracy and kappa coefficient.

  4. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data is presented; it includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
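
    The essence of this inversion, a discretized Laplace kernel with a non-negative, l1-regularized fit, can be sketched with standard tools. Scikit-learn's Lasso with a positivity constraint stands in for the PDCO solver used in the article; the T2 grid, the penalty weight and the kernel discretization below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def invert_relaxation(signal, times, n_components=100, alpha=1e-3):
    """Non-negative, l1-regularized inversion of a multi-exponential relaxation curve.

    signal: measured decay sampled at `times` (seconds).
    Returns the relaxation-time grid and the estimated sparse, non-negative spectrum.
    """
    T2 = np.logspace(-3, 1, n_components)   # relaxation-time grid (seconds)
    K = np.exp(-np.outer(times, 1.0 / T2))  # discretized Laplace kernel K[i, j] = exp(-t_i / T2_j)
    model = Lasso(alpha=alpha, positive=True, max_iter=50000).fit(K, signal)
    return T2, model.coef_
```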

  5. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR), with new feature histogram maps, for the classification of emphysema. First, an overcomplete dictionary of textons is learned via K-SVD on the image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a histogram dissimilarity measure as the distance. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.

  6. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim at using a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper the dictionary of textons is first learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct the histograms for texture representation. Finally, classification is performed by using a nearest neighbor classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and the texture classification method based on texton learning by k-means, which performs almost the best among the other approaches in the literature.

  7. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.

  8. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data is presented; it includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  9. A sparse representation based method to classify pulmonary patterns of diffuse lung diseases.

    PubMed

    Zhao, Wei; Xu, Rui; Hirano, Yasushi; Tachibana, Rie; Kido, Shoji

    2015-01-01

    We applied and optimized sparse representation (SR) approaches in computer-aided diagnosis (CAD) to classify normal tissue and five kinds of diffuse lung disease (DLD) patterns: consolidation, ground-glass opacity, honeycombing, emphysema, and nodule. Using the K-SVD, which is based on the singular value decomposition (SVD), together with orthogonal matching pursuit (OMP) achieves a satisfactory recognition rate, but at a high computational cost. To reduce the runtime, the K-Means algorithm was substituted for the K-SVD, and OMP was simplified by searching for all desired atoms in a single pass (OMP1). We evaluated three SR based methods: SR1 (K-SVD+OMP), SR2 (K-Means+OMP), and SR3 (K-Means+OMP1). 1161 volumes of interest (VOIs) were used to optimize the parameters and train each method, and 1049 VOIs were adopted to evaluate their performance. The SR based methods recognized the DLD patterns well (SR1: 96.1%, SR2: 95.6%, SR3: 96.4%) and were significantly better than the baseline methods. Furthermore, applying the K-Means and OMP1 reduced the runtime of the SR based methods by 98.2% and 55.2%, respectively. We therefore conclude that the method using K-Means and OMP1 (SR3) is efficient for the CAD of DLDs.
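
    The difference between classic OMP and the simplified single-pass variant (OMP1) described above can be sketched as follows; the dictionary and signal are random stand-ins, not the DLD data:

```python
import numpy as np

rng = np.random.default_rng(3)

def omp(D, y, k):
    """Classic OMP: greedily add the best-correlated atom, re-fit at each step."""
    idx, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def omp1(D, y, k):
    """OMP1: pick the k most correlated atoms in one pass, then a single least-squares fit."""
    idx = np.argsort(-np.abs(D.T @ y))[:k]
    coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

D = rng.normal(size=(100, 256))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
y = D[:, [5, 50, 200]] @ np.array([1.0, -0.7, 0.4])

for name, solver in [("OMP", omp), ("OMP1", omp1)]:
    x = solver(D, y, k=3)
    print(name, "residual:", round(float(np.linalg.norm(y - D @ x)), 4))
```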

  11. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  12. Dim moving target tracking algorithm based on particle discriminative sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Zhengzhou; Li, Jianing; Ge, Fengzeng; Shao, Wanxing; Liu, Bing; Jin, Gang

    2016-03-01

    A small dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms at low signal-to-noise ratio (SNR). A target tracking algorithm based on particle filter and discriminative sparse representation is proposed in this paper to cope with the uncertainty of dim moving target tracking. The weight of each particle is the crucial factor in ensuring the accuracy of dim target tracking for a particle filter (PF), which can achieve excellent performance even under non-linear and non-Gaussian motion. In the discriminative over-complete dictionary constructed from the image sequence, the target dictionary describes the target signal and the background dictionary embeds the background clutter. The difference between target particles and background particles is thereby greatly enhanced, and the weight of each particle is measured by means of the residual after reconstruction using a prescribed number of target atoms and their corresponding coefficients. The movement state of the dim moving target is then estimated and tracked by these weighted particles. Meanwhile, the subspace of the over-complete dictionary is updated online by a stochastic estimation algorithm. Experiments show that the proposed algorithm improves the performance of moving target tracking by enhancing the consistency between the posterior probability distribution and the moving target state.
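
    A rough sketch of the particle-weighting idea: each particle's image patch is reconstructed with a prescribed number of atoms from the target dictionary, and a smaller reconstruction residual yields a larger weight. The dictionary, patch features and the Gaussian weighting kernel below are placeholders, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(4)

patch_dim, n_target_atoms, n_particles, k = 49, 20, 50, 3   # 7x7 patches, 3 atoms used

# Target sub-dictionary of the discriminative over-complete dictionary (random stand-in).
D_target = rng.normal(size=(patch_dim, n_target_atoms))
D_target /= np.linalg.norm(D_target, axis=0)

def residual_k_atoms(D, y, k):
    """Reconstruct y with the k best-correlated atoms of D and return the residual norm."""
    idx = np.argsort(-np.abs(D.T @ y))[:k]
    coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
    return float(np.linalg.norm(y - D[:, idx] @ coef))

# Patches extracted at the predicted particle positions (random stand-ins).
particle_patches = rng.normal(size=(n_particles, patch_dim))
positions = rng.uniform(0, 64, size=(n_particles, 2))       # candidate (x, y) positions

sigma = 1.0
residuals = np.array([residual_k_atoms(D_target, p, k) for p in particle_patches])
weights = np.exp(-residuals ** 2 / (2 * sigma ** 2))         # small residual -> large weight
weights /= weights.sum()                                     # normalise for the particle filter

state = weights @ positions                                  # weighted state estimate
print("estimated position:", np.round(state, 2),
      "effective sample size:", round(1.0 / float(np.sum(weights ** 2)), 1))
```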

  13. Clustering-weighted SIFT-based classification method via sparse representation

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Xu, Feng; He, Jun

    2014-07-01

    In recent years, sparse representation-based classification (SRC) has received significant attention due to its high recognition rate. However, the original SRC method requires a rigid alignment, which is crucial for its application. Therefore, features such as SIFT descriptors are introduced into the SRC method, resulting in an alignment-free method. However, a feature-based dictionary always contains considerable useful information for recognition. We explore the relationship of the similarity of the SIFT descriptors to multitask recognition and propose a clustering-weighted SIFT-based SRC method (CWS-SRC). The proposed approach is considerably more suitable for multitask recognition with sufficient samples. Using two public face databases (AR and Yale face) and a self-built car-model database, the performance of the proposed method is evaluated and compared to that of the SRC, SIFT matching, and MKD-SRC methods. Experimental results indicate that the proposed method exhibits better performance in the alignment-free scenario with sufficient samples.

  14. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  15. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation

    PubMed Central

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-01-01

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement. PMID:27223287

  16. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  17. Intrinsic functional component analysis via sparse representation on Alzheimer's disease neuroimaging initiative database.

    PubMed

    Jiang, Xi; Zhang, Xin; Zhu, Dajiang

    2014-10-01

    Alzheimer's disease (AD) is the most common type of dementia (accounting for 60% to 80% of cases) and is the fifth leading cause of death among people aged 65 or older. By 2050, one new case of AD is expected to develop in the United States every 33 sec. Unfortunately, no effective treatment is available that can stop or slow the death of the neurons that causes AD symptoms. On the other hand, it is widely believed that AD starts before the associated symptoms develop, so its prestages, including mild cognitive impairment (MCI) and even significant memory concern (SMC), have received increasing attention, not only because of their potential as precursors of AD, but also as possible predictors of conversion to other neurodegenerative diseases. Although these prestages have been defined clinically, accurate and efficient diagnosis is still challenging, and the brain functional abnormalities behind these alterations and conversions remain unclear. In this article, by developing novel sparse representations of whole-brain resting-state functional magnetic resonance imaging signals and by using the most recent Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we identified multiple functional components simultaneously, which potentially represent the intrinsic functional networks involved in resting-state activity. Interestingly, these identified functional components contain all the resting-state networks obtained from traditional independent component analysis. Moreover, the features derived from these functional components yield high classification accuracy for both AD (94%) and MCI (92%) versus normal controls, and even for SMC the accuracy reaches 92%. PMID:24846640
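
    The sparse matrix factorisation underlying this kind of analysis (whole-brain signals decomposed into a temporal dictionary times sparse spatial coefficient maps) can be sketched with scikit-learn; the matrix sizes, number of components and sparsity penalty below are arbitrary stand-ins for the real resting-state data:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(5)

n_voxels, n_timepoints, n_components = 3000, 120, 25

# Rows = voxel time series (after standard preprocessing and normalisation).
X = rng.normal(size=(n_voxels, n_timepoints))

# Factorise X ~ codes @ dictionary:
#   dictionary: (n_components, n_timepoints)  -> temporal atoms
#   codes:      (n_voxels, n_components)      -> sparse spatial maps ("functional components")
model = MiniBatchDictionaryLearning(n_components=n_components, alpha=1.0, random_state=0)
codes = model.fit_transform(X)
temporal_atoms = model.components_

# Each column of `codes`, reshaped back to brain space, is one candidate functional component;
# its loading pattern can then be fed to a classifier (e.g. AD / MCI / SMC vs. controls).
print("temporal atoms:", temporal_atoms.shape, "spatial maps:", codes.shape)
print("fraction of non-zero loadings:", float(np.mean(codes != 0)))
```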

  19. Intrinsic Functional Component Analysis via Sparse Representation on Alzheimer's Disease Neuroimaging Initiative Database

    PubMed Central

    Jiang, Xi; Zhang, Xin

    2014-01-01

    Alzheimer's disease (AD) is the most common type of dementia (accounting for 60% to 80% of cases) and is the fifth leading cause of death among people aged 65 or older. By 2050, one new case of AD is expected to develop in the United States every 33 sec. Unfortunately, no effective treatment is available that can stop or slow the death of the neurons that causes AD symptoms. On the other hand, it is widely believed that AD starts before the associated symptoms develop, so its prestages, including mild cognitive impairment (MCI) and even significant memory concern (SMC), have received increasing attention, not only because of their potential as precursors of AD, but also as possible predictors of conversion to other neurodegenerative diseases. Although these prestages have been defined clinically, accurate and efficient diagnosis is still challenging, and the brain functional abnormalities behind these alterations and conversions remain unclear. In this article, by developing novel sparse representations of whole-brain resting-state functional magnetic resonance imaging signals and by using the most recent Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we identified multiple functional components simultaneously, which potentially represent the intrinsic functional networks involved in resting-state activity. Interestingly, these identified functional components contain all the resting-state networks obtained from traditional independent component analysis. Moreover, the features derived from these functional components yield high classification accuracy for both AD (94%) and MCI (92%) versus normal controls, and even for SMC the accuracy reaches 92%. PMID:24846640

  20. Prediction of protein-protein interactions with clustered amino acids and weighted sparse representation.

    PubMed

    Huang, Qiaoying; You, Zhuhong; Zhang, Xiaofeng; Zhou, Yong

    2015-01-01

    With the completion of the Human Genome Project, bioscience has entered the era of the genome and proteome, and research on protein-protein interactions (PPIs) is becoming more and more important. Life activities are inseparable from protein-protein interactions, for example in DNA synthesis, gene transcription activation, and protein translation. Although many methods based on biological experiments and machine learning have been proposed, they are time-consuming to train and their accuracy is limited. How to predict PPIs efficiently and accurately is still a big challenge. To take up this challenge, we developed a new predictor by incorporating reduced amino acid alphabet (RAAA) information into the general form of pseudo-amino acid composition (PseAAC) and combining it with weighted sparse representation-based classification (WSRC). A remarkable advantage of introducing the reduced amino acid alphabet is the ability to avoid the notorious dimensionality disaster, or overfitting problem, in statistical prediction. Additionally, experiments have shown that our method achieves good performance in both low- and high-dimensional feature spaces. Among all the experiments performed on the PPI data of Saccharomyces cerevisiae, the best run achieved 90.91% accuracy, 94.17% sensitivity, 87.22% precision and an 83.43% Matthews correlation coefficient (MCC). To evaluate the prediction ability of our method, extensive experiments were performed to compare it with the state-of-the-art technique, the support vector machine (SVM). The results show that the proposed approach is very promising for predicting PPIs and can be a helpful supplement for PPI prediction. PMID:25984606

  1. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the restrictions on improving face recognition accuracy. In this article, we view the element-wise product of two face images as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples reflect possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we strengthen the facial contour feature and greatly suppress noise, so that more essential facial information is retained. Uncertainty in the training data is also reduced as the number of training samples increases, which benefits the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify it. Experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
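
    The scheme described above (virtual samples from element-wise products of same-subject images, K nearest neighbours, least-squares representation, class-wise decision) can be sketched as follows; the decision rule used here, the class-wise reconstruction residual, is one common choice and an assumption on our part, and the data are random placeholders for vectorised face images:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

def expand_with_virtual_samples(X, y):
    """Add element-wise products of pairs of images from the same subject."""
    X_new, y_new = [X], [y]
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        for i, j in combinations(idx, 2):
            X_new.append((X[i] * X[j])[None, :])    # virtual face image
            y_new.append([c])
    return np.vstack(X_new), np.concatenate(y_new)

def classify(X, y, test, K=10):
    """Represent the test sample by its K nearest training samples, decide by class residual."""
    d = np.linalg.norm(X - test, axis=1)
    near = np.argsort(d)[:K]
    Xk, yk = X[near].T, y[near]                      # columns = selected training samples
    coef, *_ = np.linalg.lstsq(Xk, test, rcond=None)
    residuals = {c: np.linalg.norm(test - Xk[:, yk == c] @ coef[yk == c])
                 for c in np.unique(yk)}
    return min(residuals, key=residuals.get)

# Random stand-ins for vectorised face images (e.g. 32x32 -> 1024-d), 3 subjects x 4 images.
X = np.abs(rng.normal(size=(12, 1024)))
y = np.repeat([0, 1, 2], 4)
X_aug, y_aug = expand_with_virtual_samples(X, y)

test = np.abs(rng.normal(size=1024))
print("augmented training set:", X_aug.shape,
      "predicted subject:", classify(X_aug, y_aug, test))
```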

  2. Integrating fMRI and SNP data for biomarker identification for schizophrenia with a sparse representation based variable selection method

    PubMed Central

    2013-01-01

    Background In recent years, both single-nucleotide polymorphism (SNP) arrays and functional magnetic resonance imaging (fMRI) have been widely used in the study of schizophrenia (SCZ). In addition, a few studies have been reported that integrate both SNP data and fMRI data for comprehensive analysis. Methods In this study, a novel sparse representation based variable selection (SRVS) method is proposed and tested on a simulation data set to demonstrate its multi-resolution properties. The SRVS method was then applied to an integrative analysis of two different SCZ data sets, a single-nucleotide polymorphism (SNP) data set and a functional magnetic resonance imaging (fMRI) data set, including 92 cases and 116 controls. Biomarkers for the disease were identified and validated with a multivariate classification approach followed by leave-one-out (LOO) cross-validation. We then compared the results with those of a previously reported sparse representation based feature selection method. Results Biomarkers from our proposed SRVS method gave significantly higher classification accuracy in discriminating SCZ patients from healthy controls than those of the previously reported sparse representation method. Furthermore, using biomarkers from both data sets led to better classification accuracy than using a single type of biomarker, which suggests the advantage of integrative analysis of different types of data. Conclusions The proposed SRVS algorithm is effective in identifying significant biomarkers for a complicated disease such as SCZ. Integrating different types of data (e.g., SNP and fMRI data) may identify complementary biomarkers that improve the diagnostic accuracy of the disease. PMID:24565219

  3. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation.

    PubMed

    Dong, Weisheng; Fu, Fazuo; Shi, Guangming; Cao, Xun; Wu, Jinjian; Li, Guangyu

    2016-05-01

    Hyperspectral imaging has many applications from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and an HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to the learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. The experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency. PMID:27019486

  4. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    DOEpatents

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2015-07-28

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  5. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    DOEpatents

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2016-10-25

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  6. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matters undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, when the white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using a leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved a high accuracy for the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
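
    The patch-based sparse-representation label fusion at the heart of such a framework can be sketched as follows: each target patch (stacked T1/T2/FA intensities) is coded over a library of aligned atlas patches, and the atlas labels are fused with the resulting weights. The solver (scikit-learn's Lasso with positive=True), patch size and penalty are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

patch_dim = 3 * 5 * 5 * 5          # 5x5x5 patch stacked across T1, T2 and FA channels
n_library = 300                    # aligned patches from the segmented atlas library
labels = rng.integers(0, 3, size=n_library)   # 0 = CSF, 1 = GM, 2 = WM (per library patch)

library = rng.normal(size=(patch_dim, n_library))   # columns = library patches
target_patch = rng.normal(size=patch_dim)           # patch centred at the voxel to label

# Non-negative sparse coding of the target patch over the library.
coder = Lasso(alpha=0.05, positive=True, fit_intercept=False, max_iter=5000)
coder.fit(library, target_patch)
w = coder.coef_                                      # sparse, non-negative weights

# Fuse labels: probability of tissue l = total weight of library patches labelled l.
probs = np.array([w[labels == l].sum() for l in range(3)])
probs = probs / probs.sum() if probs.sum() > 0 else np.full(3, 1 / 3)
print("tissue probabilities (CSF, GM, WM):", np.round(probs, 3),
      "-> label", int(np.argmax(probs)))
```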

  7. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data-matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big-data of tfMRI and rsfMRI signals across all subjects in HCP Q1 release. PMID:25732072

  8. Sparse Shape Representation using the Laplace-Beltrami Eigenfunctions and Its Application to Modeling Subcortical Structures.

    PubMed

    Kim, Seung-Goo; Chung, Moo K; Schaefer, Stacey M; van Reekum, Carien; Davidson, Richard J

    2012-01-01

    We present a new sparse shape modeling framework on the Laplace-Beltrami (LB) eigenfunctions. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes by forming a Fourier series expansion. To reduce high frequency noise, only the first few terms are used in the expansion and higher frequency terms are simply thrown away. However, some lower frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we propose to retain only the significant eigenfunctions by imposing an l1-penalty. The new sparse framework can further avoid the additional surface-based smoothing often used in the field. The proposed approach is applied to investigate the influence of age (38-79 years) and gender on amygdala and hippocampus shapes in the normal population. In addition, we show how the emotional response is related to the anatomy of the subcortical structures.
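
    For an orthonormal basis, the l1-penalised coefficient fit has a closed form: project the data onto the basis and soft-threshold. The snippet below illustrates that selection mechanism with a generic orthonormal basis standing in for the LB-eigenfunctions on a surface mesh (sizes, noise level and penalty are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)

n_vertices, n_basis = 500, 60

# Orthonormal basis standing in for the first LB-eigenfunctions (columns of Q).
Q, _ = np.linalg.qr(rng.normal(size=(n_vertices, n_basis)))

# A surface coordinate function driven by a few eigenfunctions, plus measurement noise.
beta_true = np.zeros(n_basis)
beta_true[[2, 7, 25]] = [3.0, -2.0, 1.5]
y = Q @ beta_true + 0.3 * rng.normal(size=n_vertices)

# argmin_b 0.5*||y - Q b||^2 + lam*||b||_1  ==  soft-thresholded projection (Q orthonormal).
lam = 0.9
proj = Q.T @ y
beta_hat = np.sign(proj) * np.maximum(np.abs(proj) - lam, 0.0)

kept = np.flatnonzero(beta_hat)
print("eigenfunctions kept by the l1 penalty:", kept)
print("reconstruction error:",
      round(float(np.linalg.norm(Q @ beta_hat - Q @ beta_true)), 3))
```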

  9. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A Radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with a sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  10. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD based (K=4, L=2), the ANOVA based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
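
    A sketch of the evaluation pipeline (sparse coding of each feature vector with 4 atoms and at most 2 non-zero coefficients, logistic regression, leave-one-out ROC analysis); scikit-learn's DictionaryLearning is used here as a stand-in for K-SVD, and the data are random placeholders rather than the DCE-MRI features:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)

# Stand-in for the multiparametric kinetic/textural/morphologic feature vectors.
n_cases, n_features = 50, 30
X = rng.normal(size=(n_cases, n_features))
y = rng.integers(0, 2, size=n_cases)          # high vs. low recurrence risk

K, L = 4, 2                                   # dictionary size and max non-zero coefficients
scores = np.zeros(n_cases)
for train, test in LeaveOneOut().split(X):
    # Learn the dictionary on the training fold only, then encode both folds.
    dico = DictionaryLearning(n_components=K, transform_algorithm='omp',
                              transform_n_nonzero_coefs=L, random_state=0).fit(X[train])
    Z_train, Z_test = dico.transform(X[train]), dico.transform(X[test])
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y[train])
    scores[test] = clf.predict_proba(Z_test)[:, 1]

print("leave-one-out AUC on random data (expected ~0.5):",
      round(roc_auc_score(y, scores), 3))
```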

  11. Harnessing data structure for recovery of randomly missing structural vibration responses time history: Sparse representation versus low-rank structure

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2016-06-01

    Randomly missing data in structural vibration response time histories often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Besides, to alleviate the wireless data sampling or communication burden, a certain amount of data is often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the structural vibration response data itself to address this inverse problem. The key is an empirical, but often practically valid, observation: typically only a few modes are active in the structural vibration responses, hence the single-channel data vector has a sparse representation in the frequency domain, and the multi-channel data matrix has a low-rank structure (revealed by singular value decomposition). Exploiting such prior knowledge of the data structure (intra-channel sparsity or inter-channel low rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on several structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
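
    Minimal stand-ins for the two alternatives compared in this work: (a) single-channel recovery by iterative soft-thresholding on the (unitary) DFT coefficients, exploiting frequency-domain sparsity, and (b) multi-channel recovery by iterative singular-value soft-thresholding, exploiting the low-rank structure. Signal models, penalties and iteration counts are illustrative only, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 512

# ---------- (a) single channel: sparse in the frequency domain ----------
k = np.arange(n)
x_true = np.sin(2 * np.pi * 16 * k / n) + 0.5 * np.sin(2 * np.pi * 56 * k / n)  # few modes
mask = rng.random(n) > 0.6                                 # only ~40% of samples observed
y = mask * x_true

synth = lambda c: np.fft.ifft(c) * np.sqrt(n)              # unitary inverse DFT
analyse = lambda x: np.fft.fft(x) / np.sqrt(n)             # unitary forward DFT (its adjoint)

lam, c = 0.1, np.zeros(n, dtype=complex)
for _ in range(400):                                       # ISTA, step size 1 (operator norm <= 1)
    c = c - analyse(mask * (synth(c) - y))
    mag = np.abs(c)
    c *= np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)   # complex soft-threshold
x_hat = np.real(synth(c))
err_a = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

# ---------- (b) multi-channel: low-rank data matrix ----------
modes = np.column_stack([np.sin(2 * np.pi * f * k / n) for f in (16, 56)])   # (n, 2)
X_true = modes @ rng.normal(size=(2, 8))                   # 8 sensor channels, rank 2
M = rng.random(X_true.shape) > 0.6                         # ~40% observed entries
X_hat, tau = np.where(M, X_true, 0.0), 1.0
for _ in range(200):
    U, s, Vt = np.linalg.svd(X_hat, full_matrices=False)
    L = (U * np.maximum(s - tau, 0.0)) @ Vt                # singular-value soft-threshold
    X_hat = np.where(M, X_true, L)                         # keep observed entries, fill the rest
err_b = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)

print(f"relative recovery error: sparse-DFT {err_a:.3f}, low-rank completion {err_b:.3f}")
```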

  12. Assessing effects of prenatal alcohol exposure using group-wise sparse representation of fMRI data.

    PubMed

    Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Zhao, Shijie; Zhang, Tuo; Hu, Xintao; Han, Junwei; Guo, Lei; Li, Zhihao; Coles, Claire; Hu, Xiaoping; Liu, Tianming

    2015-08-30

    Task-based fMRI activation mapping has been widely used in clinical neuroscience in order to assess different functional activity patterns in conditions such as prenatal alcohol exposure (PAE) affected brains and healthy controls. In this paper, we propose a novel, alternative approach of group-wise sparse representation of the fMRI data of multiple groups of subjects (healthy control, exposed non-dysmorphic PAE and exposed dysmorphic PAE) and assess the systematic functional activity differences among these three populations. Specifically, a common time series signal dictionary is learned from the aggregated fMRI signals of all three groups of subjects, and then the weight coefficient matrices (named statistical coefficient map (SCM)) associated with each common dictionary were statistically assessed for each group separately. Through inter-group comparisons based on the correspondence established by the common dictionary, our experimental results have demonstrated that the group-wise sparse coding strategy and the SCM can effectively reveal a collection of brain networks/regions that were affected by different levels of severity of PAE. PMID:26195294

  13. Identification of functional networks in resting state fMRI data using adaptive sparse representation and affinity propagation clustering

    PubMed Central

    Li, Xuan; Wang, Haixian

    2015-01-01

    The human brain functional system has been viewed as a complex network. To accurately characterize this brain network, it is important to estimate the functional connectivity between separate brain regions (i.e., the association matrix). One common approach to evaluating the connectivity is the pairwise Pearson correlation. However, this bivariate method completely ignores the influence of other regions when computing the pairwise association. Another intractable issue in many approaches to further analyzing the network structure is the requirement to apply a threshold to the association matrix. To address these issues, we develop a novel scheme to investigate the brain functional networks. Specifically, we first establish a global functional connection network by using the Adaptive Sparse Representation (ASR), adaptively integrating the sparsity of ℓ1-norm and the grouping effect of ℓ2-norm for linear representation, and then identify connectivity patterns with the Affinity Propagation (AP) clustering algorithm. Results on both simulated and real data indicate that the proposed scheme is superior to the Pearson correlation in connectivity quality and clustering quality. Our findings suggest that the proposed scheme is an accurate and useful technique to delineate functional network structure for functionally parsimonious and correlated fMRI data with a large number of brain regions. PMID:26528123

  14. Temperature variation effects on sparse representation of guided-waves for damage diagnosis in pipelines

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2015-04-01

    Multiple ultrasonic guided-wave modes propagating along a pipe travel with different velocities, which are themselves a function of frequency. Reflections from the features of the structure (e.g., boundaries, pipe welding, damage, etc.), and their complex superposition, add to the complexity of guided waves. Guided-wave based damage diagnosis of pipelines becomes even more challenging when environmental and operational conditions (EOCs) vary (e.g., temperature, flow rate, inner pressure, etc.). These complexities make guided-wave based damage diagnosis of operating pipelines a challenging task. This paper reviews the approaches to date that address these challenges, and highlights the preferred characteristics of a method that simplifies guided-wave signals for damage diagnosis purposes. A method is proposed to extract a sparse subset of guided-wave signals in the time domain, while retaining optimal damage information for detection purposes. In this paper, the general concept of this method is proved through an extensive set of experiments. The effects of temperature variation on the detection performance of the proposed method, and on the discriminatory power of the extracted damage-sensitive features, are investigated. The potential of the proposed method for real-time damage detection is illustrated for a wide range of temperature variation scenarios (i.e., temperature differences between training and test data varying between -2°C and 13°C).

  15. Automatic Myonuclear Detection in Isolated Single Muscle Fibers Using Robust Ellipse Fitting and Sparse Representation.

    PubMed

    Su, Hai; Xing, Fuyong; Lee, Jonah D; Peterson, Charlotte A; Yang, Lin

    2014-01-01

    Accurate and robust detection of myonuclei in isolated single muscle fibers is required to calculate myonuclear domain size. However, this task is challenging because of: 1) shape and size variations of the nuclei, 2) overlapping nuclear clumps, and 3) multiple z-stack images with out-of-focus regions. In this paper, we propose a novel automatic detection algorithm to robustly quantify myonuclei in isolated single skeletal muscle fibers. The original z-stack images are first converted into one all-in-focus image using multi-focus image fusion. A sufficient number of ellipse fitting hypotheses are then generated from the myonuclei contour segments using heteroscedastic errors-in-variables (HEIV) regression. A set of representative training samples and a set of discriminative features are selected by a two-stage sparse model. The selected samples with representative features are utilized to train a classifier to select the best candidates. A modified inner geodesic distance based mean-shift clustering algorithm is used to produce the final nuclei detection results. The proposed method was extensively tested using 42 sets of z-stack images containing over 1,500 myonuclei. The method demonstrates excellent results that are better than current state-of-the-art approaches.

  16. Automatic Myonuclear Detection in Isolated Single Muscle Fibers Using Robust Ellipse Fitting and Sparse Representation

    PubMed Central

    Su, Hai; Xing, Fuyong; Lee, Jonah D.; Peterson, Charlotte A.; Yang, Lin

    2015-01-01

    Accurate and robust detection of myonuclei in isolated single muscle fibers is required to calculate myonuclear domain size. However, this task is challenging because of: 1) shape and size variations of the nuclei, 2) overlapping nuclear clumps, and 3) multiple z-stack images with out-of-focus regions. In this paper, we propose a novel automatic detection algorithm to robustly quantify myonuclei in isolated single skeletal muscle fibers. The original z-stack images are first converted into one all-in-focus image using multi-focus image fusion. A sufficient number of ellipse fitting hypotheses are then generated from the myonuclei contour segments using heteroscedastic errors-in-variables (HEIV) regression. A set of representative training samples and a set of discriminative features are selected by a two-stage sparse model. The selected samples with representative features are utilized to train a classifier to select the best candidates. A modified inner geodesic distance based mean-shift clustering algorithm is used to produce the final nuclei detection results. The proposed method was extensively tested using 42 sets of z-stack images containing over 1,500 myonuclei. The method demonstrates excellent results that are better than current state-of-the-art approaches. PMID:26356342

  17. Turbulent heat transfer from a sparsely vegetated surface - Two-component representation

    NASA Technical Reports Server (NTRS)

    Otterman, J.; Novak, M. D.; Starr, D. O'C.

    1993-01-01

    The conventional calculation of heat fluxes from a vegetated surface, involving a coefficient of turbulent heat transfer that increases logarithmically with surface roughness, is inappropriate for such highly structured surfaces as desert scrub or open forest. An approach is developed here for computing sensible heat flux from sparsely vegetated surfaces, where the absorption of insolation and the transfer of absorbed heat to the atmosphere are calculated separately for the plants and for the soil. This approach is applied to a desert-scrub surface in the northern Sinai, for which the turbulent transfer coefficient of sensible heat flux from the plants is much larger than that from the soil below, as shown by an analysis of plant, soil, and air temperatures. The plant density is expressed as the sum of products (plant height) x (plant diameter) of plants per unit horizontal surface area. The solar heat absorbed by the plants is assumed to be transferred immediately to the airflow. The effective turbulent transfer coefficient k(g-eff) for sensible heat from the desert-scrub/soil surface computed under this assumption increases sharply with increasing solar zenith angle, as the plants absorb a greater fraction of the incoming irradiation. The surface absorptivity (the coalbedo) also increases sharply with increasing solar zenith angle, and thus the sensible heat flux from such complex surfaces is a much broader function of time of day than when computed under constant k(g-eff) and constant albedo assumptions.

  18. Concept Abstractness and the Representation of Noun-Noun Combinations

    ERIC Educational Resources Information Center

    Xu, Xu; Paulson, Lisa

    2013-01-01

    Research on noun-noun combinations has been largely focusing on concrete concepts. Three experiments examined the role of concept abstractness in the representation of noun-noun combinations. In Experiment 1, participants provided written interpretations for phrases constituted by nouns of varying degrees of abstractness. Interpretive focus (the…

  19. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation proposed in [1] not only has the advantages of RBC but also is computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and a training sample. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Indeed, the analysis shows that the matching scores generated from direct matching of images and from collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
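
    Collaborative representation has a closed-form ridge solution, which makes the fusion easy to sketch: compute a class-wise residual score from the collaborative representation, a class-wise nearest-distance score from direct matching, normalise both and add them. The fusion rule (equal-weight sum of min-max normalised scores) is an assumption, not necessarily the paper's rule, and the data are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(11)

n_classes, n_per_class, dim = 5, 6, 200
X = rng.normal(size=(dim, n_classes * n_per_class))     # columns = training face images
X /= np.linalg.norm(X, axis=0)
labels = np.repeat(np.arange(n_classes), n_per_class)
y = X[:, 3] + 0.3 * rng.normal(size=dim)                # test image, truly from class 0

def crc_scores(X, labels, y, lam=0.01):
    """Collaborative representation: ridge coding, then class-wise regularised residuals."""
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    scores = []
    for c in range(labels.max() + 1):
        a_c = alpha[labels == c]
        scores.append(np.linalg.norm(y - X[:, labels == c] @ a_c)
                      / (np.linalg.norm(a_c) + 1e-12))
    return np.array(scores)

def direct_scores(X, labels, y):
    """Direct matching: smallest Euclidean distance to any training sample of each class."""
    d = np.linalg.norm(X - y[:, None], axis=0)
    return np.array([d[labels == c].min() for c in range(labels.max() + 1)])

def minmax(s):
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

fused = minmax(crc_scores(X, labels, y)) + minmax(direct_scores(X, labels, y))
print("predicted class:", int(np.argmin(fused)))        # smaller fused score = better match
```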

  20. Ring artifacts removal via spatial sparse representation in cone beam CT

    NASA Astrophysics Data System (ADS)

    Li, Zhongyuan; Li, Guang; Sun, Yi; Luo, Shouhua

    2016-03-01

    This paper addresses ring artifact removal in cone beam CT. Cone beam CT images often suffer from ring artifacts caused by the non-uniform responses of detector elements. Conventional ring artifact removal methods focus on the correlation of the elements and the structural characteristics of the ring artifacts in either the sinogram domain or the cross-section image. The challenge for these methods is to distinguish the artifacts from intrinsic structures; hence they often produce blurred results due to over-processing. In this paper, we investigate the characteristics of the ring artifacts in spatial space: unlike the continuous 3D texture of the scanned object, the ring artifacts appear discontinuously in spatial space, specifically along the z-axis. Thus the ring artifacts are easier to recognize in spatial space than in the cross-section. We therefore choose dictionary representation for ring artifact removal because of its high sensitivity to structural information. We verified our approach both in spatial space and in the coronal section, and the experimental results demonstrate that our method can remove the artifacts efficiently while maintaining image details.

  1. Automatic approach to solve the morphological galaxy classification problem using the sparse representation technique and dictionary learning

    NASA Astrophysics Data System (ADS)

    Diaz-Hernandez, R.; Ortiz-Esquivel, A.; Peregrina-Barreto, H.; Altamirano-Robles, L.; Gonzalez-Bernal, J.

    2016-06-01

    The observation of celestial objects in the sky is a practice that helps astronomers to understand the way in which the Universe is structured. However, due to the large number of objects observed with modern telescopes, analyzing them by hand is a difficult task. An important part of galaxy research is the classification of morphological structure based on the Hubble sequence. In this research, we present an approach to solve the morphological galaxy classification problem automatically by using the sparse representation technique and dictionary learning with K-SVD. For the tests in this work, we use a database of galaxies extracted from the Principal Galaxy Catalog (PGC) and the APM Equatorial Catalogue of Galaxies, obtaining a total of 2403 useful galaxies. In order to represent each galaxy frame, we propose to calculate a set of 20 features such as Hu's invariant moments, galaxy nucleus eccentricity, Gabor galaxy ratio and some other features commonly used in galaxy classification. A stage of feature relevance analysis was performed using Relief-f in order to determine the best parameters for the classification tests using 2, 3, 4, 5, 6 and 7 galaxy classes, building signal vectors of different lengths from the most important features. For the classification task, we use a 20-random cross-validation technique to evaluate classification accuracy with all signal sets, achieving a score of 82.27% for 2 galaxy classes and up to 44.27% for 7 galaxy classes.

  2. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    PubMed

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

    Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when repository's capacity and vertex number rose to a large degree. When repository's capacity was 10,000, with 2000 vertices on each shape, homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement

  3. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) with simulated and real LOFAR data. Results: We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives a correct photometry on high dynamic and wide-field images and improved realistic structures of extended sources (of simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  4. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment

    PubMed Central

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis: the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain computer interface (BCI), and they have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance, generalization ability and dependence on labeled samples in the analysis of EEG signals. This mini-review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
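    The SRC recipe summarized in this review—sparse-code a test signal over the pooled training dictionary, then assign the class whose atoms best reconstruct it—can be sketched as follows. The synthetic EEG-like features and the choice of OMP as the sparse solver are illustrative assumptions, not a specific method from the review.

```python
# Sketch of sparse representation-based classification (SRC):
# sparse-code a test sample over all training samples, then pick the class
# with the smallest class-wise reconstruction residual.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
n_feat, n_per_class, classes = 64, 30, [0, 1, 2]
# Dictionary D: columns are (normalized) training feature vectors, grouped by class
D = np.hstack([rng.normal(loc=c, size=(n_feat, n_per_class)) for c in classes])
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(classes, n_per_class)

def src_predict(x, n_nonzero=10):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False).fit(D, x)
    coef = omp.coef_
    residuals = []
    for c in classes:
        coef_c = np.where(labels == c, coef, 0.0)   # keep only class-c coefficients
        residuals.append(np.linalg.norm(x - D @ coef_c))
    return int(np.argmin(residuals))

test = rng.normal(loc=1, size=n_feat)               # a sample drawn near class 1
print('predicted class:', src_predict(test))
```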

  5. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment.

    PubMed

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis: the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain computer interface (BCI), and they have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance, generalization ability and dependence on labeled samples in the analysis of EEG signals. This mini-review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376

  7. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    The most common and frequently occurring neurological disorder is epilepsy, and the main method used for its diagnosis is electroencephalogram (EEG) signal analysis. Because of the length of EEG recordings, EEG signal analysis is quite time-consuming when performed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to analyze the classification of epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  8. Sex Education Representations in Spanish Combined Biology and Geology Textbooks

    NASA Astrophysics Data System (ADS)

    García-Cabeza, Belén; Sánchez-Bello, Ana

    2013-07-01

    Sex education is principally dealt with as part of the combined subject of Biology and Geology in the Spanish school curriculum. Teachers of this subject are not specifically trained to teach sex education, and thus the contents of their assigned textbooks are the main source of information available to them in this field. The main goal of this study was to determine what information Biology and Geology textbooks provide with regard to sex education and the vision of sexuality they give, but above all to reveal which perspectives of sex education they legitimise and which they silence. We analysed the textbooks in question by interpreting both visual and textual representations, as a means of investigating the nature of the discourse on sex education. With this aim, we used a qualitative methodology based on content analysis. The main analytical tool was an in-house grid constructed to allow us to analyse the visual and textual representations. Our analysis of the combined Biology and Geology textbooks for Secondary Year 3 revealed a tendency to reproduce models of sex education that fall within the framework of the more traditional discourses. Moreover, the results suggested that most of the textbooks in the sample take a superficial, incomplete, incorrect or biased approach to sex education.

  9. Visual tracking via robust multitask sparse prototypes

    NASA Astrophysics Data System (ADS)

    Zhang, Huanlong; Hu, Shiqiang; Yu, Junyang

    2015-03-01

    Sparse representation has been applied to online subspace learning-based tracking problems. To handle partial occlusion effectively, some researchers introduce l1 regularization into principal component analysis (PCA) reconstruction. However, in these traditional tracking methods, the representation of each object observation is often viewed as an individual task, so the inter-relationship between PCA basis vectors is ignored. We propose a new online visual tracking algorithm with multitask sparse prototypes, which combines multitask sparse learning with PCA-based subspace representation. We first extend a visual tracking algorithm with sparse prototypes into a multitask learning framework to mine inter-relations between subtasks. Then, to avoid the problem that enforcing all subtasks to share the same structure may result in degraded tracking results, we impose group sparse constraints on the coefficients of PCA basis vectors and element-wise sparse constraints on the error coefficients, respectively. Finally, we show that the proposed optimization problem can be effectively solved using the accelerated proximal gradient method with fast convergence. Experimental results compared with the state-of-the-art tracking methods demonstrate that the proposed algorithm achieves favorable performance when the object undergoes partial occlusion, motion blur, and illumination changes.
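    The sparse-prototype observation model this tracker builds on represents an observation as a PCA reconstruction plus a sparse error term that absorbs occlusion. A minimal alternating sketch of that model (least squares for the subspace coefficients, soft-thresholding for the sparse error; all names and sizes are illustrative assumptions) is shown below; the multitask and group-sparse extensions of the paper are not reproduced.

```python
# Sketch of the sparse-prototype observation model: y ~ U z + e, with e sparse
# (occlusion/outliers). Alternate a least-squares update for z with a
# soft-thresholding (proximal) update for e.
import numpy as np

rng = np.random.default_rng(3)
d, k = 1024, 16                                  # patch dimension, number of PCA basis vectors
U, _ = np.linalg.qr(rng.normal(size=(d, k)))     # orthonormal PCA basis (stand-in)
z_true = rng.normal(size=k)
e_true = np.zeros(d)
e_true[:100] = 2.0                               # simulated partial occlusion
y = U @ z_true + e_true + 0.01 * rng.normal(size=d)

lam, z, e = 0.1, np.zeros(k), np.zeros(d)
for _ in range(50):
    z = U.T @ (y - e)                            # exact LS step (U has orthonormal columns)
    r = y - U @ z
    e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)   # prox of lam*||e||_1

print('occlusion support recovered:', int((np.abs(e[:100]) > 0.5).sum()), 'of 100')
```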

  10. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to refine the prediction. In addition, a patch selection based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures.
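    A stripped-down view of the patch-based coupled-dictionary idea is to sparse-code a low-dose patch on a low-dose dictionary and reuse the coefficients with a paired standard-dose dictionary. The sketch below illustrates only this transfer step with synthetic dictionaries and patch sizes; the mapping and incremental-refinement stages of m-SR are not reproduced.

```python
# Sketch of coupled-dictionary prediction: sparse-code a low-dose patch on the
# low-dose dictionary and reuse the coefficients with the paired standard-dose
# dictionary to synthesize the standard-dose patch.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
patch_dim, n_atoms = 125, 256                  # e.g. 5x5x5 patches (illustrative)
D_low = rng.normal(size=(patch_dim, n_atoms))
D_low /= np.linalg.norm(D_low, axis=0)
D_std = rng.normal(size=(patch_dim, n_atoms))  # paired standard-dose atoms
D_std /= np.linalg.norm(D_std, axis=0)

# A low-dose test patch generated from a few atoms (ground truth known here)
alpha_true = np.zeros(n_atoms)
alpha_true[[10, 20, 30]] = [1.0, -0.7, 0.4]
p_low = D_low @ alpha_true + 0.01 * rng.normal(size=patch_dim)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D_low, p_low)
p_std_pred = D_std @ omp.coef_                 # transfer coefficients to the other modality
p_std_true = D_std @ alpha_true
print('relative prediction error:',
      np.linalg.norm(p_std_pred - p_std_true) / np.linalg.norm(p_std_true))
```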

  11. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.

    PubMed

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S; Lin, Weili; Shen, Dinggang

    2016-01-21

    Positron emission tomography (PET) has been widely used in clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to refine the prediction. In addition, a patch selection based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. PMID:26732849

  12. Comparison of Support-Vector Machine and Sparse Representation Using a Modified Rule-Based Method for Automated Myocardial Ischemia Detection

    PubMed Central

    Tseng, Yi-Li; Lin, Keng-Sheng; Jaw, Fu-Shan

    2016-01-01

    An automatic method is presented for detecting myocardial ischemia, which can be considered an early symptom of acute coronary events. Myocardial ischemia commonly manifests as ST- and T-wave changes in ECG signals. The methods in this study are proposed to detect abnormal ECG beats using knowledge-based features and classification methods. A novel classification method, sparse representation-based classification (SRC), is employed to improve the performance of the existing algorithms. A comparison was made between two classification methods, SRC and support-vector machine (SVM), using rule-based vectors as the input feature space. Both methods are quantitatively evaluated to validate their performance. The SRC method combined with rule-based features demonstrates higher sensitivity than SVM; however, specificity and precision involve a trade-off. Moreover, the SRC method is less dependent on the selection of rule-based features and can achieve high performance using fewer features. The overall performances of the two methods proposed in this study are better than those of previous methods. PMID:26925158

  13. Sparse-view computed tomography image reconstruction via a combination of L(1) and SL(0) regularization.

    PubMed

    Qi, Hongliang; Chen, Zijia; Guo, Jingyu; Zhou, Linghong

    2015-01-01

    Low-dose computed tomography reconstruction is an important issue in the medical imaging domain, and sparse-view scanning has been widely studied as a potential strategy. The compressed sensing (CS) method has shown great potential to reconstruct high-quality CT images from sparse-view projection data. Nonetheless, low-contrast structures tend to be blurred by the total variation (TV, L1-norm of the gradient image) regularization. Moreover, TV produces blocky effects on smooth and edge regions. To overcome this limitation, this study proposes an iterative image reconstruction algorithm that combines L1 regularization and smoothed L0 (SL0) regularization. SL0 is a smooth approximation of the L0 norm and overcomes the sensitivity of the L0 norm to noise. To evaluate the proposed method, both qualitative and quantitative studies were conducted on a digital Shepp-Logan phantom and a real head phantom. Experimental comparative results indicate that the proposed L1/SL0-POCS algorithm can effectively suppress noise and artifacts, as well as preserve more structural information compared to other existing methods.
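    The smoothed L0 (SL0) surrogate replaces the discontinuous L0 count with a Gaussian-shaped approximation whose sharpness is controlled by a parameter sigma. The small numeric sketch below (generic, not the authors' reconstruction code) makes the approximation explicit.

```python
# The smoothed-L0 surrogate: ||x||_0 is approximated by N - sum_i exp(-x_i^2 / (2 sigma^2)),
# which tends to the true L0 count as sigma -> 0 while staying differentiable.
import numpy as np

def smoothed_l0(x, sigma):
    return x.size - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

x = np.array([0.0, 0.0, 0.5, -1.2, 0.0, 3.0])   # true L0 "norm" is 3
for sigma in (1.0, 0.3, 0.1, 0.01):
    print(f'sigma={sigma}:  SL0 = {smoothed_l0(x, sigma):.3f}')
# As sigma shrinks the surrogate approaches 3; SL0 solvers decrease sigma gradually
# while taking gradient steps and projecting back onto the data-consistency set.
```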

  14. Quantum dynamics with sparse grids: a combination of Smolyak scheme and cubature. Application to methanol in full dimensionality.

    PubMed

    Lauvergnat, David; Nauts, André

    2014-02-01

    Quantum dynamical approaches based on product-grids are limited to the studies of molecular systems with few degrees of freedom, typically less than ten. Recently, Avila et al. [G. Avila, T. Carrington, J. Chem. Phys., 131 (2009) 174103] have introduced the Smolyak scheme [S.A. Smolyak, Sov. Math. Dokl., 4 (1963) 240], which considerably reduces the size of the grids. This approach has pushed back the present calculation limits on the vibrational spectra of polyatomic molecules. In the present study, we have developed an extension of the standard Smolyak scheme in which this scheme is combined with multidimensional grids, such as cubatures, to obtain new sparse grids. This scheme has been applied to the study of the torsional energy levels of methanol in full dimensionality (12D).

  15. Asteroids' physical models from combined dense and sparse photometry and scaling of the YORP effect by the observed obliquity distribution

    NASA Astrophysics Data System (ADS)

    Hanuš, J.; Ďurech, J.; Brož, M.; Marciniak, A.; Warner, B. D.; Pilcher, F.; Stephens, R.; Behrend, R.; Carry, B.; Čapek, D.; Antonini, P.; Audejean, M.; Augustesen, K.; Barbotin, E.; Baudouin, P.; Bayol, A.; Bernasconi, L.; Borczyk, W.; Bosch, J.-G.; Brochard, E.; Brunetto, L.; Casulli, S.; Cazenave, A.; Charbonnel, S.; Christophe, B.; Colas, F.; Coloma, J.; Conjat, M.; Cooney, W.; Correira, H.; Cotrez, V.; Coupier, A.; Crippa, R.; Cristofanelli, M.; Dalmas, Ch.; Danavaro, C.; Demeautis, C.; Droege, T.; Durkee, R.; Esseiva, N.; Esteban, M.; Fagas, M.; Farroni, G.; Fauvaud, M.; Fauvaud, S.; Del Freo, F.; Garcia, L.; Geier, S.; Godon, C.; Grangeon, K.; Hamanowa, H.; Hamanowa, H.; Heck, N.; Hellmich, S.; Higgins, D.; Hirsch, R.; Husarik, M.; Itkonen, T.; Jade, O.; Kamiński, K.; Kankiewicz, P.; Klotz, A.; Koff, R. A.; Kryszczyńska, A.; Kwiatkowski, T.; Laffont, A.; Leroy, A.; Lecacheux, J.; Leonie, Y.; Leyrat, C.; Manzini, F.; Martin, A.; Masi, G.; Matter, D.; Michałowski, J.; Michałowski, M. J.; Michałowski, T.; Michelet, J.; Michelsen, R.; Morelle, E.; Mottola, S.; Naves, R.; Nomen, J.; Oey, J.; Ogłoza, W.; Oksanen, A.; Oszkiewicz, D.; Pääkkönen, P.; Paiella, M.; Pallares, H.; Paulo, J.; Pavic, M.; Payet, B.; Polińska, M.; Polishook, D.; Poncy, R.; Revaz, Y.; Rinner, C.; Rocca, M.; Roche, A.; Romeuf, D.; Roy, R.; Saguin, H.; Salom, P. A.; Sanchez, S.; Santacana, G.; Santana-Ros, T.; Sareyan, J.-P.; Sobkowiak, K.; Sposetti, S.; Starkey, D.; Stoss, R.; Strajnic, J.; Teng, J.-P.; Trégon, B.; Vagnozzi, A.; Velichko, F. P.; Waelchli, N.; Wagrez, K.; Wücher, H.

    2013-03-01

    Context. The growing number of asteroid shape models and rotational states derived by the lightcurve inversion method gives us better insight into both the nature of individual objects and the whole asteroid population. With a larger statistical sample we can study the physical properties of asteroid populations, such as main-belt asteroids or individual asteroid families, in more detail. Shape models can also be used in combination with other types of observational data (IR, adaptive optics images, stellar occultations), e.g., to determine sizes and thermal properties. Aims: We use all available photometric data of asteroids to derive their physical models by the lightcurve inversion method and compare the observed pole latitude distributions of all asteroids with known convex shape models with simulated pole latitude distributions. Methods: We used classical dense photometric lightcurves from several sources (Uppsala Asteroid Photometric Catalogue, Palomar Transient Factory survey, and individual observers) and sparse-in-time photometry from the U.S. Naval Observatory in Flagstaff, Catalina Sky Survey, and La Palma surveys (IAU codes 689, 703, 950) in the lightcurve inversion method to determine asteroid convex models and their rotational states. We also extended a simple dynamical model for the spin evolution of asteroids used in our previous paper. Results: We present 119 new asteroid models derived from combined dense and sparse-in-time photometry. We discuss the reliability of asteroid shape models derived only from Catalina Sky Survey data (IAU code 703) and present 20 such models. By using different values for the scaling parameter cYORP (corresponding to the magnitude of the YORP momentum) in the dynamical model for the spin evolution and by comparing synthetic and observed pole-latitude distributions, we were able to constrain the typical values of the cYORP parameter to be between 0.05 and 0.6. Table 3 is available in electronic form at http://www.aanda.org

  16. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  17. Evaluating coastal sea surface heights based on a novel sub-waveform approach using sparse representation and conditional random fields

    NASA Astrophysics Data System (ADS)

    Uebbing, Bernd; Roscher, Ribana; Kusche, Jürgen

    2016-04-01

    Satellite radar altimeters allow global monitoring of mean sea level changes over the last two decades. However, coastal regions are less well observed because land located inside the altimeter footprint influences the returned signal energy. The altimeter emits a radar pulse that is reflected at the nadir surface and measures the two-way travel time, as well as the returned energy as a function of time, resulting in a return waveform. Over the open ocean the waveform shape corresponds to a theoretical model which can be used to infer information on range corrections, significant wave height or wind speed. However, in coastal areas the shape of the waveform is significantly influenced by return signals from land located in the altimeter footprint, leading to peaks which tend to bias the estimated parameters. Recently, several approaches dealing with this problem have been published, including utilizing only parts of the waveform (sub-waveforms), estimating the parameters in two steps or estimating additional peak parameters. We present a new approach to estimating sub-waveforms using conditional random fields (CRF) based on spatio-temporal waveform information. The CRF piece-wise approximates the measured waveforms based on a pre-derived dictionary of theoretical waveforms for various combinations of the geophysical parameters; neighboring range gates are likely to be assigned to the same underlying sub-waveform model. Depending on the choice of hyperparameters in the CRF estimation, the classification into sub-waveforms can be either finer or coarser, resulting in multiple sub-waveform hypotheses. After the sub-waveforms have been detected, existing retracking algorithms can be applied to derive water heights or other desired geophysical parameters from particular sub-waveforms. To identify the optimal heights from the multiple hypotheses, instead of utilizing a known reference height, we apply a Dijkstra-algorithm to find the "shortest path" of all

  18. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, an approach for image fusion based on a novel dictionary learning scheme is proposed in this paper. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
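    A bare-bones version of sparse-representation fusion is to learn a shared patch dictionary, code the patches of both source images, and keep the coefficient with the larger magnitude per atom. The sketch below follows that recipe with scikit-learn; the nonlocal self-similarity grouping that defines NL_SK_SVD is omitted, and all sizes are illustrative assumptions.

```python
# Sketch of sparse-representation image fusion with a shared patch dictionary:
# encode patches of both sources and fuse by the max-absolute-coefficient rule.
# (The nonlocal self-similarity grouping of NL_SK_SVD is not reproduced here.)
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(5)
n_patches, patch_dim, n_atoms = 500, 64, 128          # 8x8 patches, illustrative sizes
A = rng.normal(size=(n_patches, patch_dim))           # patches from source image A
B = rng.normal(size=(n_patches, patch_dim))           # patches from source image B

dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                   random_state=0).fit(np.vstack([A, B]))
D = dico.components_
codes_A = sparse_encode(A, D, algorithm='omp', n_nonzero_coefs=5)
codes_B = sparse_encode(B, D, algorithm='omp', n_nonzero_coefs=5)

# Max-absolute-value fusion rule applied coefficient-wise
fused_codes = np.where(np.abs(codes_A) >= np.abs(codes_B), codes_A, codes_B)
fused_patches = fused_codes @ D                       # reconstruct fused patches
print('fused patch matrix shape:', fused_patches.shape)
```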

  19. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-01-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is therefore important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel, effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP detects true sparsity better than orthogonal matching pursuit (OMP), which helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning, and Batch-OMP improves the dictionary learning process by removing the limitation on handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632

  20. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is therefore important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel, effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP detects true sparsity better than orthogonal matching pursuit (OMP), which helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning, and Batch-OMP improves the dictionary learning process by removing the limitation on handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  2. Variable Selection for Sparse High-Dimensional Nonlinear Regression Models by Combining Nonnegative Garrote and Sure Independence Screening

    PubMed Central

    Xue, Hongqi; Wu, Yichao; Wu, Hulin

    2013-01-01

    In many regression problems, the relations between the covariates and the response may be nonlinear. Motivated by the application of reconstructing a gene regulatory network, we consider a sparse high-dimensional additive model with the additive components being some known nonlinear functions with unknown parameters. To identify the subset of important covariates, we propose a new method for simultaneous variable selection and parameter estimation by iteratively combining a large-scale variable screening (the nonlinear independence screening, NLIS) and a moderate-scale model selection (the nonnegative garrote, NNG) for the nonlinear additive regressions. We have shown that the NLIS procedure possesses the sure screening property and it is able to handle problems with non-polynomial dimensionality; and for finite dimension problems, the NNG for the nonlinear additive regressions has selection consistency for the unimportant covariates and also estimation consistency for the parameter estimates of the important covariates. The proposed method is applied to simulated data and a real data example for identifying gene regulations to illustrate its numerical performance. PMID:25170239

  3. Sparse Methods for Biomedical Data

    PubMed Central

    Ye, Jieping; Liu, Jun

    2013-01-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data. PMID:24076585

  4. Algorithms for sparse nonnegative Tucker decompositions.

    PubMed

    Mørup, Morten; Hansen, Lars Kai; Arnfred, Sidse M

    2008-08-01

    There is an increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities, hence the proposed algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is more appropriate for the data as well as how to select the number of components by turning off excess components. The algorithms for SN-TUCKER can be downloaded from Mørup (2007).

  5. Robust visual multitask tracking via composite sparse model

    NASA Astrophysics Data System (ADS)

    Jin, Bo; Jing, Zhongliang; Wang, Meng; Pan, Han

    2014-11-01

    Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, which led to the so-called multitask tracking algorithm (MTT). Although MTT shows impressive tracking performance by mining the interdependencies between particles, the individual feature of each particle is underestimated. The L1,q norm regularization it uses assumes all features are shared between all particles and results in nearly identical representation coefficients in nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model to formulate the object appearance as a combination of the shared feature component, the individual feature component, and the outlier component. The composite sparsity is achieved via the L and L1,1 norm minimization, and is optimized by the alternating direction method of multipliers, which provides favorable reconstruction performance and impressive computational efficiency. Moreover, a dynamical dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges, and experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speeds than traditional sparse models, and that CSMTT has consistently better tracking performance than seven state-of-the-art trackers.

  6. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach.

    PubMed

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of tool-use training. In terms of sensory inputs, tool use was conceptualized as concurrent tactile stimulation from the hand, due to holding the tool, and auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. This prediction was confirmed by a behavioral experiment, in which we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments in both simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool-use, which is

  7. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach

    PubMed Central

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated in a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS dynamically modifies depending on experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of tool-use training. In terms of sensory inputs, tool use was conceptualized as concurrent tactile stimulation from the hand, due to holding the tool, and auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, such as after tool-use. This prediction was confirmed by a behavioral experiment, in which we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments in both simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool-use, which is

  8. A combined representation method for use in band structure calculations. 1: Method

    NASA Technical Reports Server (NTRS)

    Friedli, C.; Ashcroft, N. W.

    1975-01-01

    A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.

  9. Elemental representation and configural mappings: combining elemental and configural theories of associative learning.

    PubMed

    McLaren, I P L; Forrest, C L; McLaren, R P

    2012-09-01

    In this article, we present our first attempt at combining an elemental theory designed to model representation development in an associative system (based on McLaren, Kaye, & Mackintosh, 1989) with a configural theory that models associative learning and memory (McLaren, 1993). After considering the possible advantages of such a combination (and some possible pitfalls), we offer a hybrid model that allows both components to produce the phenomena that they are capable of without introducing unwanted interactions. We then successfully apply the model to a range of phenomena, including latent inhibition, perceptual learning, the Espinet effect, and first- and second-order retrospective revaluation. In some cases, we present new data for comparison with our model's predictions. In all cases, the model replicates the pattern observed in our experimental results. We conclude that this line of development is a promising one for arriving at general theories of associative learning and memory.

  10. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    PubMed

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views per rotation around the body (sparse-view) has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress the significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of the sinogram data; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.

  11. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that small spatial details are difficult to represent and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the full information content of all the measurements mentioned. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles of strategies and concepts for the generation of MRPs are shown. Examples of regional gravity field determination are presented.
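    The combination idea described here—coarse detail levels taken mainly from satellite data, fine levels mainly from terrestrial data—can be illustrated with a planar 2-D wavelet pyramid. The sketch below is only a schematic stand-in for the spherical MRP of the abstract, with synthetic grids and an arbitrary choice of wavelet and decomposition depth.

```python
# Schematic multi-resolution combination: low-resolution levels from a
# "satellite-like" grid, high-resolution levels from a "terrestrial-like" grid.
# A flat 2-D wavelet pyramid stands in for the spherical MRP of the abstract.
import numpy as np
import pywt

rng = np.random.default_rng(6)
satellite = rng.normal(size=(128, 128))                      # smooth/global signal (illustrative)
terrestrial = satellite + 0.2 * rng.normal(size=(128, 128))  # adds local detail

levels = 4
c_sat = pywt.wavedec2(satellite, 'db4', level=levels)
c_ter = pywt.wavedec2(terrestrial, 'db4', level=levels)

# Keep the approximation and coarsest detail levels from the satellite data,
# replace the two finest detail levels with the terrestrial data.
combined = list(c_sat)
combined[-1] = c_ter[-1]
combined[-2] = c_ter[-2]
gravity_model = pywt.waverec2(combined, 'db4')
print('combined field shape:', gravity_model.shape)
```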

  12. Effects of damage location and size on sparse representation of guided-waves for damage diagnosis of pipelines under varying temperature

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2015-04-01

    In spite of their many advantages, real-world application of guided-waves for structural health monitoring (SHM) of pipelines is still quite limited. The challenges can be discussed under three headings: (1) multiple modes, (2) multipath reflections, and (3) sensitivity to environmental and operational conditions (EOCs). These challenges are reviewed in the authors' previous work. This paper is part of a study whose objective is to overcome these challenges for damage diagnosis of pipes while addressing the limitations of current approaches, that is, to develop methods that simplify the signal while retaining damage information, perform well as EOCs vary, and minimize the use of transducers. In this paper, a supervised method is proposed to extract a sparse subset of the ultrasonic guided-wave signals that contains optimal damage information for detection purposes. That is, a discriminant vector is calculated so that the projections of undamaged and damaged pipes on this vector are separated. In the training stage, data are recorded from an intact pipe and from a pipe with an artificial structural abnormality (to simulate any variation from the intact condition). During the monitoring stage, test signals are projected on the discriminant vector, and these projections are used as damage-sensitive features for detection purposes. Being a supervised method, factors such as EOC variations and differences in the characteristics of the structural abnormality in the training and test data may affect the detection performance. This paper reports experiments investigating the extent to which differences in damage size and damage location, as well as temperature, can influence the discriminatory power of the extracted damage-sensitive features. The results suggest that, for practical ranges of monitoring and damage sizes of interest, the proposed method has low sensitivity to such training factors. High detection performances are obtained for temperature differences up to 14
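    The supervised discriminant-vector step described above—find a direction on which projections of baseline and abnormal-condition signals separate—can be sketched with a standard Fisher discriminant. The synthetic "signals" and the use of scikit-learn's LDA below are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of the supervised feature-extraction step: learn a discriminant vector
# from baseline (intact) vs. abnormal training signals, then use the projection
# of a test signal on that vector as the damage-sensitive feature.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n_samples, n_features = 60, 200                      # synthetic guided-wave records
intact = rng.normal(size=(n_samples, n_features))
damaged = rng.normal(size=(n_samples, n_features)) + 0.5   # shifted to mimic an abnormality
X = np.vstack([intact, damaged])
y = np.array([0] * n_samples + [1] * n_samples)

lda = LinearDiscriminantAnalysis().fit(X, y)         # discriminant vector in lda.coef_
test_signal = rng.normal(size=(1, n_features)) + 0.5
feature = lda.decision_function(test_signal)         # projection-based damage feature
print('damage-sensitive feature value:', float(feature))
```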

  13. Gravitational microlensing - Powerful combination of ray-shooting and parametric representation of caustics

    NASA Technical Reports Server (NTRS)

    Wambsganss, J.; Witt, H. J.; Schneider, P.

    1992-01-01

    We present a combination of two very different methods for numerically calculating the effects of gravitational microlensing: backward ray-tracing, which results in two-dimensional magnification patterns, and the parametric representation of caustic lines; the two are in a way complementary to each other. The combination of these methods is much more powerful than the sum of its parts. It allows one to determine the total magnification and the number of microimages as a function of source position. The mean number of microimages is calculated analytically and compared to the numerical results. The peaks in the lightcurves, as obtained from one-dimensional tracks through the magnification pattern, can now be divided into two groups: those which correspond to a source crossing a caustic, and those which are due to sources passing outside cusps. We determine the frequencies of these two types of events as a function of the surface mass density, and the probability distributions of their magnitudes. We find that for low surface mass density as many as 40 percent of all events in a lightcurve are not due to caustic crossings, but rather to passings outside cusps.

  14. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that rapid decay of the inverse entries is required so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential, but have not yet developed a highly refined and efficient algorithm.

  15. Emergence of representations through repeated training on pronouncing novel letter combinations leads to efficient reading.

    PubMed

    Takashima, Atsuko; Hulzink, Iris; Wagensveld, Barbara; Verhoeven, Ludo

    2016-08-01

    Printed text can be decoded by utilizing different processing routes depending on the familiarity of the script. A predominant use of word-level decoding strategies can be expected in the case of a familiar script, and an almost exclusive use of letter-level decoding strategies for unfamiliar scripts. Behavioural studies have revealed that frequently occurring words are read more efficiently than infrequent and unfamiliar words, suggesting that these words are read in a more holistic way at the word level. To test whether repeated exposure to specific letter combinations leads to holistic reading, we monitored both behavioural and neural responses during novel script decoding and examined changes related to repeated exposure. We trained a group of Dutch university students to decode pseudowords written in an unfamiliar script, i.e., Korean Hangul characters. We compared behavioural and neural responses to pronouncing trained versus untrained two-character pseudowords (equivalent to two-syllable pseudowords). We tested once shortly after the initial training and again after a four-day delay that included another training session. We found that trained pseudowords were pronounced faster and more accurately than novel combinations of radicals (equivalent to letters). Imaging data revealed that pronunciation of trained pseudowords engaged the posterior temporo-parietal region, and engagement of this network was predictive of reading efficiency a month later. The results imply that repeated exposure to specific combinations of graphemes can lead to the emergence of holistic representations that result in efficient reading. Furthermore, inter-individual differences revealed that good learners retained efficiency one month later more than bad learners did.

  16. Local sparse component analysis for blind source separation: an application to resting state FMRI.

    PubMed

    Vieira, Gilson; Amaro, Edson; Baccala, Luiz A

    2014-01-01

    We propose a new blind source separation technique for whole-brain activity estimation that best profits from FMRI's intrinsic spatial sparsity. The Local Sparse Component Analysis (LSCA) combines wavelet analysis, group-separable regularizers, contiguity-constrained clustering and principal component analysis (PCA) into a unique spatially sparse representation of FMRI images, achieving efficient dimensionality reduction without sacrificing physiological characteristics by avoiding artificial stochastic model constraints. The LSCA outperforms classical PCA source reconstruction on artificial data sets over many noise levels. An illustration on real FMRI data reveals resting-state activity in regions that are hard to observe because of their small spatial scale, such as the thalamus and basal ganglia. PMID:25571267

  17. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    NASA Astrophysics Data System (ADS)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining the gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because covariance descriptors lie on a non-Euclidean manifold, kernel sparse coding theory is used to address this issue. We verify the efficacy of the proposed algorithm in terms of confusion matrices on real images comprising seven categories of infrared vehicle targets.
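    The covariance descriptor referred to here stacks per-pixel features (intensity and gradients) and summarizes a region by their covariance matrix. A compact sketch with a synthetic image patch follows; the particular feature set is an illustrative assumption.

```python
# Region covariance descriptor: per-pixel features (intensity, |dx|, |dy|)
# summarized by their covariance matrix -- a compact representation of a
# target region that is tolerant to illumination changes.
import numpy as np

rng = np.random.default_rng(8)
patch = rng.random((32, 32))                 # stand-in for an infrared target chip

gy, gx = np.gradient(patch)                  # vertical / horizontal gradients
features = np.stack([patch, np.abs(gx), np.abs(gy)], axis=-1).reshape(-1, 3)
C = np.cov(features, rowvar=False)           # 3x3 covariance descriptor

print('covariance descriptor:\n', np.round(C, 4))
# Because such matrices live on the SPD manifold (not a Euclidean space),
# the paper sparse-codes them with a kernel defined on that manifold.
```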

  18. Timing of emotion representation in right and left occipital region: Evidence from combined TMS-EEG.

    PubMed

    Mattavelli, Giulia; Rosanova, Mario; Casali, Adenauer G; Papagno, Costanza; Romero Lauro, Leonor J

    2016-07-01

    Neuroimaging and electrophysiological studies provide evidence of hemispheric differences in processing faces and, in particular, emotional expressions. However, the timing of emotion representation in the right and left hemisphere is still unclear. Transcranial magnetic stimulation combined with electroencephalography (TMS-EEG) was used to explore cortical responsiveness during behavioural tasks requiring processing of either the identity or the expression of faces. Single-pulse TMS was delivered 100 ms after face onset over the medial prefrontal cortex (mPFC) while continuous EEG was recorded using a 60-channel TMS-compatible amplifier; the right premotor cortex (rPMC) was also stimulated as a control site. The same face stimuli with neutral, happy and fearful expressions were presented in separate blocks, and participants were asked to complete either a facial identity or a facial emotion matching task. Analyses performed on posterior face-specific EEG components revealed that mPFC-TMS reduced the P1-N1 component. In particular, only when explicit expression processing was required did mPFC-TMS interact with emotion type in relation to hemispheric side, and it did so at different timings: the first P1-N1 component was affected in the right hemisphere, whereas the later N1-P2 component was modulated in the left hemisphere. These findings support the hypothesis that the frontal cortex exerts an early influence on the occipital cortex during face processing and suggest a different timing of right- and left-hemisphere involvement in emotion discrimination.

  19. DNA binding protein identification by combining pseudo amino acid composition and profile-based protein representation

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Wang, Shanyi; Wang, Xiaolong

    2015-10-01

    DNA-binding proteins play an important role in most cellular processes. It is therefore necessary to develop an efficient predictor for identifying DNA-binding proteins based only on protein sequence information. The bottleneck in constructing a useful predictor is finding suitable features that capture the characteristics of DNA-binding proteins. We applied PseAAC to DNA-binding protein identification, and PseAAC was further improved by incorporating evolutionary information through a profile-based protein representation. Finally, combined with Support Vector Machines (SVMs), a predictor called iDNAPro-PseAAC was proposed. Experimental results on an updated benchmark dataset showed that iDNAPro-PseAAC outperformed some state-of-the-art approaches, and it can achieve stable performance on an independent dataset. By using an ensemble learning approach to incorporate more negative samples (non-DNA-binding proteins) in the training process, the performance of iDNAPro-PseAAC was further improved. The web server of iDNAPro-PseAAC is available at http://bioinformatics.hitsz.edu.cn/iDNAPro-PseAAC/.

  1. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead, which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures. PMID:26529746

  2. Inversion of magnetotelluric data in a sparse model domain

    NASA Astrophysics Data System (ADS)

    Nittinger, Christian G.; Becken, Michael

    2016-06-01

    The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least-squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multi-resolution wavelet basis, but does not impose explicit structural penalties on the model as is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the non-linear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.

  3. Inversion of magnetotelluric data in a sparse model domain

    NASA Astrophysics Data System (ADS)

    Nittinger, Christian G.; Becken, Michael

    2016-08-01

    The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least-squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multiresolution wavelet basis, but does not impose explicit structural penalties on the model as is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the nonlinear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.
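
    As a much-simplified illustration of the sparsity-constrained idea (not the authors' nonlinear MT scheme), the sketch below solves a linear ℓ2-ℓ1 problem, min ||A W c - d||_2^2 + λ||c||_1, by iterative soft thresholding (ISTA), with a random operator and an orthonormal matrix standing in for the wavelet basis. All data are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 80, 120                      # data size, model size
A = rng.standard_normal((n, m))     # linearized forward operator (synthetic)
W = np.linalg.qr(rng.standard_normal((m, m)))[0]  # orthonormal "wavelet" basis
c_true = np.zeros(m)
c_true[rng.choice(m, 6, replace=False)] = rng.standard_normal(6)
d = A @ (W @ c_true) + 0.01 * rng.standard_normal(n)

def soft(x, t):                     # soft thresholding: proximal map of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

B = A @ W                           # operator acting on wavelet coefficients
step = 1.0 / np.linalg.norm(B, 2) ** 2
lam = 0.05
c = np.zeros(m)
for _ in range(500):                # ISTA iterations for the l2-l1 objective
    c = soft(c - step * B.T @ (B @ c - d), step * lam)

print("non-zero coefficients:", np.count_nonzero(np.abs(c) > 1e-3))
```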

  4. Combined-hyperbolic-inverse-power-representation of potential energy surfaces: a preliminary assessment for H3 and HO2.

    PubMed

    Varandas, A J C

    2013-02-01

    The purpose is to fit an accurate smooth function of the many-body expansion type to a large multidimensional data set using a basis-set type method. By adopting a combined-hyperbolic-inverse-power-representation for the basis, the novel approach is tested in detail for the ground electronic state of the tri-hydrogen and hydroperoxyl systems, assuming that their potential energy surfaces are single-sheeted representable. It is also shown that the method is easily applicable to potential energy curves by considering molecular oxygen and the hydroxyl radical as prototypes. PMID:23406111

  5. Combined-hyperbolic-inverse-power-representation of potential energy surfaces: A preliminary assessment for H_3 and HO_2

    NASA Astrophysics Data System (ADS)

    Varandas, A. J. C.

    2013-02-01

    The purpose is to fit an accurate smooth function of the many-body expansion type to a large multidimensional data set using a basis-set type method. By adopting a combined-hyperbolic-inverse-power-representation for the basis, the novel approach is tested in detail for the ground electronic state of the tri-hydrogen and hydroperoxyl systems, assuming that their potential energy surfaces are single-sheeted representable. It is also shown that the method is easily applicable to potential energy curves by considering molecular oxygen and the hydroxyl radical as prototypes.

  6. Golden-Angle Radial Sparse Parallel MRI: Combination of Compressed Sensing, Parallel Imaging, and Golden-Angle Radial Sampling for Fast and Flexible Dynamic Volumetric MRI

    PubMed Central

    Feng, Li; Grimm, Robert; Block, Kai Tobias; Chandarana, Hersh; Kim, Sungheon; Xu, Jian; Axel, Leon; Sodickson, Daniel K.; Otazo, Ricardo

    2013-01-01

    Purpose To develop a fast and flexible free-breathing dynamic volumetric MRI technique, iterative Golden-angle RAdial Sparse Parallel MRI (iGRASP), that combines compressed sensing, parallel imaging, and golden-angle radial sampling. Methods Radial k-space data are acquired continuously using the golden-angle scheme and sorted into time series by grouping an arbitrary number of consecutive spokes into temporal frames. An iterative reconstruction procedure is then performed on the undersampled time series where joint multicoil sparsity is enforced by applying a total-variation constraint along the temporal dimension. Required coil-sensitivity profiles are obtained from the time-averaged data. Results iGRASP achieved higher acceleration capability than either parallel imaging or coil-by-coil compressed sensing alone. It enabled dynamic volumetric imaging with high spatial and temporal resolution for various clinical applications, including free-breathing dynamic contrast-enhanced imaging in the abdomen of both adult and pediatric patients, and in the breast and neck of adult patients. Conclusion The high performance and flexibility provided by iGRASP can improve clinical studies that require robustness to motion and simultaneous high spatial and temporal resolution. PMID:24142845
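
    A small illustration of the golden-angle sorting step only (not the iterative reconstruction): spoke angles are generated with the golden-angle increment of about 111.25 degrees and an arbitrary number of consecutive spokes is grouped into temporal frames. All sizes below are made up.

```python
import numpy as np

GOLDEN_ANGLE = 111.24611797  # degrees, 180 * (sqrt(5) - 1) / 2

n_spokes = 600
spokes_per_frame = 21        # arbitrary: temporal resolution is chosen retrospectively

angles = np.mod(np.arange(n_spokes) * GOLDEN_ANGLE, 180.0)
n_frames = n_spokes // spokes_per_frame
frames = angles[: n_frames * spokes_per_frame].reshape(n_frames, spokes_per_frame)

# any consecutive group of golden-angle spokes covers k-space fairly uniformly
print(frames.shape, "first-frame angle spread:", np.ptp(frames[0]))
```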

  7. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with conventional algorithms is very time- and memory-consuming due to the extremely large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. Eight real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
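
    The BSMC storage format is specific to the paper; the sketch below shows only the PCG half of the idea, solving a synthetic sparse normal-equation system with SciPy's conjugate gradient and a simple Jacobi (diagonal) preconditioner as a stand-in for the paper's preconditioning.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n = 2000
# synthetic sparse SPD "normal matrix" N = J^T J + damping, as in bundle adjustment
J = sp.random(3 * n, n, density=0.002, random_state=1, format="csr")
N = (J.T @ J + sp.identity(n)).tocsr()
b = rng.standard_normal(n)

# Jacobi preconditioner: multiply by the inverse of the diagonal of N
inv_diag = 1.0 / N.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

x, info = cg(N, b, M=M, maxiter=500)
print("converged" if info == 0 else f"cg returned {info}",
      np.linalg.norm(N @ x - b))
```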

  8. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    PubMed

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets.

  9. Deformable segmentation via sparse shape representation.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2011-01-01

    Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak/misleading due to disease/artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak/misleading appearance cues. Owing to the less trustworthy appearance information, this method focuses on effective shape modeling with two contributions. First, a shape composition method is designed to incorporate shape priors on-the-fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates a more compact shape prior modeling and hence a more robust and efficient segmentation. Our deformable model is applied to two very diverse segmentation problems, liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies. PMID:21995060

  10. Sparse representation for a potential energy surface

    NASA Astrophysics Data System (ADS)

    Seko, Atsuto; Takahashi, Akira; Tanaka, Isao

    2014-07-01

    We propose a simple scheme to estimate the potential energy surface (PES) for which the accuracy can be easily controlled and improved. It is based on model selection within the framework of linear regression using the least absolute shrinkage and selection operator (LASSO) technique. Basis functions are selected from a systematic large set of candidate functions. The sparsity of the PES significantly reduces the computational cost of evaluating the energy and force in molecular dynamics simulations without losing accuracy. The usefulness of the scheme for describing the elemental metals Na and Mg is clearly demonstrated.
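
    A minimal sketch of the LASSO-based basis selection idea using scikit-learn on a synthetic one-dimensional potential with an inverse-power candidate basis; the actual work uses systematic invariant basis sets for periodic systems, so everything below is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

r = np.linspace(0.8, 3.0, 200)                       # configurations (1-D stand-in)
E = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)         # synthetic reference energies

# large candidate basis: inverse powers r^-1 ... r^-14
X = np.column_stack([r ** (-k) for k in range(1, 15)])

model = Lasso(alpha=1e-4, max_iter=100000, fit_intercept=True)
model.fit(X, E)                                      # l1 penalty selects few basis terms
selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
print("selected basis functions (powers):", selected + 1)
print("max fit error:", np.max(np.abs(model.predict(X) - E)))
```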

  11. A combined model of sensory and cognitive representations underlying tonal expectations in music: from audio signals to behavior.

    PubMed

    Collins, Tom; Tillmann, Barbara; Barrett, Frederick S; Delbé, Charles; Janata, Petr

    2014-01-01

    Listeners' expectations for melodies and harmonies in tonal music are perhaps the most studied aspect of music cognition. Long debated has been whether faster response times (RTs) to more strongly primed events (in a music theoretic sense) are driven by sensory or cognitive mechanisms, such as repetition of sensory information or activation of cognitive schemata that reflect learned tonal knowledge, respectively. We analyzed over 300 stimuli from 7 priming experiments comprising a broad range of musical material, using a model that transforms raw audio signals through a series of plausible physiological and psychological representations spanning a sensory-cognitive continuum. We show that RTs are modeled, in part, by information in periodicity pitch distributions, chroma vectors, and activations of tonal space--a representation on a toroidal surface of the major/minor key relationships in Western tonal music. We show that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation. While tonal space variables explained more of the variation in RTs than did periodicity pitch variables, suggesting a greater contribution of cognitive influences to tonal expectation, a stepwise selection model contained variables from both representations and successfully explained the pattern of RTs across stimulus categories in 4 of the 7 experiments. The addition of closure--a cognitive representation of a specific syntactic relationship--succeeded in explaining results from all 7 experiments. We conclude that multiple representational stages along a sensory-cognitive continuum combine to shape tonal expectations in music. PMID:24490788

  13. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic ×10^3 speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization
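
    The GSLS method adds a backward-elimination pass that is not reproduced here; as a rough point of reference only, the sketch below runs plain forward greedy selection with scikit-learn's orthogonal matching pursuit, a close relative of the ORMP algorithm mentioned above. The data are random placeholders.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n, d, k = 100, 400, 5
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = X @ w_true + 0.01 * rng.standard_normal(n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)   # forward greedy selection only
omp.fit(X, y)
support = np.flatnonzero(omp.coef_)
print("recovered support:", support, "true:", np.flatnonzero(w_true))
```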

  14. Removing sparse noise from hyperspectral images with sparse and low-rank penalties

    NASA Astrophysics Data System (ADS)

    Tariyal, Snigdha; Aggarwal, Hemant Kumar; Majumdar, Angshul

    2016-03-01

    In diffraction grating, at times, there are defective pixels on the focal plane array; this results in horizontal lines of corrupted pixels in some channels. Since only a few such pixels exist, the corruption/noise is sparse. Studies on sparse noise removal from hyperspectral images are scarce. To remove such sparse noise, a prior work exploited the interband spectral correlation along with intraband spatial redundancy to yield a sparse representation in transform domains. We improve upon the prior technique. The intraband spatial redundancy is modeled as a sparse set of transform coefficients and the interband spectral correlation is modeled as a rank-deficient matrix. The resulting optimization problem is solved using the split Bregman technique. Comparative experimental results show that our proposed approach is better than the previous one.

  15. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided waves obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition from LDVs is generally a slow operation because the wave propagation to record must be repeated for each point measurement and the initial conditions must be reached between each measurement. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as being the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed using a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave

  16. Infrared image recognition based on structure sparse and atomic sparse parallel

    NASA Astrophysics Data System (ADS)

    Wu, Yalu; Li, Ruilong; Xu, Yi; Wang, Liping

    2015-12-01

    The redundancy of an overcomplete dictionary can capture the structural features of an image effectively and thus yield an effective representation of the image. However, the commonly used atomic sparse representation ignores the structure of the dictionary and the unrelated non-zero terms that arise during computation, while structured sparse representation, although it accounts for the structure of the dictionary, may leave the majority of coefficients within a block non-zero, which can reduce recognition efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic sparse and structured sparse representation is proposed, and recognition efficiency is improved by adaptive computation of the optimal weights. The atomic sparse and structured sparse representations are computed separately, and the optimal weights are calculated adaptively as follows: a small portion of the identification samples is used for training, and the recognition rate is calculated while the weights are increased in fixed steps under the constraint between them. With the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points can be plotted in a three-dimensional coordinate system, and the optimal weights are obtained by locating the highest recognition rate. Simulation experiments show that the adaptively determined optimal weights yield a better recognition rate; since the weights are obtained from only a few samples, the approach is suitable for parallel recognition and can effectively improve the recognition rate of infrared images.

  17. Developing body representations in early life: combining somatosensation and vision to perceive the interface between the body and the world.

    PubMed

    Bremner, Andrew J

    2016-03-01

    This article lays out the computational challenges involved in constructing multisensory representations of the body and the interface between the body and the external world. It then provides a review of the most pertinent empirical literature regarding the ontogeny of such representational abilities in early life, focussing especially on the ability to make spatiotemporal links between bodily events transduced by vision and somatosensation (cutaneous touch and proprioception), and the ability to use multisensory bodily cues to locate tactile stimuli. Findings from infants, children, and blind adults point towards a trajectory of development in early life in which infants and children, as a result of sensory experience, learn new ways of combining cues concerning the body arising from vision and somatosensation, in order to best represent the layout of their limbs and sensory events occurring on their limbs in relation to the external environment.

  18. Finding communities in sparse networks

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Humphries, Mark D.

    2015-03-01

    Spectral algorithms based on matrix representations of networks are often used to detect communities, but classic spectral methods based on the adjacency matrix and its variants fail in sparse networks. New spectral methods based on non-backtracking random walks have recently been introduced that successfully detect communities in many sparse networks. However, the spectrum of non-backtracking random walks ignores hanging trees in networks that can contain information about their community structure. We introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node, unlike the non-backtracking operators that forbid an immediate return. We show that the reluctant backtracking operators can detect communities in certain sparse networks where the non-backtracking operators cannot, while performing comparably on benchmark stochastic block model networks and real world networks. We also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function. Interestingly, for this family of non- and reluctant-backtracking operators the main determinant of performance on real-world networks is whether or not they are normalised to conserve probability at each node.

  19. Dissociable neural representations of grammatical gender in Broca's area investigated by the combination of satiation and TMS.

    PubMed

    Cattaneo, Zaira; Devlin, Joseph T; Vecchi, Tomaso; Silvanto, Juha

    2009-08-15

    Along with meaning and form, words can be described on the basis of their grammatical properties. Grammatical gender is often used to investigate the latter as it is a grammatical property that is independent of meaning. The left inferior frontal gyrus (IFG) has been implicated in the encoding of grammatical gender, but its causal role in this process in neurologically normal observers has not been demonstrated. Here we combined verbal satiation with transcranial magnetic stimulation (TMS) to demonstrate that subpopulations of neurons within Broca's area respond preferentially to different classes of grammatical gender. Subjects were asked to classify Italian nouns into living and nonliving categories; half of these words were of masculine and the other half of feminine grammatical gender. Prior to each test block, a satiation paradigm (a phenomenon in which verbal repetition of a category name leads to a reduced access to that category) was used to modulate the initial state of the representations of either masculine or feminine noun categories. In the No TMS condition, subjects were slower in responding to exemplars to the satiated category relative to exemplars of the nonsatiated category, implying that the neural representations for different classes of grammatical gender are partly dissociable. The application of TMS over Broca's area removed the behavioral impact of verbal (grammatical) satiation, demonstrating the causal role of this region in the encoding of grammatical gender. These results show that the neural representations for different cases of a grammatical property within Broca's area are dissociable. PMID:19442750

  20. A generalized representation-based approach for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Li, Wei; Du, Qian; Li, Yunsong

    2016-05-01

    Sparse representation-based classifier (SRC) is of great interest recently for hyperspectral image classification. It is assumed that a testing pixel can be represented as a linear combination of atoms of a dictionary. Under this circumstance, the dictionary includes all the training samples. The objective is to find a weight vector that yields a minimum L2 representation error with the constraint that the weight vector is sparse with a minimum L1 norm. The pixel is assigned to the class whose training samples yield the minimum error. In addition, the collaborative representation-based classifier (CRC) has also been proposed, where the weight vector has a minimum L2 norm. The CRC has a closed-form solution; when using class-specific representation it can yield even better performance than the SRC. Compared to traditional classifiers such as the support vector machine (SVM), SRC and CRC do not have a traditional training-testing fashion as in supervised learning, while their performance is similar to or even better than SVM. In this paper, we investigate a generalized representation-based classifier which uses Lq representation error, Lp weight norm, and adaptive regularization. The classification performance of Lq and Lp combinations is evaluated with several real hyperspectral datasets. Based on these experiments, recommendations are provided for practical implementation.
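
    A minimal numpy sketch of the collaborative-representation (minimum-L2-norm) variant described above: the coding vector solves (X^T X + λI) w = X^T y in closed form and the pixel is assigned to the class with the smallest class-wise reconstruction residual. The data are random placeholders, and the Lq/Lp generalization studied in the paper is not shown.

```python
import numpy as np

def crc_classify(X, class_ids, y, lam=0.01):
    """Collaborative representation classification with an L2-regularized code."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    w = np.linalg.solve(G, X.T @ y)                 # closed-form coding vector
    residuals = {}
    for c in np.unique(class_ids):
        idx = np.flatnonzero(class_ids == c)
        residuals[c] = np.linalg.norm(y - X[:, idx] @ w[idx])  # class-specific part
    return min(residuals, key=residuals.get), residuals

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 30))                   # 30 training spectra (columns)
class_ids = np.repeat(np.arange(3), 10)             # 3 classes, 10 samples each
y = X[:, 12] + 0.05 * rng.standard_normal(50)       # test pixel close to class 1
print(crc_classify(X, class_ids, y)[0])
```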

  1. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solve a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
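
    The patent describes analog circuitry; the sketch below is only the standard discrete-time software analogue of a locally competitive algorithm: leaky integration of internal states, soft thresholding, and lateral inhibition through the dictionary Gram matrix, with a random dictionary and signal as stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 64, 256                                   # signal dimension, dictionary size
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)               # unit-norm dictionary atoms
a_true = np.zeros(n)
a_true[rng.choice(n, 8, replace=False)] = 1.0
s = Phi @ a_true                                 # input signal

lam, tau, dt, steps = 0.1, 10.0, 1.0, 400
G = Phi.T @ Phi - np.eye(n)                      # lateral inhibition weights
b = Phi.T @ s                                    # feed-forward drive
u = np.zeros(n)                                  # internal (membrane-like) states
for _ in range(steps):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # thresholded outputs
    u += (dt / tau) * (b - u - G @ a)            # leaky integration plus competition
print("active coefficients:", np.count_nonzero(a))
```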

  2. Sparse pseudospectral approximation method

    NASA Astrophysics Data System (ADS)

    Constantine, Paul G.; Eldred, Michael S.; Phipps, Eric T.

    2012-07-01

    Multivariate global polynomial approximations - such as polynomial chaos or stochastic collocation methods - are now in widespread use for sensitivity analysis and uncertainty quantification. The pseudospectral variety of these methods uses a numerical integration rule to approximate the Fourier-type coefficients of a truncated expansion in orthogonal polynomials. For problems in more than two or three dimensions, a sparse grid numerical integration rule offers accuracy with a smaller node set compared to tensor product approximation. However, when using a sparse rule to approximately integrate these coefficients, one often finds unacceptable errors in the coefficients associated with higher degree polynomials. By reexamining Smolyak's algorithm and exploiting the connections between interpolation and projection in tensor product spaces, we construct a sparse pseudospectral approximation method that accurately reproduces the coefficients of basis functions that naturally correspond to the sparse grid integration rule. The compelling numerical results show that this is the proper way to use sparse grid integration rules for pseudospectral approximation.

  3. Combined numerical and linguistic knowledge representation and its application to medical diagnosis

    NASA Astrophysics Data System (ADS)

    Meesad, Phayung; Yen, Gary G.

    2002-07-01

    In this study, we propose a novel hybrid intelligent system (HIS) which provides a unified integration of numerical and linguistic knowledge representations. The proposed HIS is a hierarchical integration of an incremental learning fuzzy neural network (ILFN) and a linguistic model, i.e., a fuzzy expert system, optimized via a genetic algorithm. The ILFN is a self-organizing network with the capability of fast, one-pass, online, and incremental learning. The linguistic model is constructed based on knowledge embedded in the trained ILFN or provided by the domain expert. The knowledge captured from the low-level ILFN can be mapped to the higher-level linguistic model and vice versa. The GA is applied to optimize the linguistic model to maintain high accuracy, comprehensibility, completeness, compactness, and consistency. Once the system is completely constructed, it can incrementally learn new information in both numerical and linguistic forms. To evaluate the system's performance, the well-known benchmark Wisconsin breast cancer data set was studied for an application to medical diagnosis. The simulation results show that the proposed HIS performs better than the individual standalone systems. The comparison results show that the extracted linguistic rules are competitive with or even superior to some well-known methods.

  4. Re-Examining Evidence for the Use of Independent Relational Representations during Conceptual Combination

    ERIC Educational Resources Information Center

    Gagne, Christina L.; Spalding, Thomas L.; Ji, Hongbo

    2005-01-01

    In a recent study of conceptual combination, Estes (2003) presented evidence for the priming of relational information in the absence of shared constituents between the prime and target (e.g., "pancake spatula" was interpreted more quickly following "bacon tongs" than following "city riots"). He argued that these data support the view that…

  5. Combining Multiple External Representations and Refutational Text: An Intervention on Learning to Interpret Box Plots

    ERIC Educational Resources Information Center

    Lem, Stephanie; Kempen, Goya; Ceulemans, Eva; Onghena, Patrick; Verschaffel, Lieven; Van Dooren, Wim

    2015-01-01

    Box plots are frequently misinterpreted and educational attempts to correct these misinterpretations have not been successful. In this study, we used two instructional techniques that seemed powerful to change the misinterpretation of the area of the box in box plots, both separately and in combination, leading to three experimental conditions,…

  6. Sparse and dense coding of natural stimuli by distinct midbrain neuron subpopulations in weakly electric fish

    PubMed Central

    Vonderschen, Katrin; Chacron, Maurice J.

    2015-01-01

    While peripheral sensory neurons respond to natural stimuli with a broad range of spatiotemporal frequencies, central neurons instead respond sparsely to specific features in general. The nonlinear transformations leading to this emergent selectivity are not well understood. Here we characterized how the neural representation of stimuli changes across successive brain areas, using the electrosensory system of weakly electric fish as a model system. We found that midbrain torus semicircularis (TS) neurons were on average more selective in their responses than hindbrain electrosensory lateral line lobe (ELL) neurons. Further analysis revealed two categories of TS neurons: dense coding TS neurons that were ELL-like and sparse coding TS neurons that displayed selective responses. These neurons in general responded to preferred stimuli with few spikes and were mostly silent for other stimuli. We further investigated whether information about stimulus attributes was contained in the activities of ELL and TS neurons. To do so, we used a spike train metric to quantify how well stimuli could be discriminated based on spiking responses. We found that sparse coding TS neurons performed poorly even when their activities were combined compared with ELL and dense coding TS neurons. In contrast, combining the activities of as few as 12 dense coding TS neurons could lead to optimal discrimination. On the other hand, sparse coding TS neurons were better detectors of whether their preferred stimulus occurred compared with either dense coding TS or ELL neurons. Our results therefore suggest that the TS implements parallel detection and estimation of sensory input. PMID:21940609

  7. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    SciTech Connect

    Pinski, Peter; Riplinger, Christoph; Neese, Frank E-mail: frank.neese@cec.mpg.de; Valeev, Edward F. E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
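
    As a toy illustration of the sparse map concept only (a map from each index of one set to a sparse set of related indices, supporting chaining and intersection), the sketch below uses plain Python dictionaries of sets; it mirrors the idea, not the actual code library described above.

```python
from collections import defaultdict

class SparseMap:
    """Map each index of a domain to a sparse set of related target indices."""
    def __init__(self, pairs=()):
        self.data = defaultdict(set)
        for i, j in pairs:
            self.data[i].add(j)

    def chain(self, other):
        """Compose maps: i -> {k : exists j with i->j in self and j->k in other}."""
        out = SparseMap()
        for i, js in self.data.items():
            for j in js:
                out.data[i] |= other.data.get(j, set())
        return out

    def intersect(self, other):
        """Keep only targets present in both maps for each shared domain index."""
        out = SparseMap()
        for i in self.data.keys() & other.data.keys():
            common = self.data[i] & other.data[i]
            if common:
                out.data[i] = common
        return out

# hypothetical example: atoms -> nearby basis shells, shells -> fitting functions
atom_to_shell = SparseMap([(0, 0), (0, 1), (1, 2)])
shell_to_aux = SparseMap([(0, 10), (1, 11), (2, 12)])
print(dict(atom_to_shell.chain(shell_to_aux).data))   # {0: {10, 11}, 1: {12}}
```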

  8. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason for this is that the variation of poses, facial expressions, and illuminations of the facial image can be rather severe, and the number of available facial images is often smaller than the dimension of the facial image, so a certain linear combination of all the training samples is not able to fully represent the test sample. In this study, we proposed a novel framework to improve representation-based classification (RBC). The framework first ran the sparse representation algorithm and determined the unavoidable deviation between the test sample and the optimal linear combination of all the training samples used to represent it. It then exploited the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients were used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has a solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. The experimental results on a variety of face databases demonstrated that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.

  9. ENSO and annual cycle interaction: the combination mode representation in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Ren, Hong-Li; Zuo, Jinqing; Jin, Fei-Fei; Stuecker, Malte F.

    2016-06-01

    Recent research demonstrated the existence of a combination mode (C-mode) originating from the atmospheric nonlinear interaction between the El Niño-Southern Oscillation (ENSO) and the Pacific warm pool annual cycle. In this paper, we show that the majority of coupled climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5) are able to reproduce the observed spatial pattern of the C-mode in terms of surface wind anomalies reasonably well, and about half of the coupled models are able to reproduce spectral power at the combination tone periodicities of about 10 and/or 15 months. Compared to the CMIP5 historical simulations, the CMIP5 Atmospheric Model Intercomparison Project (AMIP) simulations can generally exhibit a more realistic simulation of the C-mode due to prescribed lower boundary forcing. Overall, the multi-model ensemble average of the CMIP5 models tends to capture the C-mode better than the individual models. Furthermore, the models with better performance in simulating the ENSO mode tend to also exhibit a more realistic C-mode with respect to its spatial pattern and amplitude, in both the CMIP5 historical and AMIP simulations. This study shows that the CMIP5 models are able to simulate the proposed combination mode mechanism to some degree, resulting from their reasonable performance in representing the ENSO mode. It is suggested that the simulation of the main ENSO periods in current climate models needs to be further improved to better capture the C-mode.

  11. Haptic fMRI: combining functional neuroimaging with haptics for studying the brain's motor control representation.

    PubMed

    Menon, Samir; Brantner, Gerald; Aholt, Chris; Kay, Kendrick; Khatib, Oussama

    2013-01-01

    A challenging problem in motor control neuroimaging studies is the inability to perform complex human motor tasks given the Magnetic Resonance Imaging (MRI) scanner's disruptive magnetic fields and confined workspace. In this paper, we propose a novel experimental platform that combines Functional MRI (fMRI) neuroimaging, haptic virtual simulation environments, and an fMRI-compatible haptic device for real-time haptic interaction across the scanner workspace (above torso ∼ .65×.40×.20 m^3). We implement this Haptic fMRI platform with a novel haptic device, the Haptic fMRI Interface (HFI), and demonstrate its suitability for motor neuroimaging studies. HFI has three degrees-of-freedom (DOF), uses electromagnetic motors to enable high-fidelity haptic rendering (>350Hz), integrates radio frequency (RF) shields to prevent electromagnetic interference with fMRI (temporal SNR >100), and is kinematically designed to minimize currents induced by the MRI scanner's magnetic field during motor displacement (<2cm). HFI possesses uniform inertial and force transmission properties across the workspace, and has low friction (.05-.30N). HFI's RF noise levels, in addition, are within a 3 Tesla fMRI scanner's baseline noise variation (∼.85±.1%). Finally, HFI is haptically transparent and does not interfere with human motor tasks (tested for .4m reaches). By allowing fMRI experiments involving complex three-dimensional manipulation with haptic interaction, Haptic fMRI enables, for the first time, non-invasive neuroscience experiments involving interactive motor tasks, object manipulation, tactile perception, and visuo-motor integration. PMID:24110643

  12. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    SciTech Connect

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng

    2014-06-01

    Purpose: The aim of this study was to automatically extract liver structures from daily cone beam CT (CBCT) images. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were regarded as the training dataset for probabilistic atlas and shape prior model construction. Firstly, the probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with the L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy, and the initial liver region was then converted to a surface mesh, which was registered with the shape model in which the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, firstly the manually segmented contours were converted into meshes, and then an arbitrary patient dataset was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with other patient data for iterative construction, removing the bias caused by arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparing them with manually segmented contours. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images. Conclusion: The experiment demonstrated

  13. Symbol Systems and Pictorial Representations

    NASA Astrophysics Data System (ADS)

    Diederich, Joachim; Wright, Susan

    All problem-solvers are subject to the same ultimate constraints -- limitations on space, time, and materials (Minsky, 1985). He introduces two principles: (1) Economics: Every intelligence must develop symbol-systems for representing objects, causes and goals, and (2) Sparseness: Every evolving intelligence will eventually encounter certain very special ideas -- e.g., about arithmetic, causal reasoning, and economics -- because these particular ideas are very much simpler than other ideas with similar uses. An extra-terrestrial intelligence (ETI) would have developed symbol systems to express these ideas and would have the capacity of multi-modal processing. Vakoch (1998) states that ...``ETI may rely significantly on other sensory modalities (than vision). Particularly useful representations would be ones that may be intelligible through more than one sensory modality. For instance, the information used to create a three-dimensional representation of an object might be intelligible to ETI heavily reliant on either visual or tactile sensory processes.'' The cross-modal representations Vakoch (1998) describes and the symbol systems Minsky (1985) proposes are called ``metaphors'' when combined. Metaphors allow for highly efficient communication. Metaphors are compact, condensed ways of expressing an idea: words, sounds, gestures or images are used in novel ways to refer to something they do not literally denote. Due to the importance of Minsky's ``economics'' principle, it is therefore possible that a message heavily relies on metaphors.

  14. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
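
    For readers unfamiliar with sparse distributed memory, the sketch below is a bare-bones binary SDM in numpy: fixed random hard-location addresses, activation of all locations within a Hamming radius, counter updates on write, and a majority vote on read. The sizes and radius are arbitrary choices, not values from the project described above.

```python
import numpy as np

rng = np.random.default_rng(7)
N_BITS, N_LOCS, RADIUS = 256, 2000, 111   # word size, hard locations, Hamming radius

addresses = rng.integers(0, 2, size=(N_LOCS, N_BITS))    # fixed random addresses
counters = np.zeros((N_LOCS, N_BITS), dtype=int)         # data counters

def active(addr):
    """Indices of hard locations within the Hamming radius of the query address."""
    return np.flatnonzero(np.count_nonzero(addresses != addr, axis=1) <= RADIUS)

def write(addr, word):
    counters[active(addr)] += np.where(word == 1, 1, -1)

def read(addr):
    s = counters[active(addr)].sum(axis=0)
    return (s > 0).astype(int)                            # majority vote per bit

word = rng.integers(0, 2, size=N_BITS)
write(word, word)                                         # autoassociative storage
noisy = word.copy()
noisy[rng.choice(N_BITS, 20, replace=False)] ^= 1         # corrupt the retrieval cue
print("bits recovered:", np.count_nonzero(read(noisy) == word), "of", N_BITS)
```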

  15. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  16. Structured Multifrontal Sparse Solver

    2014-05-01

    StruMF is an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner corresponds to a multifrontal variant of sparse LU factorization in which some dense blocks of the factors are approximated with low-rank matrices. It is algebraic in that it only requires the linear system itself, and the approximation threshold that determines the accuracy of individual low-rank approximations. Favourable rank properties are obtained using a block partitioning which is a refinement of the partitioning induced by nested dissection ordering.

  17. Decimal fraction representations are not distinct from natural number representations - evidence from a combined eye-tracking and computational modeling approach.

    PubMed

    Huber, Stefan; Klein, Elise; Willmes, Klaus; Nuerk, Hans-Christoph; Moeller, Korbinian

    2014-01-01

    Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers because two number processing effects (i.e., semantic interference and compatibility effects) differed in their size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. Therefore, we provided an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel). To evaluate this claim, we manipulated the tenth and hundredth digits in a magnitude comparison task with participants' eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ.

  18. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  19. Water storage variations extracted from GRACE data by combination of multi-resolution representation (MRR) and principal component analysis (PCA)

    NASA Astrophysics Data System (ADS)

    Ressler, Gerhard; Eicker, Annette; Lieb, Verena; Schmidt, Michael; Seitz, Florian; Shang, Kun; Shum, Che-Kwan

    2015-04-01

    Regionally changing hydrological conditions and their link to the availability of water for human consumption and agriculture are a challenging topic in the context of global change that is receiving increasing attention. Gravity field changes related to signals of land hydrology have been observed by the Gravity Recovery And Climate Experiment (GRACE) satellite mission over a period of more than 12 years. These changes are being analysed in our studies with respect to changing hydrological conditions, especially as a consequence of extreme weather situations and/or a change of climatic conditions. Typically, variations of the Earth's gravity field are modeled as a series expansion in terms of global spherical harmonics with time-dependent harmonic coefficients. In order to investigate specific structures in the signal, we alternatively apply a wavelet-based multi-resolution technique for the determination of regional spatiotemporal variations of the Earth's gravitational potential, in combination with principal component analysis (PCA) for detailed evaluation of these structures. The multi-resolution representation (MRR), i.e. the composition of a signal from different resolution levels, is a suitable approach for spatial gravity modeling, both because of the inhomogeneous distribution of observation data and because of the inhomogeneous structure of the Earth's gravity field itself. In the MRR the signal is split into detail signals by applying low- and band-pass filters realized e.g. by spherical scaling and wavelet functions. Each detail signal is related to a specific resolution level and covers a certain part of the signal spectrum. Principal component analysis (PCA) reveals specific signal patterns in both the space and time domains, such as trends as well as seasonal and semi-seasonal variations. We apply the above-mentioned combined technique to GRACE L1C residual potential differences that have been

  20. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, the reconstruction error and the sparsity of the hidden units, simultaneously, so as to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features.
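
    The two competing objectives can be written down directly: the reconstruction error of the autoencoder and a sparsity measure of its hidden activations. The NumPy sketch below evaluates both for a single-layer autoencoder with tied weights; the sigmoid activation, the tied-weight assumption and the use of the mean absolute hidden activation as the sparsity measure are illustrative choices, and the multiobjective evolutionary search over the parameters is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectives(X, W, b_enc, b_dec):
    """Return (reconstruction error, sparsity of hidden units) for a
    single-layer autoencoder with tied weights W."""
    H = sigmoid(X @ W + b_enc)            # hidden activations, shape (n, k)
    X_rec = sigmoid(H @ W.T + b_dec)      # reconstruction
    recon_err = np.mean((X - X_rec) ** 2)
    sparsity = np.mean(np.abs(H))         # smaller value = sparser hidden code
    return recon_err, sparsity

# Illustrative evaluation on random data and random weights; a multiobjective
# optimizer would search over (W, b_enc, b_dec) to trade the two values off.
rng = np.random.default_rng(1)
X = rng.random((100, 64))
W = 0.1 * rng.standard_normal((64, 32))
print(objectives(X, W, np.zeros(32), np.zeros(64)))
```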

  1. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  2. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  3. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  4. Accurate combined-hyperbolic-inverse-power-representation of ab initio potential energy surface for the hydroperoxyl radical and dynamics study of O + OH reaction.

    PubMed

    Varandas, A J C

    2013-04-01

    The Combined-Hyperbolic-Inverse-Power-Representation method, which treats evenly both short- and long-range interactions, is used to fit an extensive set of ab initio points for HO2 previously utilized [Xu et al., J. Chem. Phys. 122, 244305 (2005)] to develop a spline interpolant. The novel form is shown to perform accurately when compared with others, while quasiclassical trajectory calculations of the O + OH reaction clearly pinpoint the role of long-range forces at low temperatures. PMID:23574218

  5. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    PubMed Central

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the
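
    Once the class-specific low-rank dictionary and the shared intra-class variant dictionary have been learned, the coding step reduces to a sparse regression of the query over their concatenation. The sketch below uses scikit-learn's Lasso as a generic l1 solver and random matrices in place of the learned dictionaries; it illustrates only this coding stage, not the low-rank and sparse matrix decomposition that produces the dictionaries.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
d = 256                                    # feature dimension
D_class = rng.standard_normal((d, 40))     # stand-in for the low-rank class dictionary
D_var = rng.standard_normal((d, 20))       # stand-in for the intra-class variant dictionary
y = rng.standard_normal(d)                 # query face feature

A = np.hstack([D_class, D_var])            # combined dictionary
# Lasso solves min 1/(2n)*||y - A c||^2 + alpha*||c||_1 over the coefficients c.
coder = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000)
coder.fit(A, y)
c_class = coder.coef_[:D_class.shape[1]]   # coefficients on the class dictionary
c_var = coder.coef_[D_class.shape[1]:]     # coefficients on the variant dictionary
residual = np.linalg.norm(y - A @ coder.coef_)
print(np.count_nonzero(c_class), np.count_nonzero(c_var), residual)
```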

  6. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    PubMed

    Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the

  7. TASMANIAN Sparse Grids Module

    SciTech Connect

    Munster, Drayton; Stoyanov, Miroslav

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command-line interface via text files, and a MATLAB interface via the command-line tool.

  8. Sparse Image Format

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.
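
    The tiling idea is easy to prototype: split the raster into fixed-size tiles and store a full tile only when it contains at least two distinct pixel values. The Python sketch below shows the encode/decode round trip; the actual SIF library is written in C, and the tile size and dictionary-based storage here are illustrative choices, not the on-disk format.

```python
import numpy as np

TILE = 16  # illustrative tile edge length

def encode(img):
    """Store uniform tiles as a single value, non-uniform tiles as full rasters."""
    h, w = img.shape
    tiles = {}
    for i in range(0, h, TILE):
        for j in range(0, w, TILE):
            t = img[i:i + TILE, j:j + TILE]
            tiles[(i, j)] = t.flat[0] if (t == t.flat[0]).all() else t.copy()
    return (h, w), tiles

def decode(shape, tiles):
    img = np.empty(shape, dtype=np.uint8)
    for (i, j), t in tiles.items():
        img[i:i + TILE, j:j + TILE] = t   # scalar or full tile both broadcast
    return img

# Round trip on a mostly uniform image with one small feature.
img = np.zeros((128, 128), dtype=np.uint8)
img[40:50, 60:70] = 255
shape, tiles = encode(img)
assert (decode(shape, tiles) == img).all()
stored_full = sum(isinstance(t, np.ndarray) for t in tiles.values())
print(f"{stored_full} of {len(tiles)} tiles stored in full")
```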

  9. TASMANIAN Sparse Grids Module

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command-line interface via text files, and a MATLAB interface via the command-line tool.

  10. Sparse Image Format

    SciTech Connect

    Eads, Damian Ryan

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.

  11. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning.

    PubMed

    Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong

    2015-01-01

    Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change feature and the difficulty of change decision in utilizing the multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., in what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes changed), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of the classification robust to the registration noise and the multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on sparse change descriptor and robust discriminative dictionary learning. Sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by the superpixel-level cosparse representation with robust discriminative dictionary and the conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique.

  12. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning

    PubMed Central

    Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong

    2015-01-01

    Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change feature and the difficulty of change decision in utilizing the multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., in what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes changed), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of the classification robust to the registration noise and the multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on sparse change descriptor and robust discriminative dictionary learning. Sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by the superpixel-level cosparse representation with robust discriminative dictionary and the conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748

  13. Constructing a Nonnegative Low-Rank and Sparse Graph With Data-Adaptive Features.

    PubMed

    Zhuang, Liansheng; Gao, Shenghua; Tang, Jinhui; Wang, Jingjing; Lin, Zhouchen; Ma, Yi; Yu, Nenghai

    2015-11-01

    This paper aims at constructing a good graph to discover the intrinsic data structures under a semisupervised learning setting. First, we propose to build a nonnegative low-rank and sparse (referred to as NNLRS) graph for the given data representation. In particular, the weights of edges in the graph are obtained by seeking a nonnegative low-rank and sparse reconstruction coefficients matrix that represents each data sample as a linear combination of others. The so-obtained NNLRS-graph captures both the global mixture of subspaces structure (by the low-rankness) and the locally linear structure (by the sparseness) of the data, hence it is both generative and discriminative. Second, as good features are extremely important for constructing a good graph, we propose to learn the data embedding matrix and construct the graph simultaneously within one framework, which is termed NNLRS with embedded features (referred to as NNLRS-EF). Extensive experiments on three publicly available data sets demonstrate that the proposed method outperforms state-of-the-art graph construction methods by a large margin for both semisupervised classification and discriminative analysis, which verifies the effectiveness of our proposed method.

  14. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children.

  15. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering

    PubMed Central

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650

  16. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.

  17. Sparse Coding on Symmetric Positive Definite Manifolds Using Bregman Divergences.

    PubMed

    Harandi, Mehrtash T; Hartley, Richard; Lovell, Brian; Sanderson, Conrad

    2016-06-01

    This paper introduces sparse coding and dictionary learning for symmetric positive definite (SPD) matrices, which are often used in machine learning, computer vision, and related areas. Unlike traditional sparse coding schemes that work in vector spaces, in this paper, we discuss how SPD matrices can be described by sparse combinations of dictionary atoms, where the atoms are also SPD matrices. We propose to seek sparse coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing sparse coding but also to an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification, and texture categorization. PMID:25643414

  18. Adaptive sparse grid expansions of the vibrational Hamiltonian.

    PubMed

    Strobusch, D; Scheurer, Ch

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  19. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  20. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  1. Sparse Hashing Tracking.

    PubMed

    Zhang, Lihe; Lu, Huchuan; Du, Dandan; Liu, Luning

    2016-02-01

    In this paper, we propose a novel tracking framework based on a sparse and discriminative hashing method. Different from previous work, we treat object tracking as an approximate nearest neighbor searching process in a binary space. Using the hash functions, the target templates and the candidates can be projected into the Hamming space, facilitating the distance calculation and tracking efficiency. First, we integrate both the inter-class and intra-class information to train multiple hash functions for better classification, while most classifiers in previous tracking methods usually neglect the inter-class correlation, which may cause inaccuracy. Then, we introduce sparsity into the hash coefficient vectors for dynamic feature selection, which is crucial for selecting discriminative and stable features that adapt to visual variations during the tracking process. Extensive experiments on various challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods.
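
    The central operation, mapping templates and candidates into a binary Hamming space with hash functions and comparing them there, can be sketched with random linear hash functions; the paper learns sparse, discriminative ones, so the random projections and thresholding below are only placeholders.

```python
import numpy as np

def hash_codes(X, W, b):
    """Binary codes from linear hash functions: sign(X @ W + b) mapped to {0, 1}."""
    return (X @ W + b > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(3)
d, n_bits = 128, 64
W = rng.standard_normal((d, n_bits))      # placeholder for learned (sparse) hash functions
b = rng.standard_normal(n_bits)

template = rng.standard_normal(d)
candidates = template + 0.3 * rng.standard_normal((50, d))   # perturbed candidates

code_t = hash_codes(template[None, :], W, b)[0]
codes_c = hash_codes(candidates, W, b)
dists = [hamming(code_t, c) for c in codes_c]
best = int(np.argmin(dists))              # approximate nearest neighbor in Hamming space
print(best, dists[best])
```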

  2. A comparison of methods for representing sparsely sampled random quantities.

    SciTech Connect

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

  3. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has previously been used for some event classification and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.

  4. Accurate combined-hyperbolic-inverse-power-representation of ab initio potential energy surface for the hydroperoxyl radical and dynamics study of O+OH reaction

    NASA Astrophysics Data System (ADS)

    Varandas, A. J. C.

    2013-04-01

    The Combined-Hyperbolic-Inverse-Power-Representation method, which treats evenly both short- and long-range interactions, is used to fit an extensive set of ab initio points for HO2 previously utilized [Xu et al., J. Chem. Phys. 122, 244305 (2005), 10.1063/1.1944290] to develop a spline interpolant. The novel form is shown to perform accurately when compared with others, while quasiclassical trajectory calculations of the O + OH reaction clearly pinpoint the role of long-range forces at low temperatures.

  5. Sparse nonnegative matrix factorization with ℓ0-constraints

    PubMed Central

    Peharz, Robert; Pernkopf, Franz

    2012-01-01

    Although nonnegative matrix factorization (NMF) favors a sparse and part-based representation of nonnegative data, there is no guarantee for this behavior. Several authors proposed NMF methods which enforce sparseness by constraining or penalizing the ℓ1-norm of the factor matrices. On the other hand, little work has been done using a more natural sparseness measure, the ℓ0-pseudo-norm. In this paper, we propose a framework for approximate NMF which constrains the ℓ0-norm of the basis matrix, or the coefficient matrix, respectively. For this purpose, techniques for unconstrained NMF can be easily incorporated, such as multiplicative update rules or the alternating nonnegative least-squares scheme. In experiments we demonstrate the benefits of our methods, which are comparable to, or outperform, existing approaches. PMID:22505792
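
    A crude way to see an ℓ0 constraint in action is to interleave standard multiplicative NMF updates with a hard projection that keeps only the k largest entries in each column of the coefficient matrix. The NumPy sketch below is not the authors' algorithm (their framework incorporates the ℓ0 projection into NNLS and multiplicative schemes more carefully); it is only a compact illustration of the constraint.

```python
import numpy as np

def nmf_l0(V, r, k, n_iter=200, eps=1e-9):
    """Approximate V >= 0 by W @ H with at most k nonzeros per column of H."""
    rng = np.random.default_rng(4)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)       # multiplicative update for H
        # Hard l0 projection: zero all but the k largest entries in each column of H.
        # Note: multiplicative updates keep zeroed entries at zero, so the support
        # freezes after the first projection; acceptable for this illustration.
        idx = np.argsort(H, axis=0)[:-k, :]
        np.put_along_axis(H, idx, 0.0, axis=0)
        W *= (V @ H.T) / (W @ H @ H.T + eps)       # multiplicative update for W
    return W, H

V = np.abs(np.random.default_rng(5).standard_normal((40, 60)))
W, H = nmf_l0(V, r=10, k=3)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V),
      int(np.count_nonzero(H, axis=0).max()))       # at most 3 nonzeros per column
```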

  6. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic $1/\sqrt{N_{\rm sim}}$ rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the $1/\sqrt{N_{\rm sim}}$ limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
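
    As a quick baseline for readers who want to experiment, a standard sparsity-exploiting precision estimator, the graphical lasso, which penalizes the ℓ1-norm of the precision matrix and is not the estimator adopted in this paper, is available in scikit-learn:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulated ensemble drawn from a Gaussian with a sparse (tridiagonal) precision matrix.
rng = np.random.default_rng(6)
p, n_sim = 30, 200
prec_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov_true = np.linalg.inv(prec_true)
X = rng.multivariate_normal(np.zeros(p), cov_true, size=n_sim)

model = GraphicalLasso(alpha=0.05).fit(X)        # l1-penalized precision estimate
err = np.linalg.norm(model.precision_ - prec_true) / np.linalg.norm(prec_true)
sample_prec = np.linalg.inv(np.cov(X, rowvar=False))
err_sample = np.linalg.norm(sample_prec - prec_true) / np.linalg.norm(prec_true)
print(f"graphical lasso rel. error {err:.2f} vs sample precision {err_sample:.2f}")
```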

  7. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  8. Blind source separation by sparse decomposition

    NASA Astrophysics Data System (ADS)

    Zibulevsky, Michael; Pearlmutter, Barak A.

    2000-04-01

    The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property that the sources have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.

  9. Integer sparse distributed memory: analysis and results.

    PubMed

    Snaider, Javier; Franklin, Stan; Strain, Steve; George, E Olusegun

    2013-10-01

    Sparse distributed memory is an auto-associative memory system that stores high dimensional Boolean vectors. Here we present an extension of the original SDM, the Integer SDM that uses modular arithmetic integer vectors rather than binary vectors. This extension preserves many of the desirable properties of the original SDM: auto-associativity, content addressability, distributed storage, and robustness over noisy inputs. In addition, it improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage. We performed several simulations that test the noise robustness property and capacity of the memory. Theoretical analyses of the memory's fidelity and capacity are also presented. PMID:23747569

  10. Scene Classification Based on the Semantic-Feature Fusion Fully Sparse Topic Model for High Spatial Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Qiqi; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Topic modeling has become an increasingly mature method to bridge the semantic gap between low-level features and high-level semantic information. However, with more and more high spatial resolution (HSR) images to deal with, the conventional probabilistic topic model (PTM) usually represents the images with a dense semantic representation. This consumes more time and requires more storage space. In addition, due to the complex spectral and spatial information, a combination of multiple complementary features has proved to be an effective strategy for improving the performance of HSR image scene classification, but how the distinct features are fused to fully describe the challenging HSR images is a critical factor for scene classification. In this paper, a semantic-feature fusion fully sparse topic model (SFF-FSTM) is proposed for HSR imagery scene classification. In SFF-FSTM, three heterogeneous features - the mean and standard deviation based spectral feature, the wavelet based texture feature, and the dense scale-invariant feature transform (SIFT) based structural feature - are effectively fused at the latent semantic level. The combination of the multiple semantic-feature fusion strategy and the sparse based FSTM is able to provide adequate feature representations, and can achieve comparable performance with limited training samples. Experimental results on the UC Merced dataset and the Google dataset of SIRI-WHU demonstrate that the proposed method can improve the performance of scene classification compared with other scene classification methods for HSR imagery.

  11. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Hernandez, Svea

    2012-10-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  12. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Wolfe, Michael

    2011-10-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  13. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Wolfe, Michael

    2010-09-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  14. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Hernandez, Svea

    2013-10-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  15. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Wolfe, Michael

    2009-07-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  16. Sparse Solution of High-Dimensional Model Calibration Inverse Problems under Uncertainty in Prior Structural Connectivity

    NASA Astrophysics Data System (ADS)

    Mohammad khaninezhad, M.; Jafarpour, B.

    2012-12-01

    Data limitation and heterogeneity of the geologic formations introduce significant uncertainty in predicting the related flow and transport processes in these environments. Fluid flow and displacement behavior in subsurface systems is mainly controlled by the structural connectivity models that create preferential flow pathways (or barriers). The connectivity of extreme geologic features strongly constrains the evolution of the related flow and transport processes in subsurface formations. Therefore, characterization of the geologic continuity and facies connectivity is critical for reliable prediction of the flow and transport behavior. The goal of this study is to develop a robust and geologically consistent framework for solving large-scale nonlinear subsurface characterization inverse problems under uncertainty about geologic continuity and structural connectivity. We formulate a novel inverse modeling approach by adopting a sparse reconstruction perspective, which involves two major components: 1) sparse description of hydraulic property distribution under significant uncertainty in structural connectivity and 2) formulation of an effective sparsity-promoting inversion method that is robust against prior model uncertainty. To account for the significant variability in the structural connectivity, we use, as prior, multiple distinct connectivity models. For sparse/compact representation of high-dimensional hydraulic property maps, we investigate two methods. In one approach, we apply the principle component analysis (PCA) to each prior connectivity model individually and combine the resulting leading components from each model to form a diverse geologic dictionary. Alternatively, we combine many realizations of the hydraulic properties from different prior connectivity models and use them to generate a diverse training dataset. We use the training dataset with a sparsifying transform, such as K-SVD, to construct a sparse geologic dictionary that is robust to

  17. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need to be improved in many aspects. On the one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by existing methods are not clear enough. We address these issues as follows. First, we propose a method to extract midfrequency features for dictionary learning. This method brings the benefit of a reduction of the memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, the step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  18. Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways

    PubMed Central

    Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.

    2013-01-01

    Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101

  19. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level one and two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
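
    A serial Python analogue of one of the investigated algorithms, the power method on a matrix stored in CSR format, is sketched below with SciPy; the report's implementation threads the underlying level-1 and level-2 BLAS kernels with OpenMP in C, so this sketch only mirrors the algorithmic structure, not the threading.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def power_method(A, n_iter=200, tol=1e-10):
    """Dominant eigenvalue/eigenvector of a sparse matrix via power iteration."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        y = A @ x                      # sparse matrix-vector product (level-2 kernel)
        lam_new = np.linalg.norm(y)
        x = y / lam_new                # normalization (level-1 kernels: norm, scale)
        if abs(lam_new - lam) < tol * abs(lam_new):
            break
        lam = lam_new
    return lam, x

A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=7)
A = A + A.T                            # symmetric, so the dominant eigenvalue is real
lam, x = power_method(A)
print(lam)
```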

  20. Sparse recovery of the multimodal and dispersive characteristics of Lamb waves.

    PubMed

    Harley, Joel B; Moura, José M F

    2013-05-01

    Guided waves in plates, known as Lamb waves, are characterized by complex, multimodal, and frequency dispersive wave propagation, which distort signals and make their analysis difficult. Estimating these multimodal and dispersive characteristics from experimental data becomes a difficult, underdetermined inverse problem. To accurately and robustly recover these multimodal and dispersive properties, this paper presents a methodology referred to as sparse wavenumber analysis based on sparse recovery methods. By utilizing a general model for Lamb waves, waves propagating in a plate structure, and robust l1 optimization strategies, sparse wavenumber analysis accurately recovers the Lamb wave's frequency-wavenumber representation with a limited number of surface mounted transducers. This is demonstrated with both simulated and experimental data in the presence of multipath reflections. With accurate frequency-wavenumber representations, sparse wavenumber synthesis is then used to accurately remove multipath interference in each measurement and predict the responses between arbitrary points on a plate.

  1. Why Representations?

    ERIC Educational Resources Information Center

    Schultz, James E.; Waters, Michael S.

    2000-01-01

    Discusses representations in the context of solving a system of linear equations. Views representations (concrete, tables, graphs, algebraic, matrices) from perspectives of understanding, technology, generalization, exact versus approximate solution, and learning style. (KHR)

  2. Segmenting hippocampus from infant brains by sparse patch matching with deep-learned features.

    PubMed

    Guo, Yanrong; Wu, Guorong; Commander, Leah A; Szary, Stephanie; Jewells, Valerie; Lin, Weili; Shen, Dinggang

    2014-01-01

    Accurate segmentation of the hippocampus from infant MR brain images is a critical step for investigating early brain development. Unfortunately, the previous tools developed for adult hippocampus segmentation are not suitable for infant brain images acquired during the first year of life, which often have poor tissue contrast and variable structural patterns of early hippocampal development. From our point of view, the main problem is the lack of discriminative and robust feature representations for distinguishing the hippocampus from the surrounding brain structures. Thus, instead of directly using the predefined features popularly used in conventional methods, we propose to learn the latent feature representations of infant MR brain images by unsupervised deep learning. Since deep learning paradigms can learn low-level features and then successfully build up more comprehensive high-level features in a layer-by-layer manner, such hierarchical feature representations can be more competitive for distinguishing the hippocampus from the entire brain image. To this end, we apply a Stacked Auto-Encoder (SAE) to learn the deep feature representations from both T1- and T2-weighted MR images, combining their complementary information, which is important for characterizing different development stages of infant brains after birth. Then, we present a sparse patch matching method for transferring hippocampus labels from multiple atlases to the new infant brain image, by using the deep-learned feature representations to measure inter-patch similarity. Experimental results on 2-week-old to 9-month-old infant brain images show the effectiveness of the proposed method, especially compared to state-of-the-art counterpart methods. PMID:25485393

  3. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus the sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of such ontology technologies. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.

  4. Inverse sparse tracker with a locally weighted distance metric.

    PubMed

    Wang, Dong; Lu, Huchuan; Xiao, Ziyang; Yang, Ming-Hsuan

    2015-09-01

    Sparse representation has recently been extensively studied for visual tracking and generally facilitates more accurate tracking results than classic methods. In this paper, we propose a sparsity-based tracking algorithm that features two components: 1) an inverse sparse representation formulation and 2) a locally weighted distance metric. In the inverse sparse representation formulation, the target template is reconstructed with particles, which enables the tracker to compute the weights of all particles by solving only one l1 optimization problem and thereby provides a quite efficient model. This is in direct contrast to most previous sparse trackers, which entail solving one optimization problem for each particle. However, we notice that this formulation with the normal Euclidean distance metric is sensitive to partial noise such as occlusion and illumination changes. To this end, we design a locally weighted distance metric to replace the Euclidean one. Similar ideas of using local features appear in other works, but they are only supported by popular assumptions, such as that local models handle partial noise better than holistic models, without any solid theoretical analysis. In this paper, we attempt to explain this explicitly from a mathematical view. On that basis, we further propose a method to assign local weights by exploiting the temporal and spatial continuity. In the proposed method, appearance changes caused by partial occlusion and shape deformation are carefully considered, thereby facilitating accurate similarity measurement and model update. The experimental validation is conducted from two aspects: 1) self-validation on key components and 2) comparison with other state-of-the-art algorithms. Results over 15 challenging sequences show that the proposed tracking algorithm performs favorably against existing sparsity-based trackers and other state-of-the-art methods. PMID:25935033
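
    Because the inverse formulation reconstructs the target template from the particle observations, a single l1 problem yields weights for all particles at once. A minimal sketch with scikit-learn's Lasso is given below; the random feature vectors stand in for image patches, and the locally weighted distance metric is not included.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
d, n_particles = 256, 300
true_target = rng.standard_normal(d)
# Candidate particles: a few close to the target, the rest random clutter.
particles = rng.standard_normal((n_particles, d))
particles[:5] = true_target + 0.1 * rng.standard_normal((5, d))

template = true_target + 0.05 * rng.standard_normal(d)   # current target template

# Inverse sparse representation: template ~ particles.T @ c, one l1 problem in total.
coder = Lasso(alpha=0.01, positive=True, fit_intercept=False, max_iter=10000)
coder.fit(particles.T, template)
weights = coder.coef_                     # one weight per particle
print(np.argsort(weights)[-5:])           # particles with the largest weights
```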

  5. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. The overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  6. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are perfectly efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and the competitiveness, with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to offer an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.
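
    Greedy selection over a predefined dictionary can be reproduced with orthogonal matching pursuit. The sketch below uses a plain overcomplete cosine dictionary and scikit-learn's OrthogonalMatchingPursuit as stand-ins for the paper's mixed dictionary and its particular greedy strategy; the piecewise-constant test signal only imitates the sharp discontinuities of a depth-map scan line.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def cosine_dictionary(n, n_atoms):
    """Overcomplete dictionary of sampled cosines with unit-norm columns."""
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_atoms)) / n_atoms)
    return D / np.linalg.norm(D, axis=0)

n = 64
D = cosine_dictionary(n, 2 * n)                    # 2x overcomplete
# Piecewise-constant signal standing in for one scan line of a depth map.
rng = np.random.default_rng(8)
y = np.concatenate([np.full(32, 0.2), np.full(32, 0.8)]) + 0.01 * rng.standard_normal(n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8)
omp.fit(D, y)                                      # greedy atom selection
y_hat = D @ omp.coef_ + omp.intercept_
print(np.count_nonzero(omp.coef_), np.linalg.norm(y - y_hat))
```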

  7. Measuring sparseness in the brain: Comment on Bowers (2009)

    PubMed Central

    Quiroga, R. Quian; Kreiman, G.

    2011-01-01

    Bowers (2009) challenged the common view in favor of distributed representations in psychological modeling and the main arguments given against localist and grandmother cell coding schemes. He revisited the results of several single-cell studies arguing that they do not support distributed representations. We praise the contribution of Bowers for joining evidence from psychological modeling and neurophysiological recordings, but disagree with several of his claims. In this comment we argue that distinctions between distributed, localist and grandmother cell coding can be troublesome with real data. Moreover, these distinctions seem to lie within the same continuum, and we argue that it may be sensible to characterize coding schemes using a sparseness measure. We further argue that there may not be a unique coding scheme implemented in all brain areas and for all possible functions. In particular, current evidence suggests that the brain may use distributed codes in primary sensory areas and sparser and invariant representations in higher areas. PMID:20063978
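
    As a small illustration of characterizing a coding scheme by a sparseness measure, the sketch below computes the Treves-Rolls population sparseness for two synthetic firing-rate vectors; the data are invented for illustration and are not drawn from the recordings discussed in the comment.

        # Sketch only: synthetic firing rates, not data from the cited studies.
        import numpy as np

        def treves_rolls_sparseness(rates):
            """Near 1 for dense/uniform firing, near 0 when few units respond."""
            rates = np.asarray(rates, dtype=float)
            n = rates.size
            return (rates.sum() / n) ** 2 / ((rates ** 2).sum() / n + 1e-12)

        dense_code = np.random.default_rng(1).uniform(5, 10, size=1000)   # most cells fire
        sparse_code = np.zeros(1000)
        sparse_code[:20] = 50.0                                           # ~2% of cells fire

        print("dense  :", round(treves_rolls_sparseness(dense_code), 3))
        print("sparse :", round(treves_rolls_sparseness(sparse_code), 3))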

  8. Simultaneously Sparse and Low-Rank Abundance Matrix Estimation for Hyperspectral Image Unmixing

    NASA Astrophysics Data System (ADS)

    Giampouras, Paris V.; Themelis, Konstantinos E.; Rontogiannis, Athanasios A.; Koutroumbas, Konstantinos D.

    2016-08-01

    In a plethora of applications dealing with inverse problems, e.g. in image processing, social networks, compressive sensing, biological data processing etc., the signal of interest is known to be structured in several ways at the same time. This premise has recently guided the research to the innovative and meaningful idea of imposing multiple constraints on the parameters involved in the problem under study. For instance, when dealing with problems whose parameters form sparse and low-rank matrices, the adoption of suitably combined constraints imposing sparsity and low-rankness is expected to yield substantially enhanced estimation results. In this paper, we address the spectral unmixing problem in hyperspectral images. Specifically, two novel unmixing algorithms are introduced, in an attempt to exploit both spatial correlation and sparse representation of pixels lying in homogeneous regions of hyperspectral images. To this end, a novel convex mixed penalty term is first defined consisting of the sum of the weighted $\ell_1$ and the weighted nuclear norm of the abundance matrix corresponding to a small area of the image determined by a sliding square window. This penalty term is then used to regularize a conventional quadratic cost function and impose simultaneously sparsity and low-rankness on the abundance matrix. The resulting regularized cost function is minimized by a) an incremental proximal sparse and low-rank unmixing algorithm and b) an algorithm based on the alternating direction method of multipliers (ADMM). The effectiveness of the proposed algorithms is illustrated in experiments conducted both on simulated and real data.
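
    The sketch below shows, under stated assumptions, the two proximal operators that a mixed weighted-l1-plus-nuclear-norm penalty of this kind relies on (entrywise soft thresholding and singular value thresholding); it is a building-block illustration, not the authors' incremental proximal or ADMM solver, and the matrix sizes are arbitrary.

        # Sketch only: core proximal steps, arbitrary matrix size and thresholds.
        import numpy as np

        def soft_threshold(X, tau):
            """Proximal operator of tau * ||X||_1 (entrywise)."""
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def singular_value_threshold(X, tau):
            """Proximal operator of tau * ||X||_* : soft-threshold the singular values."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(0)
        A = rng.standard_normal((9, 25))        # e.g. endmembers x pixels in a window
        print("nonzeros before/after l1 prox:",
              np.count_nonzero(A), np.count_nonzero(soft_threshold(A, 0.8)))
        print("rank before/after nuclear prox:",
              np.linalg.matrix_rank(A), np.linalg.matrix_rank(singular_value_threshold(A, 3.0)))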

  9. Sparse field stellar photometry.

    NASA Astrophysics Data System (ADS)

    Reid, N.

    The past few years have seen substantial developments in the capability of high speed measuring machines in the field of automated stellar photometry. In this review, after describing some of the limitations on photometric precision, empirical results are used to demonstrate the sort of accuracies that are possible with the UK Schmidt plate plus COSMOS/APM images-scan combination. The astronomical results obtained to date from these machines are discussed, and some consideration is given to the future role of measuring machines in stellar astronomy.

  10. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of a few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm “Iterative Soft Thresholding” (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
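
    For reference, the sketch below is a plain Iterative Soft Thresholding baseline of the kind the study augments with DG-inspired lateral inhibition; the dictionary, sparsity level, and regularization weight are illustrative assumptions.

        # Sketch only: generic ISTA baseline, not the DG-modified algorithm.
        import numpy as np

        def ist(D, y, lam=0.1, n_iter=500):
            """Minimize 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft thresholding."""
            step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant
            x = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ x - y)
                z = x - step * grad
                x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
            return x

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
        x_true = np.zeros(256)
        x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
        y = D @ x_true + 0.01 * rng.standard_normal(64)

        x_hat = ist(D, y, lam=0.05)
        print("recovered support size:", np.count_nonzero(np.abs(x_hat) > 1e-3))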

  11. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Turek, Javier S.; Elad, Michael; Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. In both cases, MCA is demonstrated to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622

  12. Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; Burkardt, John

    2016-08-04

    This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.

  13. An adaptive hierarchical sensing scheme for sparse signals

    NASA Astrophysics Data System (ADS)

    Schütze, Henry; Barth, Erhardt; Martinetz, Thomas

    2014-02-01

    In this paper, we present Adaptive Hierarchical Sensing (AHS), a novel adaptive hierarchical sensing algorithm for sparse signals. For a given but unknown signal with a sparse representation in an orthogonal basis, the sensing task is to identify its non-zero transform coefficients by performing only a few measurements. A measurement is simply the inner product of the signal and a particular measurement vector. During sensing, AHS partially traverses a binary tree and performs one measurement per visited node. AHS is adaptive in the sense that, after each measurement, a decision is made whether the entire subtree of the current node is further traversed or omitted, depending on the measurement value. In order to acquire an N-dimensional signal that is K-sparse, AHS performs O(K log(N/K)) measurements. With AHS, the signal is easily reconstructed by a basis transform without the need to solve an optimization problem. When sensing full-size images, AHS can compete with a state-of-the-art compressed sensing approach in terms of reconstruction performance versus number of measurements. Additionally, we simulate the sensing of image patches by AHS and investigate the impact of the choice of the sparse coding basis as well as the impact of the tree composition.
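
    The following simplified sketch conveys the adaptive tree-traversal idea (one inner-product measurement per visited node, pruning a subtree when its aggregate measurement is small); the aggregation vectors and threshold rule are deliberate simplifications of the published AHS scheme, and the toy signal uses positive coefficients so that subtree aggregates do not cancel.

        # Sketch only: simplified aggregation/threshold rule, canonical basis.
        import numpy as np

        def ahs(signal, basis, lo, hi, threshold, measurements):
            """Recursively identify significant coefficients of `signal` in `basis`."""
            agg = basis[:, lo:hi].sum(axis=1) / np.sqrt(hi - lo)   # one sensing vector
            m = float(signal @ agg)                                # one measurement
            measurements.append(m)
            if abs(m) < threshold:
                return []                                          # prune whole subtree
            if hi - lo == 1:
                return [lo]                                        # leaf: keep this index
            mid = (lo + hi) // 2
            return (ahs(signal, basis, lo, mid, threshold, measurements)
                    + ahs(signal, basis, mid, hi, threshold, measurements))

        n, basis = 64, np.eye(64)                  # sparse in the canonical basis
        signal = np.zeros(n)
        signal[[3, 17, 42]] = [5.0, 4.0, 6.0]      # positive, so aggregates do not cancel

        meas = []
        support = ahs(signal, basis, 0, n, threshold=0.5, measurements=meas)
        print("found support:", support, "with", len(meas), "measurements (n =", n, ")")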

  14. Combining multiple feature representations and AdaBoost ensemble learning for reducing false-positive detections in computer-aided detection of masses on mammograms.

    PubMed

    Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N; Ro, Yong Man

    2012-01-01

    One of the drawbacks of current Computer-aided Detection (CADe) systems is a high number of false-positive (FP) detections, especially for detecting mass abnormalities. In a typical CADe system, classifier design is one of the key steps for determining FP detection rates. This paper presents an effective classifier ensemble system for tackling the FP reduction problem in CADe. To construct an ensemble of accurate classifiers that disagree with each other as much as possible, we develop a new ensemble construction solution that combines data resampling underpinning AdaBoost learning with the use of different feature representations. In addition, to cope with the limitation of weak classifiers in conventional AdaBoost, our method has an effective mechanism for tuning the level of weakness of base classifiers. Further, for combining the multiple decision outputs of ensemble members, a weighted-sum fusion strategy is used to maximize the complementary effect for correct classification. Comparative experiments have been conducted on a benchmark mammogram dataset. Results show that the proposed classifier ensemble outperforms the best single classifier in terms of reducing FP detections of masses.

  15. Sparse field stellar photometry

    NASA Astrophysics Data System (ADS)

    Reid, N.

    The past few years have seen substantial developments in the capability of high speed measuring machines in the field of automated stellar photometry. However, it is only very recently that these machines have started to make any impact on stellar astronomy, and even now their potential is scarcely being exploited. In this review, after describing some of the limitations on photometric precision, empirical results are used to demonstrate the sort of accuracies that are possible with the UK Schmidt plate plus COSMOS/APM images-scan combination. The astronomical results obtained to date from these machines are discussed, and some consideration is given to the future role of measuring machines in stellar astronomy.

  16. STIS Sparse Field CTE test

    NASA Astrophysics Data System (ADS)

    Goudfrooij, Paul

    1997-07-01

    CTE measurements are made using the "sparse field test", along both the serial and parallel axes. This program needs special commanding to provide {a} off-center MSM positionings of some slits, and {b} the ability to read out with any amplifier {A, B, C, or D}. All exposures are internals.

  17. Efficient particle filtering via sparse kernel density estimation.

    PubMed

    Banerjee, Amit; Burlina, Philippe

    2010-09-01

    Particle filters (PFs) are Bayesian filters capable of modeling nonlinear, non-Gaussian, and nonstationary dynamical systems. Recent research in PFs has investigated ways to appropriately sample from the posterior distribution, maintain multiple hypotheses, and alleviate computational costs while preserving tracking accuracy. To address these issues, a novel utilization of the support vector data description (SVDD) density estimation method within the particle filtering framework is presented. The SVDD density estimate can be integrated into a wide range of PFs to realize several benefits. It yields a sparse representation of the posterior density that reduces the computational complexity of the PF. The proposed approach also provides an analytical expression for the posterior distribution that can be used to identify its modes for maintaining multiple hypotheses and computing the MAP estimate, and to directly sample from the posterior. We present several experiments that demonstrate the advantages of incorporating a sparse kernel density estimate in a particle filter.

  18. Amesos2 Templated Direct Sparse Solver Package

    2011-05-24

    Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

  19. The Use of Lesson Study Combined with Content Representation in the Planning of Physics Lessons During Field Practice to Develop Pedagogical Content Knowledge

    NASA Astrophysics Data System (ADS)

    Juhler, Martin Vogt

    2016-08-01

    Recent research, both internationally and in Norway, has clearly expressed concerns about missing connections between subject-matter knowledge, pedagogical competence and real-life practice in schools. This study addresses this problem within the domain of field practice in teacher education, studying pre-service teachers' planning of a Physics lesson. Two means of intervention were introduced. The first was lesson study, which is a method for planning, carrying out and reflecting on a research lesson in detail with a learner and content-centered focus. This was used in combination with a second means, content representations, which is a systematic tool that connects overall teaching aims with pedagogical prompts. Changes in teaching were assessed through the construct of pedagogical content knowledge (PCK). A deductive coding analysis was carried out for this purpose. Transcripts of pre-service teachers' planning of a Physics lesson were coded into four main PCK categories, which were thereafter divided into 16 PCK sub-categories. The results showed that the intervention affected the pre-service teachers' potential to start developing PCK. First, they focused much more on categories concerning the learners. Second, they focused far more uniformly in all of the four main categories comprising PCK. Consequently, these differences could affect their potential to start developing PCK.

  20. Normalization for Sparse Encoding of Odors by a Wide-Field Interneuron

    PubMed Central

    Papadopoulou, Maria; Cassenaer, Stijn; Nowotny, Thomas; Laurent, Gilles

    2011-01-01

    Summary Sparse coding presents practical advantages for sensory representations and memory storage. In the insect olfactory system, the representation of general odors is dense in the antennal lobes but sparse in the mushroom bodies, only one synapse downstream. In locusts, this transformation relies on the oscillatory structure of antennal lobe output, feed-forward inhibitory circuits, intrinsic properties of mushroom body neurons, and connectivity between antennal lobe and mushroom bodies. Here we show the existence of a normalizing negative feedback loop within the mushroom body to maintain sparse output over a wide range of input conditions. This loop consists of an identifiable “giant” nonspiking inhibitory interneuron with ubiquitous connectivity and graded release properties. PMID:21551062

  1. Topological sparse learning of dynamic form patterns.

    PubMed

    Guthier, T; Willert, V; Eggert, J

    2015-01-01

    Motion is a crucial source of information for a variety of tasks in social interactions. The process of how humans recognize complex articulated movements such as gestures or facial expressions remains largely unclear. There is an ongoing discussion about whether and how explicit low-level motion information, such as optical flow, is involved in the recognition process. Motivated by this discussion, we introduce a computational model that classifies the spatial configuration of gradient and optical flow patterns. The patterns are learned with an unsupervised learning algorithm based on translation-invariant nonnegative sparse coding, called VNMF, that extracts prototypical optical flow patterns shaped, for example, as moving heads or limb parts. A key element of the proposed system is a lateral inhibition term that suppresses activations of competing patterns in the learning process, leading to a low number of dominant and topologically sparse activations. We analyze the classification performance of the gradient and optical flow patterns on three real-world human action recognition data sets and one facial expression recognition data set. The results indicate that the recognition of human actions can be achieved by gradient patterns alone, but adding optical flow patterns increases the classification performance. The combined patterns outperform other biologically inspired models and are competitive with current computer vision approaches. PMID:25248088

  2. A Probabilistic Analysis of Sparse Coded Feature Pooling and Its Application for Image Retrieval

    PubMed Central

    Zhang, Yunchao; Chen, Jing; Huang, Xiujie; Wang, Yongtian

    2015-01-01

    Feature coding and pooling, as a key component of image retrieval, have been widely studied over the past several years. Recently, sparse coding with max pooling has been regarded as the state of the art for image classification. However, there is no comprehensive study concerning the application of sparse coding to image retrieval. In this paper, we first analyze the effects of different sampling strategies for image retrieval, then discuss feature pooling strategies and their effect on image retrieval performance, with a probabilistic explanation in the context of the sparse coding framework, and propose a modified sum-pooling procedure which improves retrieval accuracy significantly. Further, we apply the sparse coding method to aggregate multiple types of features for large-scale image retrieval. Extensive experiments on commonly used evaluation datasets demonstrate that our final compact image representation improves retrieval accuracy significantly. PMID:26132080
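
    As a toy illustration of the coding-and-pooling step discussed above, the sketch below sparse-codes a set of local descriptors against a dictionary and pools the codes by max pooling and by an averaged sum-style pooling; the descriptors and dictionary are random stand-ins, and the procedure is not the paper's exact modified sum pooling.

        # Sketch only: random descriptors and codebook, generic pooling rules.
        import numpy as np
        from sklearn.decomposition import sparse_encode

        rng = np.random.default_rng(0)
        descriptors = rng.standard_normal((200, 128))        # 200 local features
        dictionary = rng.standard_normal((512, 128))         # 512 atoms (codebook)
        dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

        codes = sparse_encode(descriptors, dictionary, algorithm="lasso_lars", alpha=0.15)

        max_pooled = np.abs(codes).max(axis=0)                # one 512-d image signature
        sum_pooled = np.abs(codes).sum(axis=0) / len(codes)   # averaged "sum pooling"
        print(codes.shape, max_pooled.shape, sum_pooled.shape)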

  3. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  4. Symbolic Givens reduction in large sparse least squares problems

    SciTech Connect

    Ostrouchov, G.

    1984-12-01

    Orthogonal Givens factorization is a popular method for solving large sparse least squares problems. In order to exploit sparsity and to use a fixed data structure in Givens reduction, a preliminary symbolic factorization needs to be performed. Some results on row-ordering and structure of rows in a partially reduced matrix are obtained using a graph-theoretic representation. These results provide a basis for a symbolic Givens factorization. Column-ordering is also discussed, and an algorithm for symbolic Givens reduction is developed and tested.
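
    The numeric sketch below recalls the elementary step that such symbolic analysis reasons about: a single Givens plane rotation that zeroes one entry of a row being merged into the factor; the 3x3 matrix is an arbitrary example, and no symbolic (structure-only) machinery is shown.

        # Sketch only: one numeric Givens rotation on an arbitrary small matrix.
        import numpy as np

        def givens(a, b):
            """Return c, s with [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
            r = np.hypot(a, b)
            return (1.0, 0.0) if r == 0 else (a / r, b / r)

        A = np.array([[4.0, 1.0, 2.0],
                      [3.0, 5.0, 1.0],
                      [0.0, 2.0, 6.0]])

        c, s = givens(A[0, 0], A[1, 0])
        G = np.eye(3)
        G[0, 0], G[0, 1], G[1, 0], G[1, 1] = c, s, -s, c
        A_rot = G @ A
        print(np.round(A_rot, 6))    # entry (1, 0) is now zero; rows 0 and 1 have been mixed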

  5. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
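
    A rough Python analogue of the same design idea, assuming SciPy's sparse module as a stand-in for the MATLAB extension: storage proportional to the nonzeros, with common operations working transparently on sparse operands; the sizes and density are arbitrary.

        # Sketch only: arbitrary size/density, SciPy used as an analogue of the design.
        import numpy as np
        from scipy import sparse

        A = sparse.random(1000, 1000, density=0.001, format="csr", random_state=0)
        x = np.ones(1000)

        print("stored nonzeros:", A.nnz, "of", 1000 * 1000, "entries")
        print("matvec result shape:", (A @ x).shape)        # works like a dense matrix
        print("sparse product stays sparse:", sparse.issparse(A @ A))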

  6. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033

  7. Optimal parallel solution of sparse triangular systems

    NASA Technical Reports Server (NTRS)

    Alvarado, Fernando L.; Schreiber, Robert

    1990-01-01

    A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared to parallel forward or backsolve. Applications are to iterative solvers with triangular preconditioners, to structural analysis, or to power systems applications, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse of L with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in the formation of these inverse factors. A method from an earlier reference is shown to solve this problem. This method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The parallelism attainable is illustrated by means of elimination trees and clique trees.

  8. Dictionary learning for stereo image representation.

    PubMed

    Tošić, Ivana; Frossard, Pascal

    2011-04-01

    One of the major challenges in multi-view imaging is the definition of a representation that reveals the intrinsic geometry of the visual information. Sparse image representations with overcomplete geometric dictionaries offer a way to efficiently approximate these images, such that the multi-view geometric structure becomes explicit in the representation. However, the choice of a good dictionary in this case is far from obvious. We propose a new method for learning overcomplete dictionaries that are adapted to the joint representation of stereo images. We first formulate a sparse stereo image model where the multi-view correlation is described by local geometric transforms of dictionary elements (atoms) in two stereo views. A maximum-likelihood (ML) method for learning stereo dictionaries is then proposed, where a multi-view geometry constraint is included in the probabilistic model. The ML objective function is optimized using the expectation-maximization algorithm. We apply the learning algorithm to the case of omnidirectional images, where we learn scales of atoms in a parametric dictionary. The resulting dictionaries provide better performance in the joint representation of stereo omnidirectional images as well as improved multi-view feature matching. We finally discuss and demonstrate the benefits of dictionary learning for distributed scene representation and camera pose estimation.

  9. Sparse principal component analysis in medical shape modeling

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
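
    The sketch below contrasts, on synthetic data, ordinary PCA loadings with sparse loadings obtained by simple thresholding and with scikit-learn's SparsePCA as one available algorithm; it is illustrative only and does not reproduce the specific SPCA algorithm or the medical shape data sets studied in the article.

        # Sketch only: synthetic data, generic thresholding and SparsePCA baselines.
        import numpy as np
        from sklearn.decomposition import PCA, SparsePCA

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 30))
        X[:, :5] += 3.0 * rng.standard_normal((100, 1))      # a localized mode of variation

        pca = PCA(n_components=3).fit(X)
        thresholded = np.where(np.abs(pca.components_) > 0.1, pca.components_, 0.0)

        spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)
        print("nonzero loadings  PCA:", np.count_nonzero(pca.components_),
              " thresholded:", np.count_nonzero(thresholded),
              " SparsePCA:", np.count_nonzero(spca.components_))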

  10. Robust Sparse Blind Source Separation

    NASA Astrophysics Data System (ADS)

    Chenot, Cecile; Bobin, Jerome; Rapin, Jeremy

    2015-11-01

    Blind Source Separation is a widely used technique to analyze multichannel data. In many real-world applications, its results can be significantly hampered by the presence of unknown outliers. In this paper, a novel algorithm coined rGMCA (robust Generalized Morphological Component Analysis) is introduced to retrieve sparse sources in the presence of outliers. It explicitly estimates the sources, the mixing matrix, and the outliers. It also takes advantage of the estimation of the outliers to further implement a weighting scheme, which provides a highly robust separation procedure. Numerical experiments demonstrate the efficiency of rGMCA to estimate the mixing matrix in comparison with standard BSS techniques.

  11. Finite representations of continuum environments

    NASA Astrophysics Data System (ADS)

    Zwolak, Michael

    2008-09-01

    Understanding dissipative and decohering processes is fundamental to the study of quantum systems. An accurate and generic method for investigating these processes is to simulate both the system and environment, which, however, is computationally very demanding. We develop a novel approach to constructing finite representations of the environment based on the influence of different frequency scales on the system's dynamics. As an illustration, we analyze a solvable model of an optical mode decaying into a reservoir. The influence of the environment modes is constant for small frequencies, but drops off rapidly for large frequencies, allowing for a very sparse representation at high frequencies that gives a significant computational speedup in simulating the environment. This approach provides a general framework for simulating open quantum systems.

  12. Drosophila Gene Expression Pattern Annotation Using Sparse Features and Term-Term Interactions

    PubMed Central

    Ji, Shuiwang; Yuan, Lei; Li, Ying-Xin; Zhou, Zhi-Hua; Kumar, Sudhir; Ye, Jieping

    2010-01-01

    The Drosophila gene expression pattern images document the spatial and temporal dynamics of gene expression and they are valuable tools for explicating the gene functions, interaction, and networks during Drosophila embryogenesis. To provide text-based pattern searching, the images in the Berkeley Drosophila Genome Project (BDGP) study are annotated with ontology terms manually by human curators. We present a systematic approach for automating this task, because the number of images needing text descriptions is now rapidly increasing. We consider both improved feature representation and novel learning formulation to boost the annotation performance. For feature representation, we adapt the bag-of-words scheme commonly used in visual recognition problems so that the image group information in the BDGP study is retained. Moreover, images from multiple views can be integrated naturally in this representation. To reduce the quantization error caused by the bag-of-words representation, we propose an improved feature representation scheme based on the sparse learning technique. In the design of learning formulation, we propose a local regularization framework that can incorporate the correlations among terms explicitly. We further show that the resulting optimization problem admits an analytical solution. Experimental results show that the representation based on sparse learning outperforms the bag-of-words representation significantly. Results also show that incorporation of the term-term correlations improves the annotation performance consistently. PMID:21614142

  13. Generation of Rayleigh-wave dispersion images from multichannel seismic data using sparse signal reconstruction

    NASA Astrophysics Data System (ADS)

    Mun, Songchol; Bao, Yuequan; Li, Hui

    2015-11-01

    The accurate estimation of dispersion curves has been a key issue for ensuring high quality in geophysical surface wave exploration. Many studies have been carried out on the generation of a high-resolution dispersion image from array measurements. In this study, the sparse signal representation and reconstruction techniques are employed to obtain the high resolution Rayleigh-wave dispersion image from seismic wave data. First, a sparse representation of the seismic wave data is introduced, in which the signal is assumed to be sparse in terms of wave speed. Then, the sparse signal is reconstructed by optimization using l1-norm regularization, which gives the signal amplitude spectrum as a function of wave speed. A dispersion image in the f-v domain is generated by arranging the sparse spectra for all frequency slices in the frequency range. Finally, to show the efficiency of the proposed approach, the Surfbar-2 field test data, acquired by B. Luke and colleagues at the University of Nevada Las Vegas, are analysed. By comparing the real-field dispersion image with the results from other methods, the high mode-resolving ability of the proposed approach is demonstrated, particularly for a case with strongly coherent modes.
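
    The sketch below illustrates the core inversion for a single frequency slice under stated assumptions: the recorded wavefield is modeled as a sparse superposition of plane waves over a grid of candidate phase velocities, and the amplitude spectrum is obtained by l1-regularized least squares; the geometry, frequency, and regularization weight are illustrative choices, not the Surfbar-2 processing parameters.

        # Sketch only: toy geometry and synthetic modes, one frequency slice.
        import numpy as np
        from sklearn.linear_model import Lasso

        freq = 10.0                                   # Hz, one frequency slice
        offsets = np.arange(24) * 2.0                 # 24 geophones, 2 m spacing
        v_grid = np.linspace(100.0, 600.0, 201)       # candidate phase velocities (m/s)

        # Steering matrix: one column of complex exponentials per candidate velocity.
        A = np.exp(-2j * np.pi * freq * offsets[:, None] / v_grid[None, :])

        true_v = [180.0, 350.0]                       # two propagating modes
        d = sum(np.exp(-2j * np.pi * freq * offsets / v) for v in true_v)

        # Real-valued reformulation so an ordinary Lasso solver applies.
        A_real = np.block([[A.real, -A.imag], [A.imag, A.real]])
        d_real = np.concatenate([d.real, d.imag])

        lasso = Lasso(alpha=0.02, fit_intercept=False, max_iter=20000).fit(A_real, d_real)
        s = lasso.coef_[:len(v_grid)] + 1j * lasso.coef_[len(v_grid):]

        spectrum = np.abs(s)                          # one column of the f-v image
        print("peak velocity in this slice:", v_grid[int(np.argmax(spectrum))], "m/s")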

  14. Cellular-resolution population imaging reveals robust sparse coding in the Drosophila Mushroom Body

    PubMed Central

    Honegger, Kyle S.; Campbell, Robert A. A.; Turner, Glenn C.

    2011-01-01

    Sensory stimuli are represented in the brain by the activity of populations of neurons. In most biological systems, studying population coding is challenging since only a tiny proportion of cells can be recorded simultaneously. Here we used 2-photon imaging to record neural activity in the relatively simple Drosophila mushroom body (MB), an area involved in olfactory learning and memory. Using the highly sensitive calcium indicator, GCaMP3, we simultaneously monitored the activity of >100 MB neurons in vivo (about 5% of the total population). The MB is thought to encode odors in sparse patterns of activity, but the code has yet to be explored either on a population level or with a wide variety of stimuli. We therefore imaged responses to odors chosen to evaluate the robustness of sparse representations. Different odors activated distinct patterns of MB neurons, however we found no evidence for spatial organization of neurons by either response probability or odor tuning within the cell body layer. The degree of sparseness was consistent across a wide range of stimuli, from monomolecular odors to artificial blends and even complex natural smells. Sparseness was mainly invariant across concentrations, largely because of the influence of recent odor experience. Finally, in contrast to sensory processing in other systems, no response features distinguished natural stimuli from monomolecular odors. Our results indicate that the fundamental feature of odor processing in the MB is to create sparse stimulus representations in a format that facilitates arbitrary associations between odor and punishment or reward. PMID:21849538

  15. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which poses a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress has been made in applying sparse representation theory to feature extraction of fault information, the theory still suffers inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, going beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior information that feature information exhibits nonlocal self-similarity by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved by the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.

  16. The Real-Valued Sparse Direction of Arrival (DOA) Estimation Based on the Khatri-Rao Product.

    PubMed

    Chen, Tao; Wu, Huanxin; Zhao, Zhongkai

    2016-01-01

    Estimating the direction of arrival (DOA) of a sparse signal from the array covariance matrix requires complex-valued operations, which leads to a heavy computational burden, and the resulting multiple measurement vectors (MMV) model is difficult to solve. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called the L₁-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed into a real-valued matrix via a unitary transformation so that a real-valued sparse model is achieved. The real-valued sparse model is vectorized to obtain a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the KR product's property. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm. PMID:27187409

  17. The Real-Valued Sparse Direction of Arrival (DOA) Estimation Based on the Khatri-Rao Product

    PubMed Central

    Chen, Tao; Wu, Huanxin; Zhao, Zhongkai

    2016-01-01

    Estimating the direction of arrival (DOA) of a sparse signal from the array covariance matrix requires complex-valued operations, which leads to a heavy computational burden, and the resulting multiple measurement vectors (MMV) model is difficult to solve. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called the L1-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed into a real-valued matrix via a unitary transformation so that a real-valued sparse model is achieved. The real-valued sparse model is vectorized to obtain a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the KR product’s property. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm. PMID:27187409
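
    The sketch below verifies numerically, under stated assumptions, the covariance-domain structure such KR-product methods build on: with uncorrelated sources, the vectorized array covariance equals the Khatri-Rao product of the conjugate steering matrix with itself applied to the source-power vector, plus a noise term; the array geometry and angles are arbitrary, and the cited algorithm's unitary real-valued transform and sparse solver are not reproduced.

        # Sketch only: model check for an assumed half-wavelength ULA, two sources.
        import numpy as np
        from scipy.linalg import khatri_rao

        M = 8                                           # number of sensors
        doas = np.deg2rad([-10.0, 20.0])
        powers = np.array([1.0, 0.5])
        sigma2 = 0.1                                    # noise power

        m = np.arange(M)[:, None]
        A = np.exp(1j * np.pi * m * np.sin(doas)[None, :])          # M x K steering matrix

        # Ideal covariance and its vectorized (SMV) sparse model.
        R = A @ np.diag(powers) @ A.conj().T + sigma2 * np.eye(M)
        y = R.reshape(-1, order="F")                                 # vec(R), column stacking

        D = khatri_rao(A.conj(), A)                                  # "virtual" KR dictionary
        y_model = D @ powers + sigma2 * np.eye(M).reshape(-1, order="F")

        print("KR model matches vec(R):", np.allclose(y, y_model))   # expected: True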

  18. Sparseness of vowel category structure: Evidence from English dialect comparison.

    PubMed

    Scharinger, Mathias; Idsardi, William J

    2014-02-01

    Current models of speech perception tend to emphasize either fine-grained acoustic properties or coarse-grained abstract characteristics of speech sounds. We argue for a particular kind of 'sparse' vowel representations and provide new evidence that these representations account for the successful access of the corresponding categories. In an auditory semantic priming experiment, American English listeners made lexical decisions on targets (e.g. load) preceded by semantically related primes (e.g. pack). Changes of the prime vowel that crossed a vowel-category boundary (e.g. peck) were not treated as a tolerable variation, as assessed by a lack of priming, although the phonetic categories of the two different vowels considerably overlap in American English. Compared to the outcome of the same experiment with New Zealand English listeners, where such prime variations were tolerated, our experiment supports the view that phonological representations are important in guiding the mapping process from the acoustic signal to an abstract mental representation. Our findings are discussed with regard to current models of speech perception and recent findings from brain imaging research.

  19. Representing Representation

    ERIC Educational Resources Information Center

    Kuntz, Aaron M.

    2010-01-01

    What can be known and how to render what we know are perpetual quandaries met by qualitative research, complicated further by the understanding that the everyday discourses influencing our representations are often tacit, unspoken or heard so often that they seem to warrant little reflection. In this article, I offer analytic memos as a means for…

  20. Sparse coding for hyperspectral images using random dictionary and soft thresholding

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Iftekharuddin, Khan; Li, Jiang

    2012-05-01

    Many techniques have recently been developed for classification of hyperspectral images (HSI), including support vector machines (SVMs), neural networks and graph-based methods. To achieve good classification performance, a good feature representation of the HSI is essential. Many feature extraction algorithms have been developed, such as principal component analysis (PCA) and independent component analysis (ICA). Sparse coding has recently shown state-of-the-art performance in many applications, including image classification. In this paper, we present a feature extraction method for HSI data motivated by a recently developed sparse coding based image representation technique. Sparse coding consists of a dictionary learning step and an encoding step. In the learning step, we compared two different methods, L1-penalized sparse coding and random selection, for the dictionary learning. In the encoding step, we utilized a soft threshold activation function to obtain feature representations for the HSI. We applied the proposed algorithm to an HSI dataset collected at the Kennedy Space Center (KSC) and compared our results with those obtained by a recently proposed method, the supervised locally linear embedding weighted k-nearest-neighbor (SLLE-WkNN) classifier. We achieved better performance on this dataset in terms of overall accuracy with a random dictionary. We conclude that this simple feature extraction framework might lead to more efficient HSI classification systems.
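
    The sketch below mimics the encoding step described above: pixel spectra are projected onto a randomly selected dictionary and passed through a soft threshold to produce sparse features; the data, dictionary size, and threshold are placeholders rather than the KSC settings.

        # Sketch only: synthetic (mean-removed) spectra, assumed dictionary size and threshold.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_bands, n_atoms = 500, 176, 64

        spectra = rng.standard_normal((n_pixels, n_bands))                  # stand-in HSI pixels
        dictionary = spectra[rng.choice(n_pixels, n_atoms, replace=False)]  # random selection
        dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

        def soft_threshold_features(X, D, alpha=1.0):
            Z = X @ D.T                                 # correlations with atoms
            return np.sign(Z) * np.maximum(np.abs(Z) - alpha, 0.0)

        features = soft_threshold_features(spectra, dictionary)
        print("feature matrix:", features.shape,
              "fraction of zeros:", float(np.mean(features == 0.0)))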

  1. A novel sparse boosting method for crater detection in the high resolution planetary image

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Yang, Gang; Guo, Lei

    2015-09-01

    Impact craters distributed on the planetary surface are one of the main barriers during the soft landing of planetary probes. In order to accelerate crater detection, in this paper we present a new sparse boosting (SparseBoost) method for automatic detection of sub-kilometer craters. The SparseBoost method integrates an improved sparse kernel density estimator (RSDE-WL1) into the Boost algorithm; the RSDE-WL1 estimator is obtained by introducing a weighted l1 penalty term into the reduced set density estimator. An iterative algorithm is proposed to implement the RSDE-WL1. The SparseBoost algorithm has the advantage of fewer selected features and a simpler representation of the weak classifiers compared with the Boost algorithm. Our SparseBoost-based crater detection method is evaluated on a large and high-resolution image of the Martian surface. Experimental results demonstrate that the proposed method achieves lower computational complexity than other crater detection methods in terms of the number of selected features.

  2. Dimensionality reduction of hyperspectral images based on sparse discriminant manifold embedding

    NASA Astrophysics Data System (ADS)

    Huang, Hong; Luo, Fulin; Liu, Jiamin; Yang, Yaqiong

    2015-08-01

    Sparse manifold clustering and embedding (SMCE) adaptively selects neighbor points from the same manifold and approximately spans a low-dimensional affine subspace, but it does not explicitly give a projection matrix and encounters the out-of-sample problem. To overcome this drawback, we propose a new dimensionality reduction method, called sparse manifold embedding (SME), based on graph embedding and sparse representation for hyperspectral image (HSI). It utilizes the sparse coefficients of affine subspace to construct a similarity graph and preserves this sparse similarity in embedding space. Furthermore, we try to make full use of the prior label information to design a novel supervised learning method termed sparse discriminant manifold embedding (SDME). SDME not only inherits the merits of the sparsity property of affine subspace but also boosts the compactness of intra-manifold, which achieves discriminating features and further improves the classification performance of HSI. Experiments on two real hyperspectral data sets (Indian Pines and PaviaU) show the benefits of the proposed SME and SDME methods.

  3. Parallel sparse and dense information coding streams in the electrosensory midbrain.

    PubMed

    Sproule, Michael K J; Metzen, Michael G; Chacron, Maurice J

    2015-10-21

    Efficient processing of incoming sensory information is critical for an organism's survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing.

  4. Sparse Modeling for Astronomical Data Analysis

    NASA Astrophysics Data System (ADS)

    Ikeda, Shiro; Odaka, Hirokazu; Uemura, Makoto

    2016-03-01

    Multiple methods based on sparse modeling have been proposed for astronomical data analysis. We have proposed a method for Compton camera imaging. The proposed approach is a sparse modeling method, but the derived algorithm is different from LASSO. We explain the problem and how we derived the method.

  5. Statistical method for sparse coding of speech including a linear predictive model

    NASA Astrophysics Data System (ADS)

    Rufiner, Hugo L.; Goddard, John; Rocha, Luis F.; Torres, María E.

    2006-07-01

    Recently, different methods for obtaining sparse representations of a signal using dictionaries of waveforms have been studied. They are often motivated by the way the brain seems to process certain sensory signals. Algorithms have been developed using a specific criterion to choose the waveforms occurring in the representation. The waveforms are chosen from a fixed dictionary and some algorithms also construct them as part of the method. In the case of speech signals, most approaches do not take into consideration the important temporal correlations that are exhibited. It is known that these correlations are well approximated by linear models. Incorporating this a priori knowledge of the signal can facilitate the search for a suitable representation solution and also can help with its interpretation. Lewicki proposed a method to solve the noisy and overcomplete independent component analysis problem. In the present paper we propose a modification of this statistical technique for obtaining a sparse representation using a generative parametric model. The representations obtained with the method proposed here and other techniques are applied to artificial data and real speech signals, and compared using different coding costs and sparsity measures. The results show that the proposed method achieves more efficient representations of these signals compared to the others. A qualitative analysis of these results is also presented, which suggests that the restriction imposed by the parametric model is helpful in discovering meaningful characteristics of the signals.

  6. Compressive sensing of sparse tensors.

    PubMed

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan

    2014-10-01

    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.

  7. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix-by-sparse-vector operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.

  8. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace-iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for the implementations are the CRAY-2S/4-128 and the Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which the dominant singular values and corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
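
    As a modern, hedged analogue of the task described above, the snippet below computes a few dominant singular triplets of a large sparse matrix with SciPy's Lanczos-based svds; the matrix is random, standing in for a real term-document or Jacobian matrix.

        # Sketch only: random sparse matrix as a stand-in for real application data.
        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import svds

        A = sparse.random(20000, 5000, density=5e-4, format="csr", random_state=0)
        U, s, Vt = svds(A, k=6)                       # six largest singular triplets
        print("largest singular values:", np.sort(s)[::-1].round(4))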

  9. A Data Type for Efficient Representation of Other Data Types

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    A self-organizing, monomorphic data type denoted a sequence has been conceived to address certain concerns that arise in programming parallel computers. A sequence in the present sense can be regarded abstractly as a vector, set, bag, queue, or other construct. Heretofore, in programming a parallel computer, it has been necessary for the programmer to state explicitly, at the outset, what parts of the program and the underlying data structures must be represented in parallel form. Not only is this requirement not optimal from the perspective of implementation; it entails an additional requirement that the programmer have intimate understanding of the underlying parallel structure. The present sequence data type overcomes both the implementation and parallel structure obstacles. In so doing, the sequence data type provides unified means by which the programmer can represent a data structure for natural and automatic decomposition to a parallel computing architecture. Sequences exhibit the behavioral and structural characteristics of vectors, but the underlying representations are automatically synthesized from combinations of programmers advice and execution use metrics. Sequences can vary bidirectionally between sparseness and density, making them excellent choices for many kinds of algorithms. The novelty and benefit of this behavior lies in the fact that it can relieve programmers of the details of implementations. The creation of a sequence enables decoupling of a conceptual representation from an implementation. The underlying representation of a sequence is a hybrid of representations composed of vectors, linked lists, connected blocks, and hash tables. The internal structure of a sequence can automatically change from time to time on the basis of how it is being used. Those portions of a sequence where elements have not been added or removed can be as efficient as vectors. As elements are inserted and removed in a given portion, then different methods are

  10. Towards robust and effective shape modeling: sparse shape composition.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2012-01-01

    Organ shape plays an important role in various clinical practices, e.g., diagnosis, surgical planning and treatment evaluation. It is usually derived from low level appearance cues in medical images. However, due to diseases and imaging artifacts, low level appearance cues might be weak or misleading. In this situation, shape priors become critical to infer and refine the shape derived by image appearances. Effective modeling of shape priors is challenging because: (1) shape variation is complex and cannot always be modeled by a parametric probability distribution; (2) a shape instance derived from image appearance cues (input shape) may have gross errors; and (3) local details of the input shape are difficult to preserve if they are not statistically significant in the training data. In this paper we propose a novel Sparse Shape Composition model (SSC) to deal with these three challenges in a unified framework. In our method, a sparse set of shapes in the shape repository is selected and composed together to infer/refine an input shape. The a priori information is thus implicitly incorporated on-the-fly. Our model leverages two sparsity observations of the input shape instance: (1) the input shape can be approximately represented by a sparse linear combination of shapes in the shape repository; (2) parts of the input shape may contain gross errors but such errors are sparse. Our model is formulated as a sparse learning problem. Using L1 norm relaxation, it can be solved by an efficient expectation-maximization (EM) type of framework. Our method is extensively validated on two medical applications, 2D lung localization in X-ray images and 3D liver segmentation in low-dose CT scans. Compared to state-of-the-art methods, our model exhibits better performance in both studies. PMID:21963296
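
    The sketch below illustrates only the first sparsity observation: an input shape, flattened to a vector of landmark coordinates, is approximated by a sparse linear combination of repository shapes via an off-the-shelf L1 solver; the repository is synthetic, and the gross-error term and EM-style solver of the full SSC model are omitted.

        # Sketch only: synthetic shape repository, generic Lasso in place of the SSC solver.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_landmarks, n_shapes = 60, 40
        repository = rng.standard_normal((2 * n_landmarks, n_shapes))   # one shape per column

        # Input shape: a combination of three repository shapes plus small noise.
        w_true = np.zeros(n_shapes)
        w_true[[2, 11, 30]] = [0.6, 0.3, 0.1]
        input_shape = repository @ w_true + 0.01 * rng.standard_normal(2 * n_landmarks)

        lasso = Lasso(alpha=0.01, fit_intercept=False).fit(repository, input_shape)
        refined = repository @ lasso.coef_
        print("shapes selected:", np.flatnonzero(np.abs(lasso.coef_) > 1e-3),
              "residual:", round(float(np.linalg.norm(input_shape - refined)), 4))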

  11. Sparse-view ultrasound diffraction tomography using compressed sensing with nonuniform FFT.

    PubMed

    Hua, Shaoyan; Ding, Mingyue; Yuchi, Ming

    2014-01-01

    Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method for sparse-view UDT based on the compressed sensing framework. Due to the piecewise-uniform characteristics of anatomical structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively solved by conjugate gradient with the nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly recovered with only 16 views. Compared to interpolation and multiband methods, the proposed method provides higher resolution and fewer artifacts for the same number of views. The robustness to noise and the computational complexity are also discussed. PMID:24868241
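
    The abstract does not state the cost function explicitly; a reconstruction problem of the kind described (data fidelity through a nonuniform Fourier sampling operator plus a total-variation penalty, minimized by conjugate gradient) generically takes the form

      \hat{f} = \arg\min_{f} \tfrac{1}{2}\,\lVert \mathcal{F}_{nu} f - g \rVert_2^2 + \lambda\,\mathrm{TV}(f),
      \qquad \mathrm{TV}(f) = \sum_i \sqrt{(\nabla_x f)_i^2 + (\nabla_y f)_i^2 + \varepsilon},

    where g collects the sparse-view measurements, F_nu is the NUFFT-based forward model, and the small constant epsilon keeps the TV term differentiable for gradient-type solvers; the specific weighting used by the authors is not given in the abstract.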

  12. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  13. Improved image registration by sparse patch-based deformation estimation.

    PubMed

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang

    2015-01-15

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potential large anatomical differences across individual images, which limits the registration performance. Fortunately, this issue could be alleviated if a good initial deformation can be provided for the two images under registration, which are often termed as the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images where each dictionary atom consists of the image intensity patch as well as their respective local deformations; (3) a small set of training image patches in the coupled dictionary are selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we
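
    Step (3) can be condensed into a few lines: the subject patch is sparse-coded over the intensity half of the coupled dictionary, and the same coefficients weight the stored deformations. The sketch below uses an off-the-shelf lasso solver; the function and variable names are illustrative assumptions, not the paper's implementation.

      import numpy as np
      from sklearn.linear_model import Lasso

      def predict_initial_deformation(patch, D_intensity, D_deform, alpha=0.01):
          """patch: flattened subject patch at a key point.
          D_intensity: (p, K) training intensity patches (columns = dictionary atoms).
          D_deform:    (d, K) local deformations of the same atoms, in the same order."""
          lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
          lasso.fit(D_intensity, patch)       # sparse coefficients over intensity atoms
          w = lasso.coef_
          return D_deform @ w                 # propagate with the same coefficients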

  14. Representation of discrete Steklov-Poincare operator arising in domain decomposition methods in wavelet basis

    SciTech Connect

    Jemcov, A.; Matovic, M.D.

    1996-12-31

    This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second-order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to applying the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator by subdomain node groups and then block-eliminate the interior (subdomain) unknowns. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem on the subdomain interface.
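
    For reference, with the unknowns ordered into interior (I) and interface (Γ) blocks, the block elimination described above and the resulting interface (Schur complement) system read

      \begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
      \begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix} =
      \begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix},
      \qquad
      S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma},
      \qquad
      S\, u_\Gamma = f_\Gamma - A_{\Gamma I} A_{II}^{-1} f_I,

    and S is the dense matrix to which the wavelet transform, thresholding, and rescaling are applied.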

  15. Efficient nearest neighbors via robust sparse hashing.

    PubMed

    Cherian, Anoop; Sra, Suvrit; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2014-08-01

    This paper presents a new nearest neighbor (NN) retrieval framework: robust sparse hashing (RSH). Our approach is inspired by the success of dictionary learning for sparse coding. Our key idea is to sparse code the data using a learned dictionary, and then to generate hash codes out of these sparse codes for accurate and fast NN retrieval. But, direct application of sparse coding to NN retrieval poses a technical difficulty: when data are noisy or uncertain (which is the case with most real-world data sets), for a query point, an exact match of the hash code generated from the sparse code seldom happens, thereby breaking the NN retrieval. Borrowing ideas from robust optimization theory, we circumvent this difficulty via our novel robust dictionary learning and sparse coding framework called RSH, by learning dictionaries on the robustified counterparts of the perturbed data points. The algorithm is applied to NN retrieval on both simulated and real-world data. Our results demonstrate that RSH holds significant promise for efficient NN retrieval against the state of the art.

  16. Women and political representation.

    PubMed

    Rathod, P B

    1999-01-01

    A remarkable progress in women's participation in politics throughout the world was witnessed in the final decade of the 20th century. According to the Inter-Parliamentary Union report, there were only eight countries with no women in their legislatures in 1998. The number of women ministers at the cabinet level worldwide doubled in a decade, and the number of countries without any women ministers dropped from 93 to 48 during 1987-96. However, this progress is far from satisfactory. Political representation of women, minorities, and other social groups is still inadequate. This may be due to a complex combination of socioeconomic, cultural, and institutional factors. The view that women's political participation increases with social and economic development is supported by data from the Nordic countries, where there are higher proportions of women legislators than in less developed countries. While better levels of socioeconomic development, having a women-friendly political culture, and higher literacy are considered favorable factors for women's increased political representation, adopting one of the proportional representation systems (such as a party-list system, a single transferable vote system, or a mixed proportional system with multi-member constituencies) is the single factor most responsible for the higher representation of women.

  17. Hyperspectral image classification using a spectral-spatial sparse coding model

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Li, Jiang

    2013-10-01

    We present a sparse coding based spectral-spatial classification model for hyperspectral image (HSI) datasets. The proposed method consists of an efficient sparse coding scheme in which the l1/lq regularized multi-class logistic regression technique is utilized to achieve a compact representation of hyperspectral image pixels for land cover classification. We applied the proposed algorithm to an HSI dataset collected at the Kennedy Space Center and compared our algorithm to a recently proposed method, the Gaussian process maximum likelihood (GP-ML) classifier. Experimental results show that the proposed method achieves significantly better performance than the GP-ML classifier when training data are limited, while providing a compact pixel representation that leads to more efficient HSI classification systems.
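
    The objective behind such an l1/lq-regularized multi-class logistic regression is not spelled out in the abstract; a generic form is

      \min_{W} \; -\sum_{i=1}^{n} \log \frac{\exp\!\left(w_{y_i}^{\top} s_i\right)}{\sum_{c=1}^{C} \exp\!\left(w_c^{\top} s_i\right)}
      \;+\; \lambda \sum_{j=1}^{d} \lVert W_{j,:} \rVert_q , \qquad q \ge 1,

    where s_i is the (sparse) feature vector of pixel i, W = [w_1, ..., w_C] stacks the per-class weights, and the row-wise l_q norm couples the classes so that entire features are selected or discarded together.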

  18. Nonlinear model reduction for dynamical systems using sparse sensor locations from learned libraries.

    PubMed

    Sargsyan, Syuzanna; Brunton, Steven L; Kutz, J Nathan

    2015-09-01

    We demonstrate the synthesis of sparse sampling and dimensionality reduction to characterize and model nonlinear dynamical systems over a range of bifurcation parameters. First, we construct modal libraries using the classical proper orthogonal decomposition in order to expose the dominant low-rank coherent structures. Here, libraries of the nonlinear terms are also constructed in order to take advantage of the discrete empirical interpolation method and projection that allows for the approximation of nonlinear terms from a sparse number of grid points. The selected grid points are shown to be effective sensing and measurement locations for characterizing the underlying dynamics, stability, and bifurcations of nonlinear dynamical systems. The use of empirical interpolation points and sparse representation facilitates a family of local reduced-order models for each physical regime, rather than a higher-order global model, which has the benefit of physical interpretability of energy transfer between coherent structures. The method advocated also allows for orders-of-magnitude improvement in computational speed and memory requirements. To illustrate the method, the discrete interpolation points and nonlinear modal libraries are used for sparse representation in order to classify and reconstruct the dynamic bifurcation regimes in the complex Ginzburg-Landau equation. It is also shown that point measurements of the nonlinearity are more effective than linear measurements when sensor noise is present.
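
    The two building blocks named here, POD modes from snapshots and empirical interpolation points for the nonlinear term, can be sketched compactly. The DEIM index selection below follows the standard greedy algorithm; the array names and sizes are illustrative.

      import numpy as np

      def pod_basis(snapshots, r):
          # snapshots: (n_dof, n_snap) matrix of solution snapshots; return r POD modes
          U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
          return U[:, :r]

      def deim_indices(U):
          # U: (n_dof, m) basis of the nonlinear term; greedy choice of m sensing rows
          n, m = U.shape
          p = [int(np.argmax(np.abs(U[:, 0])))]
          for l in range(1, m):
              c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
              residual = U[:, l] - U[:, :l] @ c
              p.append(int(np.argmax(np.abs(residual))))
          return np.array(p)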

  19. Data-driven and calibration-free Lamb wave source localization with sparse sensor arrays.

    PubMed

    Harley, Joel B; Moura, José M F

    2015-08-01

    Most Lamb wave localization techniques require that we know the wave's velocity characteristics; yet, in many practical scenarios, velocity estimates can be challenging to acquire, are unavailable, or are unreliable because of the complexity of Lamb waves. As a result, there is a significant need for new methods that can reduce a system's reliance on a priori velocity information. This paper addresses this challenge through two novel source localization methods designed for sparse sensor arrays in isotropic media. Both methods exploit the fundamental sparse structure of a Lamb wave's frequency-wavenumber representation. The first method uses sparse recovery techniques to extract velocities from calibration data. The second method uses kurtosis and the support earth mover's distance to measure the sparseness of a Lamb wave's approximate frequency-wavenumber representation. These measures are then used to locate acoustic sources with no prior calibration data. We experimentally study each method with a collection of acoustic emission data measured from a 1.22 m by 1.22 m isotropic aluminum plate. We show that both methods can achieve less than 1 cm localization error and have less systematic error than traditional time-of-arrival localization methods. PMID:26276960

  20. Nonlinear model reduction for dynamical systems using sparse sensor locations from learned libraries

    NASA Astrophysics Data System (ADS)

    Sargsyan, Syuzanna; Brunton, Steven L.; Kutz, J. Nathan

    2015-09-01

    We demonstrate the synthesis of sparse sampling and dimensionality reduction to characterize and model nonlinear dynamical systems over a range of bifurcation parameters. First, we construct modal libraries using the classical proper orthogonal decomposition in order to expose the dominant low-rank coherent structures. Here, libraries of the nonlinear terms are also constructed in order to take advantage of the discrete empirical interpolation method and projection that allows for the approximation of nonlinear terms from a sparse number of grid points. The selected grid points are shown to be effective sensing and measurement locations for characterizing the underlying dynamics, stability, and bifurcations of nonlinear dynamical systems. The use of empirical interpolation points and sparse representation facilitates a family of local reduced-order models for each physical regime, rather than a higher-order global model, which has the benefit of physical interpretability of energy transfer between coherent structures. The method advocated also allows for orders-of-magnitude improvement in computational speed and memory requirements. To illustrate the method, the discrete interpolation points and nonlinear modal libraries are used for sparse representation in order to classify and reconstruct the dynamic bifurcation regimes in the complex Ginzburg-Landau equation. It is also shown that point measurements of the nonlinearity are more effective than linear measurements when sensor noise is present.

  1. Sparse High Dimensional Models in Economics.

    PubMed

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2011-09-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
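
    The penalized least-squares estimators reviewed in the paper share the generic form

      \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^{p}} \; \frac{1}{2n} \lVert y - X\beta \rVert_2^2 + \sum_{j=1}^{p} p_{\lambda}\!\left(|\beta_j|\right),

    where the choice p_λ(t) = λt gives the lasso, while folded-concave penalties such as SCAD and MCP yield the nonconvex variants whose properties are discussed; the penalized likelihood case replaces the squared loss with a negative log-likelihood.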

  2. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  3. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impact on SciDAC applications, including fusion simulation (CEMM) and accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have focused on new techniques to overcome the scalability bottlenecks of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly parallel petascale computers.
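
    As a small, generic illustration of what a sparse direct solver provides (factor once, then reuse the factorization for cheap solves), the SciPy snippet below uses its SuperLU-based interface; this is ordinary library usage, not the parallel SuperLU_DIST developments described above.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      # 1D Poisson test matrix (tridiagonal), stored in compressed sparse column form
      n = 1000
      main = 2.0 * np.ones(n)
      off = -1.0 * np.ones(n - 1)
      A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
      b = np.ones(n)

      lu = splu(A)            # sparse LU factorization
      x = lu.solve(b)         # triangular solves reuse the factorization
      print(np.linalg.norm(A @ x - b))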

  4. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard l(0) or l(1) ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.

  5. Social Representations of High School Students about Mathematics Assessment

    ERIC Educational Resources Information Center

    Martínez-Sierra, Gustavo; Valle-Zequeida, María E.; Miranda-Tirado, Marisa; Dolores-Flores, Crisólogo

    2016-01-01

    The perceptions of students about assessment in mathematics classes have been sparsely investigated. In order to fill this gap, this qualitative study aims to identify the social "representations" (understood as the system of values, ideas, and practices about a social object) of high school students regarding "assessment in…

  6. Sparse approximation using M-term pursuit and application in image and video coding.

    PubMed

    Rahmoune, Adel; Vandergheynst, Pierre; Frossard, Pascal

    2012-04-01

    This paper introduces a novel algorithm for sparse approximation in redundant dictionaries called the M-term pursuit (MTP). This algorithm decomposes a signal into a linear combination of atoms that are selected in order to represent the main signal components. The MTP algorithm provides an adaptive representation for signals in any complete dictionary. The basic idea behind the MTP is to partition the dictionary into L quasi-disjoint subdictionaries. A k-term signal approximation is then iteratively computed, where each iteration leads to the selection of M ≤ L atoms based on thresholding. The MTP algorithm is shown to achieve competitive performance with the matching pursuit (MP) algorithm that greedily selects atoms one by one. This is due to efficient partitioning of the dictionary. At the same time, the computational complexity is dramatically reduced compared to MP due to the batch selection of atoms. We finally illustrate the performance of MTP in image and video compression applications, where we show that the suboptimal atom selection of MTP is largely compensated by the reduction in complexity compared with MP.
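
    A bare-bones version of the selection rule described above (one candidate atom per sub-dictionary, kept only if its correlation passes a threshold relative to the best atom of the round) might look as follows; the thresholding rule and coefficient update are simplified assumptions rather than the exact MTP algorithm.

      import numpy as np

      def m_term_pursuit(x, D, blocks, n_iter=10, thr=0.5):
          """x: signal; D: (n, K) dictionary with unit-norm columns;
          blocks: list of index arrays, the quasi-disjoint sub-dictionaries."""
          r = x.astype(float).copy()
          coeffs = np.zeros(D.shape[1])
          for _ in range(n_iter):
              c = D.T @ r
              best = np.max(np.abs(c))
              if best < 1e-12:
                  break
              selected = []
              for idx in blocks:
                  j = idx[np.argmax(np.abs(c[idx]))]
                  if np.abs(c[j]) >= thr * best:
                      selected.append(int(j))
              # batch update: least-squares fit on all atoms chosen this round
              Ds = D[:, selected]
              a, *_ = np.linalg.lstsq(Ds, r, rcond=None)
              coeffs[selected] += a
              r = r - Ds @ a
          return coeffs, r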

  7. Interpretable exemplar-based shape classification using constrained sparse linear models

    NASA Astrophysics Data System (ADS)

    Sigurdsson, Gunnar A.; Yang, Zhen; Tran, Trac D.; Prince, Jerry L.

    2015-03-01

    Many types of diseases manifest themselves as observable changes in the shape of the affected organs. Using shape classification, we can look for signs of disease and discover relationships between diseases. We formulate the problem of shape classification in a holistic framework that utilizes a lossless scalar field representation and a non-parametric classification based on sparse recovery. This framework generalizes over certain classes of unseen shapes while using the full information of the shape, bypassing feature extraction. The output of the method is the class whose combination of exemplars most closely approximates the shape, and furthermore, the algorithm returns the most similar exemplars along with their similarity to the shape, which makes the result simple to interpret. Our results show that the method offers accurate classification between three cerebellar diseases and controls in a database of cerebellar ataxia patients. For reproducible comparison, promising results are presented on publicly available 2D datasets, including the ETH-80 dataset where the method achieves 88.4% classification accuracy.
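
    The decision rule can be condensed to a few lines: sparse-code the query over all exemplars, then report the class whose exemplars reconstruct it best. The sketch below substitutes a plain orthogonal matching pursuit for the paper's constrained sparse model, so it illustrates the idea rather than the exact method.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def classify_shape(x, exemplars, labels, n_nonzero=10):
          """x: vectorized scalar-field representation of the query shape.
          exemplars: (p, N) columns are training shapes; labels: (N,) class ids."""
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
          omp.fit(exemplars, x)
          coef = omp.coef_
          residuals = {}
          for c in np.unique(labels):
              mask = labels == c
              residuals[c] = np.linalg.norm(x - exemplars[:, mask] @ coef[mask])
          return min(residuals, key=residuals.get)   # most similar class by residual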

  8. Sparse Bayesian learning machine for real-time management of reservoir releases

    NASA Astrophysics Data System (ADS)

    Khalil, Abedalrazq; McKee, Mac; Kemblowski, Mariush; Asefa, Tirusew

    2005-11-01

    Water scarcity and uncertainties in forecasting future water availabilities present serious problems for basin-scale water management. These problems create a need for intelligent prediction models that learn and adapt to their environment in order to provide water managers with decision-relevant information related to the operation of river systems. This manuscript presents examples of state-of-the-art techniques for forecasting that combine excellent generalization properties and sparse representation within a Bayesian paradigm. The techniques are demonstrated as decision tools to enhance real-time water management. A relevance vector machine, which is a probabilistic model, has been used in an online fashion to provide confident forecasts given knowledge of some state and exogenous conditions. In practical applications, online algorithms should recognize changes in the input space and account for drift in system behavior. Support vector machines lend themselves particularly well to the detection of drift and hence to the initiation of adaptation in response to a recognized shift in system structure. The resulting model will normally have a structure and parameterization that suits the information content of the available data. The utility and practicality of this proposed approach have been demonstrated with an application in a real case study involving real-time operation of a reservoir in a river basin in southern Utah.

  9. Separation of seismic blended data by sparse inversion over dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhou, Yanhui; Chen, Wenchao; Gao, Jinghuai

    2014-07-01

    Recent development of blended acquisition calls for new procedures to process blended seismic measurements. Presently, deblending and reconstructing unblended data followed by conventional processing is the most practical processing workflow. In this paper, we study seismic deblending by advanced sparse inversion with a learned dictionary. To make our method more effective, hybrid acquisition and time-dithering sequential shooting are introduced so that clean single-shot records can be used to train the dictionary to favor a sparser representation of the data to be recovered. Deblending and dictionary learning with l1-norm based sparsity are combined to construct the corresponding problem with respect to the unknown recovery, dictionary, and coefficient sets. A two-step optimization approach is introduced. In the dictionary learning step, the clean single-shot data are selected as training data to learn the dictionary. For deblending, we fix the dictionary and employ an alternating scheme to update the recovery and coefficients separately. Synthetic and real field data were used to verify the performance of our method. The results can serve as a useful reference for designing high-efficiency, low-cost blended acquisition.

  10. Sparse ice: Geophysical, biological and Indigenous knowledge perspectives on a habitat for ice-associated fauna

    NASA Astrophysics Data System (ADS)

    Lee, O. A.; Eicken, H.; Weyapuk, W., Jr.; Adams, B.; Mohoney, A. R.

    2015-12-01

    The significance of highly dispersed, remnant Arctic sea ice as a platform for marine mammals and indigenous hunters in spring and summer may have increased disproportionately with changes in the ice cover. As dispersed remnant ice becomes more common in the future it will be increasingly important to understand its ecological role for upper trophic levels such as marine mammals and its role for supporting primary productivity of ice-associated algae. Potential sparse ice habitat at sea ice concentrations below 15% is difficult to detect using remote sensing data alone. A combination of high resolution satellite imagery (including Synthetic Aperture Radar), data from the Barrow sea ice radar, and local observations from indigenous sea ice experts was used to detect sparse sea ice in the Alaska Arctic. Traditional knowledge on sea ice use by marine mammals was used to delimit the scales where sparse ice could still be used as habitat for seals and walrus. Potential sparse ice habitat was quantified with respect to overall spatial extent, size of ice floes, and density of floes. Sparse ice persistence offshore did not prevent the occurrence of large coastal walrus haul outs, but the lack of sparse ice and early sea ice retreat coincided with local observations of ringed seal pup mortality. Observations from indigenous hunters will continue to be an important source of information for validating remote sensing detections of sparse ice, and improving understanding of marine mammal adaptations to sea ice change.

  11. Social biases determine spatiotemporal sparseness of ciliate mating heuristics.

    PubMed

    Clark, Kevin B

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present

  12. Multi-frame image super resolution based on sparse coding.

    PubMed

    Kato, Toshiyuki; Hino, Hideitsu; Murata, Noboru

    2015-06-01

    An image super-resolution method from multiple observations of low-resolution images is proposed. The method is based on sub-pixel-accuracy block matching for estimating relative displacements of observed images, and sparse signal representation for estimating the corresponding high-resolution image, where the correspondence between high- and low-resolution images is modeled by a certain degradation process. Relative displacements of small patches of observed low-resolution images are accurately estimated by a computationally efficient block matching method. The matching scores of the block matching are used to select a subset of low-resolution patches for reconstructing a high-resolution patch; that is, an adaptive selection of informative low-resolution images is realized. Experiments on various images show that the proposed method performs comparably to or better than conventional super-resolution methods.

  13. A bit allocation method for sparse source coding.

    PubMed

    Kaaniche, Mounir; Fraysse, Aurélia; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2014-01-01

    In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well-adapted for image data, which have a sparse representation in a wavelet basis. In this paper, we propose new approximations of the entropy and the distortion functions using piecewise affine and exponential forms, respectively. Because of these approximations, bit allocation is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.

  14. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data including data from global spaceborne active and passive sensors, regional ground based weather surveillance radars and local rain-gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains, (e.g., in wavelet space) which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain which allows exploiting the efficient linear estimation methodologies, while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capturing non-Gaussian singular structure of rainfall, reduced uncertainty of estimation and efficiency of computation are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.

  15. One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

    PubMed

    Zou, Hui; Li, Runze

    2008-08-01

    Fan & Li (2001) propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. In this article we propose a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm are established. A distinguished feature of the LLA algorithm is that at each LLA step, the LLA estimator can naturally adopt a sparse representation. Thus we suggest using the one-step LLA estimator from the LLA algorithm as the final estimates. Statistically, we show that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties with good initial estimators. Computationally, the one-step LLA estimation methods dramatically reduce the computational cost in maximizing the nonconcave penalized likelihood. We conduct some Monte Carlo simulation to assess the finite sample performance of the one-step sparse estimation methods. The results are very encouraging.
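
    In the penalized least-squares case, the one-step estimator has a compact description: linearizing the concave penalty around an initial estimate turns the problem into a single weighted-lasso fit,

      \hat{\beta}^{(1)} = \arg\min_{\beta} \; \frac{1}{2} \lVert y - X\beta \rVert_2^2 + n \sum_{j=1}^{p} p'_{\lambda}\!\left(\bigl|\hat{\beta}_j^{(0)}\bigr|\right) |\beta_j| ,

    up to the normalization of the loss, so that coefficients with large initial estimates are barely penalized (the SCAD derivative vanishes for large arguments) while small initial coefficients receive a lasso-like penalty and can be shrunk exactly to zero.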

  16. Framelet-Based Sparse Unmixing of Hyperspectral Images.

    PubMed

    Zhang, Guixu; Xu, Yingying; Fang, Faming

    2016-04-01

    Spectral unmixing aims at estimating the proportions (abundances) of pure spectrums (endmembers) in each mixed pixel of hyperspectral data. Recently, a semi-supervised approach, which takes the spectral library as prior knowledge, has been attracting much attention in unmixing. In this paper, we propose a new semi-supervised unmixing model, termed framelet-based sparse unmixing (FSU), which promotes the abundance sparsity in framelet domain and discriminates the approximation and detail components of hyperspectral data after framelet decomposition. Due to the advantages of the framelet representations, e.g., images have good sparse approximations in framelet domain, and most of the additive noises are included in the detail coefficients, the FSU model has a better antinoise capability, and accordingly leads to more desirable unmixing performance. The existence and uniqueness of the minimizer of the FSU model are then discussed, and the split Bregman algorithm and its convergence property are presented to obtain the minimal solution. Experimental results on both simulated data and real data demonstrate that the FSU model generally performs better than the compared methods. PMID:26849863

  17. Sparse Geologic Dictionaries for Flexible and Low-Rank Subsurface Flow Model Calibration: Field Applications

    NASA Astrophysics Data System (ADS)

    Khaninezhad, M. R. M.; Jafarpour, B.

    2014-12-01

    Inference of spatially distributed reservoir and aquifer properties from scattered and spatially limited data poses a poorly constrained nonlinear inverse problem that can have many solutions. In particular, the uncertainty in the geologic continuity model can remarkably degrade the quality of fluid displacement predictions, hence, the efficiency of resource development plans. For model calibration, instead of estimating aquifer properties for each grid cell in the model, the sparse representation of the aquifer properties is estimated from nonlinear production data. The resulting calibration problem can be solved using recent developments in sparse signal processing, widely known as compressed sensing. This novel formulation leads to a sparse data inversion technique that effectively searches for relevant geologic patterns that can explain the available spatiotemporal data. We recently introduced a new model calibration framework by using sparse geologic dictionaries that are constructed from uncertain prior geologic models. Here, we first demonstrate the effectiveness of the proposed sparse geologic dictionaries for flexible and robust model calibration under prior geologic uncertainty. We illustrate the effectiveness of the proposed approach in using limited nonlinear production data to identify a consistent geologic scenario from a number of candidate scenarios, which is usually a challenging problem in geostatistical reservoir characterization. We then evaluate the feasibility of adopting this framework for field application. In particular, we present subsurface field model calibration applications in which sparse geologic dictionaries are learned from uncertain prior information on large-scale reservoir property descriptions. We consider two large-scale field case studies, the Brugges and the Norne field examples. We discuss the construction of geologic dictionaries for large-scale problems and present reduced-order methods to speed up the computational

  18. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-01

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures including the model most similar to the crystal structure and very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structure similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.

  19. On finding supernodes for sparse matrix computations

    SciTech Connect

    Liu, J.W.H. (Dept. of Computer Science); Ng, E.; Peyton, B.W.

    1990-06-01

    A simple characterization of fundamental supernodes is given in terms of the row subtrees of sparse Cholesky factors in the elimination tree. Using this characterization, we present an efficient algorithm that determines the set of such supernodes in time proportional to the number of nonzeros and equations in the original matrix. Experimental results are included to demonstrate the use of this algorithm in the context of sparse supernodal symbolic factorization. 18 refs., 3 figs., 3 tabs.

  1. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

    Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far from peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A_d and A_s, so that A_d contains all dense blocks of a specified size in the matrix, and A_s contains the remaining entries. This enables the use of dense matrix kernels on the entries of A_d, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which has not previously been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in linear time in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However, we show that our results can be generalized to arbitrarily sized dense blocks, and many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.
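
    For intuition only, a naive greedy pass that collects non-overlapping fully-dense 2x2 blocks is shown below; it is not the 2/3-approximation algorithm analyzed in the paper and carries no approximation guarantee.

      import scipy.sparse as sp

      def greedy_2x2_blocks(A):
          """A: scipy sparse matrix. Returns top-left corners (i, j) of a set of
          non-overlapping 2x2 blocks whose four entries are all nonzero."""
          nz = {(int(i), int(j)) for i, j in zip(*A.nonzero())}
          used = set()
          blocks = []
          for (i, j) in sorted(nz):
              cells = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
              if all(c in nz and c not in used for c in cells):
                  blocks.append((i, j))
                  used.update(cells)
          return blocks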

  2. Sparse extreme learning machine for classification.

    PubMed

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M Brandon

    2014-10-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least squares support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which is solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM. PMID:25222727
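
    For context, the dense "unified ELM" solution that sparse ELM is compared against amounts to a random feature map followed by a regularized least-squares solve, as in the sketch below; the paper's contribution, the SMO-style decomposition of the sparse-ELM quadratic program, is not reproduced here, and all names and sizes are illustrative.

      import numpy as np

      def elm_train_dense(X, T, n_hidden=200, C=1.0, seed=0):
          """Baseline unified-ELM-style training: random hidden layer, then a
          ridge-regularized solve for the output weights beta."""
          rng = np.random.default_rng(seed)
          W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
          b = rng.standard_normal(n_hidden)                 # random biases
          H = np.tanh(X @ W + b)                            # hidden-layer features
          beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta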

  3. A new sparse design method on phased array-based acoustic emission sensor for partial discharge detection

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Cheng, Shuyi; Lü, Fangcheng; Li, Yanqing

    2014-03-01

    The acoustic detecting performance of a partial discharge (PD) ultrasonic sensor array can be improved by increasing the number of array elements. However, it will increase the complexity and cost of the PD detection system. Therefore, a sparse sensor with an optimization design can be chosen to ensure good acoustic performance. In this paper, first, a quantitative method is proposed for evaluating the acoustic performance of a square PD ultrasonic array sensor. Second, a method of sparse design is presented to combine the evaluation method with the chaotic monkey algorithm. Third, an optimal sparse structure of a 3 × 3 square PD ultrasonic array sensor is deduced. It is found that, under different sparseness and sparse structure, the main beam width of the directivity function shows a small variation, while the sidelobe amplitude shows a bigger variation. For a specific sparseness, the acoustic performance under the optimal sparse structure is close to that using a full array. Finally, some simulations based on the above method show that, for certain sparseness, the sensor with the optimal sparse structure exhibits superior positioning accuracy compared to that with a stochastic one. The sensor array structure may be chosen according to the actual requirements for an actual engineering application.

  4. Synaptic and Network Mechanisms of Sparse and Reliable Visual Cortical Activity during Nonclassical Receptive Field Stimulation

    PubMed Central

    Haider, Bilal; Krause, Matthew R.; Duque, Alvaro; Yu, Yuguo; Touryan, Jonathan; Mazer, James A.; McCormick, David A.

    2011-01-01

    During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RSC) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RSC neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses. PMID:20152117

  5. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression that introduces high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  6. Polygenic Modeling with Bayesian Sparse Linear Mixed Models

    PubMed Central

    Zhou, Xiang; Carbonetto, Peter; Stephens, Matthew

    2013-01-01

    Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a “Bayesian sparse linear mixed model” (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html. PMID:23408905
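
    In outline, and up to the exact hyper-parameterization used by the authors, the hybrid model combines a sparse "large effects" term with a polygenic random effect:

      y = X\beta + u + \varepsilon, \qquad
      \beta_j \sim \pi\,\mathcal{N}\!\left(0, \sigma_a^2\right) + (1 - \pi)\,\delta_0, \qquad
      u \sim \mathcal{N}\!\left(0, \sigma_b^2 K\right), \qquad
      \varepsilon \sim \mathcal{N}\!\left(0, \sigma_e^2 I\right),

    where K is a relatedness matrix computed from the genotypes X; letting the sparse component vanish recovers a standard LMM, while dropping the random effect u recovers a sparse variable-selection regression, which is the sense in which both are special cases.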

  7. Fock Representation

    NASA Astrophysics Data System (ADS)

    Strocchi, Franco

    The general lesson from the GNS theorem is that a state on the algebra of observables, namely a set of expectations, defines a realization of the system in terms of a Hilbert space of states H_Ω with a reference vector Ψ_Ω which represents Ω as a cyclic vector (so that all the other vectors of H_Ω can be obtained by applying the observables to Ψ_Ω). In this sense, a state identifies the family of states related to it by observables, that is, equivalently accessible from it by means of physically realizable operations. Thus, one may say that H_Ω describes a closed world, or phase, to which Ω belongs. An interesting physical and mathematical question is how many closed worlds or phases are associated with a quantum system. In mathematical language, this amounts to investigating how many inequivalent (physically acceptable) representations exist of the observable algebra that defines the system.

  8. Sparse SPM: Group Sparse-dictionary learning in SPM framework for resting-state functional connectivity MRI analysis.

    PubMed

    Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul

    2016-01-15

    Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independency assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independency assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. PMID:26524138

  11. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, enabling a more efficient and effective diagnosis process. Diagnosis from low-resolution (LR) and noisy images is usually difficult and inaccurate. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization step yields a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is better than that of other state-of-the-art schemes. PMID:26405902
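
    As a rough illustration of the dictionary-plus-pursuit step described above, the sketch below learns an overcomplete patch dictionary and codes a patch with a greedy pursuit. scikit-learn provides OMP but not ROMP, so OrthogonalMatchingPursuit and MiniBatchDictionaryLearning stand in for the authors' ROMP/K-SVD pipeline; the patch size, number of atoms, and random toy data are arbitrary choices, not values from the paper.

```python
# Rough sketch of dictionary learning plus greedy sparse coding for image patches.
# OMP stands in for ROMP; data and sizes are toy values.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
patches = rng.standard_normal((2000, 64))        # 2000 vectorized 8x8 "patches"
patches -= patches.mean(axis=1, keepdims=True)   # remove the DC component per patch

# Learn an overcomplete dictionary: 128 atoms for 64-dimensional patches.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
D = dico.fit(patches).components_                # rows are atoms, shape (128, 64)

# Code one patch with a fixed number of nonzeros (greedy pursuit).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=6, fit_intercept=False)
y = patches[0]
omp.fit(D.T, y)                                  # columns of D.T are the atoms
coeffs = omp.coef_                               # sparse representation (6 nonzeros)
recon = D.T @ coeffs
print("nonzeros:", np.count_nonzero(coeffs),
      "relative error:", np.linalg.norm(y - recon) / np.linalg.norm(y))
```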

  12. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, so the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the low ratio of computation to memory accesses and the irregular memory access patterns, the performance of sparse matrix kernels is often far from the peak performance of a modern processor. Alternative data structures have been proposed, which split the original matrix A into A_d and A_s, so that A_d contains all dense blocks of a specified size in the matrix, and A_s contains the remaining entries. This enables the use of dense matrix kernels on the entries of A_d, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2x2 blocks that runs in time linear in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks, such as diagonal blocks and cross blocks, and present complexity analyses and approximation algorithms.
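
    To make the block-extraction task concrete, the toy sketch below scans a small random sparse matrix for fully dense 2x2 blocks with a naive first-fit greedy pass. It only illustrates what "non-overlapping dense blocks" means; it is not the paper's 2/3-approximation algorithm, and the matrix size and density are made up.

```python
# Naive first-fit scan for non-overlapping fully dense 2x2 blocks (illustration only).
import numpy as np
from scipy.sparse import random as sparse_random

A = sparse_random(200, 200, density=0.05, format="csr", random_state=0)
mask = A.toarray() != 0                  # a dense mask is fine at this toy size

used = np.zeros_like(mask, dtype=bool)
blocks = []
for i in range(mask.shape[0] - 1):
    for j in range(mask.shape[1] - 1):
        if mask[i:i + 2, j:j + 2].all() and not used[i:i + 2, j:j + 2].any():
            blocks.append((i, j))        # claim the block so later ones cannot overlap it
            used[i:i + 2, j:j + 2] = True

print(f"found {len(blocks)} non-overlapping 2x2 dense blocks")
```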

  13. A Sparse Neural Code for Some Speech Sounds but Not for Others

    PubMed Central

    Scharinger, Mathias; Bendixen, Alexandra; Trujillo-Barreto, Nelson J.; Obleser, Jonas

    2012-01-01

    The precise neural mechanisms underlying speech sound representations are still a matter of debate. Proponents of ‘sparse representations’ assume that on the level of speech sounds, only contrastive or otherwise not predictable information is stored in long-term memory. Here, in a passive oddball paradigm, we challenge the neural foundations of such a ‘sparse’ representation; we use words that differ only in their penultimate consonant (“coronal” [t] vs. “dorsal” [k] place of articulation) and for example distinguish between the German nouns Latz ([lats]; bib) and Lachs ([laks]; salmon). Changes from standard [t] to deviant [k] and vice versa elicited a discernible Mismatch Negativity (MMN) response. Crucially, however, the MMN for the deviant [lats] was stronger than the MMN for the deviant [laks]. Source localization showed this difference to be due to enhanced brain activity in right superior temporal cortex. These findings reflect a difference in phonological ‘sparsity’: Coronal [t] segments, but not dorsal [k] segments, are based on more sparse representations and elicit less specific neural predictions; sensory deviations from this prediction are more readily ‘tolerated’ and accordingly trigger weaker MMNs. The results foster the neurocomputational reality of ‘representationally sparse’ models of speech perception that are compatible with more general predictive mechanisms in auditory perception. PMID:22815876

  14. Integrative analysis of multiple diverse omics datasets by sparse group multitask regression.

    PubMed

    Lin, Dongdong; Zhang, Jigang; Li, Jingyao; He, Hao; Deng, Hong-Wen; Wang, Yu-Ping

    2014-01-01

    A variety of high-throughput genome-wide assays enable the exploration of genetic risk factors underlying complex traits. Although these studies have had a remarkable impact on identifying susceptible biomarkers, they suffer from issues such as limited sample size and low reproducibility. Combining individual studies from different genetic levels/platforms holds promise for improving the power and consistency of biomarker identification. In this paper, we propose a novel integrative method, namely sparse group multitask regression, for integrating diverse omics datasets, platforms, and populations to identify risk genes/factors of complex diseases. This method combines multitask learning with sparse group regularization, which will: (1) treat biomarker identification in each single study as a task and then combine them by multitask learning; (2) group variables from all studies for identifying significant genes; (3) enforce a sparsity constraint on groups of variables to overcome the "small sample, large number of variables" problem. We introduce two sparse group penalties, sparse group lasso and sparse group ridge, in our multitask model and provide an effective algorithm for each model. In addition, we propose a significance test for the identification of potential risk genes. Two simulation studies are performed to evaluate the performance of our integrative method by comparing it with a conventional meta-analysis method. The results show that our sparse group multitask method significantly outperforms the meta-analysis method. In an application to our osteoporosis studies, 7 genes are identified as significant by our method and are found to have significant effects in three other independent studies used for validation. The most significant gene, SOD2, had been identified in our previous osteoporosis study involving the same expression dataset. Several other genes such as TREML2, HTR1E, and GLO1 are shown to be novel susceptible genes for osteoporosis, as confirmed from other
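
    The sparse group penalty mentioned above is usually handled with a proximal operator that soft-thresholds coefficients elementwise (lasso part) and then shrinks each group as a block (group part). The numpy sketch below shows that operator in isolation, with hypothetical group sizes, penalty weights, and input data; it is a building block of such solvers, not the authors' full multitask algorithm.

```python
# Proximal operator of the sparse group lasso penalty (illustrative building block).
import numpy as np

def prox_sparse_group_lasso(beta, groups, lam1, lam2, step=1.0):
    """Proximal map of step * (lam1 * ||b||_1 + lam2 * sum_g ||b_g||_2)."""
    # Elementwise soft-thresholding (lasso part).
    out = np.sign(beta) * np.maximum(np.abs(beta) - step * lam1, 0.0)
    # Blockwise shrinkage (group part).
    for g in groups:
        norm_g = np.linalg.norm(out[g])
        if norm_g > 0.0:
            out[g] *= max(0.0, 1.0 - step * lam2 / norm_g)
    return out

rng = np.random.default_rng(1)
beta = rng.standard_normal(12)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]   # three groups of four variables
print(prox_sparse_group_lasso(beta, groups, lam1=0.3, lam2=0.8))
```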

  15. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that they are not independent of the mesh size. One reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth manner. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.

  16. Semi-implicit Integration Factor Methods on Sparse Grids for High-Dimensional Systems

    PubMed Central

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-01-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method. PMID:25897178

  17. Sparsey™: event recognition via deep hierarchical sparse distributed codes

    PubMed Central

    Rinkus, Gerard J.

    2014-01-01

    The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal

  18. The hierarchical sparse selection model of visual crowding.

    PubMed

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable - destroyed due to over-integration in early stage visual processing - recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding-the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  19. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  20. Multisnapshot Sparse Bayesian Learning for DOA

    NASA Astrophysics Data System (ADS)

    Gerstoft, Peter; Mecklenbrauker, Christoph F.; Xenaki, Angeliki; Nannuru, Santosh

    2016-10-01

    The directions of arrival (DOA) of plane waves are estimated from multi-snapshot sensor array data using sparse Bayesian learning (SBL). The prior on the source amplitudes is assumed to be independent zero-mean complex Gaussian, with the unknown variances (i.e., the source powers) as hyperparameters. For a complex Gaussian likelihood whose hyperparameter is the unknown noise variance, the corresponding Gaussian posterior distribution is derived. For a given number of DOAs, the hyperparameters are automatically selected by maximizing the evidence, which promotes sparse DOA estimates. The SBL scheme for DOA estimation is discussed and evaluated competitively against LASSO ($\ell_1$-regularization), conventional beamforming, and MUSIC.
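
    The numpy sketch below shows the flavor of such an SBL scheme on a uniform linear array: the candidate-direction powers (the gamma hyperparameters) are updated with a standard SBL fixed-point rule driven by the array sample covariance. The grid, the simulated scenario, the assumed-known noise variance, and the particular update form are simplifications of my own, not the exact algorithm in the paper.

```python
# Hedged sketch of multi-snapshot sparse Bayesian learning for DOA estimation.
import numpy as np

M, L = 8, 50                                   # sensors, snapshots
d = 0.5                                        # element spacing in wavelengths
grid = np.deg2rad(np.arange(-90, 91, 2.0))     # candidate DOAs
A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))  # M x K steering matrix

# Simulate two plane waves plus noise (toy scenario).
rng = np.random.default_rng(0)
true_idx = [np.argmin(np.abs(grid - np.deg2rad(t))) for t in (-20.0, 35.0)]
S_amp = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
Y = A[:, true_idx] @ S_amp + 0.1 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))

Sy = Y @ Y.conj().T / L                        # sample covariance
gamma = np.ones(A.shape[1])                    # unknown source powers (hyperparameters)
sigma2 = 0.01                                  # noise variance, assumed known here
for _ in range(200):                           # fixed-point hyperparameter updates
    Sigma = sigma2 * np.eye(M) + (A * gamma) @ A.conj().T
    Sinv = np.linalg.inv(Sigma)
    num = np.real(np.einsum("mk,mn,np,pk->k", A.conj(), Sinv, Sy, Sinv @ A))
    den = np.real(np.einsum("mk,mn,nk->k", A.conj(), Sinv, A))
    gamma *= num / den                         # large gamma survives only on true directions

print("estimated DOAs (deg):", np.rad2deg(grid[np.argsort(gamma)[-2:]]))
```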

  1. Efficient quantum circuits for arbitrary sparse unitaries

    SciTech Connect

    Jordan, Stephen P.; Wocjan, Pawel

    2009-12-15

    Arbitrary exponentially large unitaries cannot be implemented efficiently by quantum circuits. However, we show that quantum circuits can efficiently implement any unitary provided it has at most polynomially many nonzero entries in any row or column, and these entries are efficiently computable. One can formulate a model of computation based on the composition of sparse unitaries which includes the quantum Turing machine model, the quantum circuit model, anyonic models, permutational quantum computation, and discrete time quantum walks as special cases. Thus, we obtain a simple unified proof that these models are all contained in BQP. Furthermore, our general method for implementing sparse unitaries simplifies several existing quantum algorithms.

  2. Sparse Density Estimation on the Multinomial Manifold.

    PubMed

    Hong, Xia; Gao, Junbin; Chen, Sheng; Zia, Tanveer

    2015-11-01

    A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators. PMID:25647665

  3. An application of the novel quantum mechanical/molecular mechanical method combined with the theory of energy representation: An ionic dissociation of a water molecule in the supercritical water.

    PubMed

    Takahashi, Hideaki; Satou, Wataru; Hori, Takumi; Nitta, Tomoshige

    2005-01-22

    A recently developed quantum chemical approach has been applied to the ionic dissociation of a water molecule (2H₂O → H₃O⁺ + OH⁻) in ambient and supercritical water. The method is based on quantum mechanical/molecular mechanical (QM/MM) simulations combined with the theory of energy representation (QM/MM-ER), where the energy distribution function of MM solvent molecules around a QM solute serves as a fundamental variable to determine the hydration free energy of the solute according to the rigorous framework of the theory of energy representation. The density dependence of the dissociation free energy in supercritical water has been investigated for the density range from 0.1 to 0.6 g/cm³ at a fixed temperature. It has been found that the product ionic species is significantly stabilized in the high-density region as compared with the low-density region. Consequently, the dissociation free energy decreases monotonically as the density increases. The decomposition of the hydration free energy reveals that the entropic term (−TΔS) strongly depends on the density of the solution and dominates the behavior of the dissociation free energy with respect to the variation of the density. The increase in the entropic term in the low-density region can be attributed to the decrease in the translational degrees of freedom brought about by the aggregation of solvent water molecules around the ionic solute.

  4. Visual Tracking Based on the Adaptive Color Attention Tuned Sparse Generative Object Model.

    PubMed

    Tian, Chunna; Gao, Xinbo; Wei, Wei; Zheng, Hong

    2015-12-01

    This paper presents a new visual tracking framework based on an adaptive color attention tuned local sparse model. The histograms of sparse coefficients of all patches in an object are pooled together according to their spatial distribution. A particle filter methodology is used as the location model to predict candidates for object verification during tracking. Since color is an important visual clue for distinguishing objects from the background, we calculate the color similarity between objects in the previous frames and the candidates in the current frame, which is adopted as color attention to tune the local sparse representation-based appearance similarity measurement between the object template and candidates. The color similarity can be calculated efficiently with hash coded color names, which helps the tracker find more reliable objects during tracking. We use a flexible local sparse coding of the object to evaluate the degree of degeneration of the appearance model, based on which we build a model updating mechanism to alleviate drift caused by temporally varying factors. Experiments on 76 challenging benchmark color sequences and the evaluation under the object tracking benchmark protocol demonstrate the superiority of the proposed tracker over state-of-the-art methods in accuracy. PMID:26390460

  5. Second SIAM conference on sparse matrices: Abstracts. Final technical report

    SciTech Connect

    1996-12-31

    This report contains abstracts on the following topics: invited and long presentations (IP1 & LP1); sparse matrix reordering & graph theory I; sparse matrix tools & environments I; eigenvalue computations I; iterative methods & acceleration techniques I; applications I; parallel algorithms I; sparse matrix reordering & graph theory II; sparse matrix tools & environments II; least squares & optimization I; iterative methods & acceleration techniques II; applications II; eigenvalue computations II; least squares & optimization II; parallel algorithms II; sparse direct methods; iterative methods & acceleration techniques III; eigenvalue computations III; and sparse matrix reordering & graph theory III.

  6. Sparse imaging of cortical electrical current densities via wavelet transforms

    NASA Astrophysics Data System (ADS)

    Liao, Ke; Zhu, Min; Ding, Lei; Valette, Sébastien; Zhang, Wenbo; Dickens, Deanna

    2012-11-01

    While the cerebral cortex in the human brain is of functional importance, functions defined on this structure are difficult to analyze spatially due to its highly convoluted, irregular geometry. This study developed a novel L1-norm regularization method using a newly proposed multi-resolution face-based wavelet method to estimate cortical electrical activities in electroencephalography (EEG) and magnetoencephalography (MEG) inverse problems. The proposed wavelets were developed from multi-resolution models built on irregular cortical surface meshes, which were also constructed in this study. The multi-resolution wavelet analysis was used to seek a sparse representation of cortical current densities in the transformed domain, which is expected due to the compressibility of wavelets, and was evaluated using Monte Carlo simulations. The EEG/MEG inverse problems were solved with the novel L1-norm regularization method exploiting the sparseness in the wavelet domain. The inverse solutions obtained from the new method using MEG data were likewise evaluated by Monte Carlo simulations. The results indicated that cortical current densities could be efficiently compressed using the proposed face-based wavelet method, which exhibited better performance than the vertex-based wavelet method. In both simulations and auditory experimental data analysis, the proposed L1-norm regularization method showed better source detection accuracy and lower estimation errors than two classic methods, i.e., weighted minimum norm (wMNE) and cortical low-resolution electromagnetic tomography (cLORETA). This study suggests that the L1-norm regularization method with face-based wavelets is a promising tool for studying functional activations of the human brain.
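
    The optimization at the core of such methods can be sketched compactly: estimate sources x from measurements y = G x by penalizing the L1 norm of wavelet coefficients c, with x = Wᵀc for an orthonormal wavelet basis W. In the toy sketch below, a 1-D Haar basis and a random gain matrix stand in for the paper's cortical-surface face-based wavelets and the real EEG/MEG lead field, so it illustrates only the wavelet-domain L1 formulation, not the proposed construction.

```python
# Toy L1-regularized inverse problem with sparsity enforced in a wavelet domain.
import numpy as np
from sklearn.linear_model import Lasso

def haar_matrix(n):
    """Orthonormal Haar analysis matrix for n a power of two (rows are wavelets)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bottom]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
n_sources, n_sensors = 64, 32
W = haar_matrix(n_sources)                      # analysis; W.T is the synthesis operator
G = rng.standard_normal((n_sensors, n_sources)) # stand-in for the lead-field matrix

x_true = np.zeros(n_sources)
x_true[20:28] = 1.0                             # a piecewise-constant "patch" of activity
y = G @ x_true + 0.05 * rng.standard_normal(n_sensors)

# Solve min_c ||y - (G W.T) c||^2 / (2n) + alpha ||c||_1, then map back to source space.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000)
lasso.fit(G @ W.T, y)
x_hat = W.T @ lasso.coef_
print("nonzero wavelet coefficients:", np.count_nonzero(lasso.coef_))
```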

  7. Adaptive sparse polynomial chaos expansion based on least angle regression

    NASA Astrophysics Data System (ADS)

    Blatman, Géraud; Sudret, Bruno

    2011-03-01

    Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive (i.e., of Galerkin type) or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms is eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical "full" PC approximation. The convergence of the algorithm is shown on an analytical function. The method is then illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30-500 random variables, respectively.
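
    A bare-bones version of the idea can be put together with standard tools: build a multivariate Hermite basis for standard-normal inputs and let a LARS-based L1 solver pick the significant coefficients. The sketch below uses scikit-learn's LassoLarsCV as a stand-in for the paper's LAR procedure and omits the hyperbolic truncation and leave-one-out model selection; the toy model, degree, and sample size are arbitrary.

```python
# Hedged sketch of a sparse polynomial chaos expansion selected by a LARS-type solver.
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials
from sklearn.linear_model import LassoLarsCV

def pce_design(X, degree):
    """Multivariate Hermite basis evaluated at samples X (n_samples x n_vars)."""
    n, d = X.shape
    multis = [m for m in itertools.product(range(degree + 1), repeat=d) if sum(m) <= degree]
    cols = []
    for m in multis:
        col = np.ones(n)
        for j, deg in enumerate(m):
            c = np.zeros(deg + 1)
            c[deg] = 1.0
            col *= hermeval(X[:, j], c)           # He_deg(x_j)
        cols.append(col)
    return np.column_stack(cols), multis

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                          # three standard-normal inputs
y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(200)

Phi, multis = pce_design(X, degree=4)
model = LassoLarsCV(fit_intercept=False).fit(Phi, y)       # sparse coefficient selection
kept = [(m, c) for m, c in zip(multis, model.coef_) if abs(c) > 1e-3]
print(f"{len(kept)} active PC terms out of {len(multis)}")
```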

  8. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  9. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  10. Self-Control in Sparsely Coded Networks

    NASA Astrophysics Data System (ADS)

    Dominguez, D. R. C.; Bollé, D.

    1998-03-01

    A complete self-control mechanism is proposed in the dynamics of neural networks through the introduction of a time-dependent threshold, determined in function of both the noise and the pattern activity in the network. Especially for sparsely coded models this mechanism is shown to considerably improve the storage capacity, the basins of attraction, and the mutual information content.

  11. Sparse matrix orderings for factorized inverse preconditioners

    SciTech Connect

    Benzi, M.; Tuama, M.

    1998-09-01

    The effect of reorderings on the performance of factorized sparse approximate inverse preconditioners is considered. It is shown that certain reorderings can be very beneficial both in the preconditioner construction phase and in terms of the rate of convergence of the preconditioned iteration.

  12. STIS Sparse Field CTE test {Cycle 9}

    NASA Astrophysics Data System (ADS)

    Goudfrooij, Paul

    2000-07-01

    CTE measurements are made using the "sparse field test", along both the serial and parallel axes. This program needs special commanding to provide {a} off-center MSM positionings of some slits, and {b} the ability to read out with any amplifier {A, B, C, or D}. All exposures are internals.

  13. STIS Sparse Field CTE test {Cycle 8}

    NASA Astrophysics Data System (ADS)

    Goudfrooij, Paul

    1999-07-01

    CTE measurements are made using the "sparse field test", along both the serial and parallel axes. This program needs special commanding to provide {a} off-center MSM positionings of some slits, and {b} the ability to read out with any amplifier {A, B, C, or D}. All exposures are internals.

  14. A Comparative Study of Sparse Associative Memories

    NASA Astrophysics Data System (ADS)

    Gripon, Vincent; Heusel, Judith; Löwe, Matthias; Vermet, Franck

    2016-07-01

    We study various models of associative memories with sparse information, i.e., a pattern to be stored is a random string of 0s and 1s with only about log N ones. We compare different synaptic weights, architectures, and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.

  15. Hyperspectral anomaly detection using sparse kernel-based ensemble learning

    NASA Astrophysics Data System (ADS)

    Gurram, Prudhvi; Han, Timothy; Kwon, Heesung

    2011-06-01

    In this paper, sparse kernel-based ensemble learning for hyperspectral anomaly detection is proposed. The proposed technique aims to optimize an ensemble of kernel-based one-class classifiers, such as Support Vector Data Description (SVDD) classifiers, by estimating optimal sparse weights. In this method, hyperspectral signatures are first randomly sub-sampled into a large number of spectral feature subspaces. An enclosing hypersphere that defines the support of the spectral data, corresponding to the normalcy/background data, in the Reproducing Kernel Hilbert Space (RKHS) of each respective feature subspace is then estimated using regular SVDD. The enclosing hypersphere essentially represents the spectral characteristics of the background data in the respective feature subspace. The joint hypersphere is learned by optimally combining the hyperspheres from the individual RKHSs while imposing an l1 constraint on the combining weights. The joint hypersphere, representing the most compact support of the local hyperspectral data in the joint feature subspaces, is then used to test each pixel in the hyperspectral image data to determine whether it belongs to the local background data or not. The outliers are considered to be targets. A performance comparison between the proposed technique and the regular SVDD is provided using the HYDICE hyperspectral images.

  16. A Low-Complexity Transceiver Design in Sparse Multipath Massive MIMO Channels

    NASA Astrophysics Data System (ADS)

    Yu, Yuehua; Wang, Peng; Chen, He; Li, Yonghui; Vucetic, Branka

    2016-10-01

    In this letter, we develop a low-complexity transceiver design, referred to as semi-random beam pairing (SRBP), for sparse multipath massive MIMO channels. By exploring a sparse representation of the MIMO channel in the virtual angular domain, we generate a set of transmit-receive beam pairs in a semi-random way to support the simultaneous transmission of multiple data streams. These data streams can be easily separated at the receiver via a successive interference cancelation (SIC) technique, and the power allocation among them is optimized based on the classical waterfilling principle. The achieved degrees of freedom (DoF) and capacity of the proposed approach are analyzed. Simulation results show that, compared to the conventional singular value decomposition (SVD)-based method, the proposed transceiver design can achieve near-optimal DoF and capacity with significantly lower computational complexity.
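
    The waterfilling step referred to above is standard and easy to sketch: given the effective gains of the separated streams and a total power budget, power is poured onto the strongest streams until the budget is spent. The gains, noise level, and budget below are made-up values for illustration.

```python
# Classical waterfilling power allocation across parallel streams.
import numpy as np

def waterfill(gains, total_power, noise=1.0):
    """Maximize sum log(1 + g*p/noise) subject to sum(p) = total_power, p >= 0."""
    inv = noise / np.asarray(gains, dtype=float)   # "ground level" of each stream
    order = np.argsort(inv)
    inv_sorted = inv[order]
    powers = np.zeros_like(inv)
    for k in range(len(inv), 0, -1):               # try using the k best streams
        level = (total_power + inv_sorted[:k].sum()) / k
        if level > inv_sorted[k - 1]:              # all k selected streams get positive power
            powers[order[:k]] = level - inv_sorted[:k]
            break
    return powers

gains = np.array([2.5, 1.2, 0.6, 0.05])            # effective per-stream channel gains (toy)
p = waterfill(gains, total_power=4.0)
print("powers:", np.round(p, 3), "sum:", p.sum())
```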

  17. Hyperspectral Image Kernel Sparse Subspace Clustering with Spatial Max Pooling Operation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongyan; Zhai, Han; Liao, Wenzhi; Cao, Liqin; Zhang, Liangpei; Pižurica, Aleksandra

    2016-06-01

    In this paper, we present a kernel sparse subspace clustering with spatial max pooling operation (KSSC-SMP) algorithm for hyperspectral remote sensing imagery. Firstly, the feature points are mapped from the original space into a higher dimensional space with a kernel strategy. In particular, the sparse subspace clustering (SSC) model is extended to nonlinear manifolds, which can better explore the complex nonlinear structure of hyperspectral images (HSIs) and obtain a much more accurate representation coefficient matrix. Secondly, through the spatial max pooling operation, the spatial contextual information is integrated to obtain a smoother clustering result. Through experiments, it is verified that the KSSC-SMP algorithm is a competitive clustering method for HSIs and outperforms the state-of-the-art clustering methods.
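
    For readers unfamiliar with the base model the paper extends, the sketch below runs plain (linear) sparse subspace clustering on synthetic data: each point is expressed as a sparse combination of the other points, the coefficients define an affinity graph, and spectral clustering segments it. The kernel mapping and spatial max pooling of KSSC-SMP are not included, and the data are synthetic rather than hyperspectral.

```python
# Plain sparse subspace clustering (SSC) on synthetic two-subspace data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Two 2-D subspaces embedded in 10-D, 40 points each.
B1, B2 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))
X = np.hstack([B1 @ rng.standard_normal((2, 40)), B2 @ rng.standard_normal((2, 40))])
X /= np.linalg.norm(X, axis=0)

n = X.shape[1]
C = np.zeros((n, n))
for i in range(n):                            # sparse self-representation, one point at a time
    idx = [j for j in range(n) if j != i]
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=20000)
    lasso.fit(X[:, idx], X[:, i])
    C[idx, i] = lasso.coef_

W = np.abs(C) + np.abs(C).T                   # symmetric affinity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print("cluster sizes:", np.bincount(labels))
```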

  18. Sparse and compositionally robust inference of microbial ecological networks.

    PubMed

    Kurtz, Zachary D; Müller, Christian L; Miraldi, Emily R; Littman, Dan R; Blaser, Martin J; Bonneau, Richard A

    2015-05-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) are required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC
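
    The two ingredients named in the abstract, a compositionality-aware transform and sparse inverse covariance selection, can be sketched with generic tools. Below, a centered log-ratio (CLR) transform is followed by scikit-learn's GraphicalLassoCV, which stands in for SPIEC-EASI's own neighborhood/covariance selection routines; the count table is simulated with one planted association, so this illustrates the idea rather than reproducing the published pipeline.

```python
# CLR transform + sparse inverse covariance (graphical lasso) on a toy OTU table.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
z = 0.3 * rng.standard_normal((150, 30))
z[:, 1] += 0.8 * z[:, 0]                       # two truly associated "taxa"
counts = rng.poisson(np.exp(2.5 + z))          # toy count table: 150 samples x 30 OTUs

def clr(counts, pseudocount=1.0):
    logx = np.log(counts + pseudocount)
    return logx - logx.mean(axis=1, keepdims=True)   # subtract per-sample log geometric mean

Z = clr(counts)
model = GraphicalLassoCV().fit(Z)
prec = model.precision_
edges = (np.abs(prec) > 1e-6) & ~np.eye(prec.shape[0], dtype=bool)
print("inferred edges:", edges.sum() // 2, "| edge (0,1) recovered:", bool(edges[0, 1]))
```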

  19. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

    We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares -based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
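
    The second (flow-free) method is based on moving least squares, which is easy to sketch: each evaluation point gets its own weighted local linear fit to nearby scattered samples, with a weight function whose bandwidth plays the role of the distance definition the authors tune. The Gaussian weight, toy scalar field, and bandwidth below are my own illustrative choices, not the paper's learned parameters.

```python
# Moving least squares reconstruction of a scalar field from scattered samples.
import numpy as np

def mls_reconstruct(query_pts, sample_pts, sample_vals, bandwidth=0.2):
    out = np.empty(len(query_pts))
    for i, q in enumerate(query_pts):
        d2 = np.sum((sample_pts - q) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))                 # Gaussian weights
        A = np.hstack([np.ones((len(sample_pts), 1)), sample_pts - q])  # local linear basis
        WA = A * w[:, None]
        coeff, *_ = np.linalg.lstsq(WA.T @ A, WA.T @ sample_vals, rcond=None)
        out[i] = coeff[0]                                        # local fit evaluated at q
    return out

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(80, 2))                        # sparse scattered "core sites"
values = np.sin(2 * np.pi * samples[:, 0]) * samples[:, 1]       # toy scalar field
queries = rng.uniform(0, 1, size=(5, 2))
print(mls_reconstruct(queries, samples, values))
```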

  20. Sparse and Compositionally Robust Inference of Microbial Ecological Networks

    PubMed Central

    Kurtz, Zachary D.; Müller, Christian L.; Miraldi, Emily R.; Littman, Dan R.; Blaser, Martin J.; Bonneau, Richard A.

    2015-01-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) are required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC

  1. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching.

    PubMed

    Guo, Yanrong; Gao, Yaozong; Shen, Dinggang

    2016-04-01

    Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on a dataset that contains 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance compared with other state-of-the-art segmentation methods. PMID:26685226

  3. An ultra-sparse code underlies the generation of neural sequences in a songbird

    NASA Astrophysics Data System (ADS)

    Hahnloser, Richard H. R.; Kozhevnikov, Alexay A.; Fee, Michale S.

    2002-09-01

    Sequences of motor activity are encoded in many vertebrate brains by complex spatio-temporal patterns of neural activity; however, the neural circuit mechanisms underlying the generation of these pre-motor patterns are poorly understood. In songbirds, one prominent site of pre-motor activity is the forebrain robust nucleus of the archistriatum (RA), which generates stereotyped sequences of spike bursts during song and recapitulates these sequences during sleep. We show that the stereotyped sequences in RA are driven from nucleus HVC (high vocal centre), the principal pre-motor input to RA. Recordings of identified HVC neurons in sleeping and singing birds show that individual HVC neurons projecting onto RA neurons produce bursts sparsely, at a single, precise time during the RA sequence. These HVC neurons burst sequentially with respect to one another. We suggest that at each time in the RA sequence, the ensemble of active RA neurons is driven by a subpopulation of RA-projecting HVC neurons that is active only at that time. As a population, these HVC neurons may form an explicit representation of time in the sequence. Such a sparse representation, a temporal analogue of the `grandmother cell' concept for object recognition, eliminates the problem of temporal interference during sequence generation and learning attributed to more distributed representations.

  4. Dense and Sparse Matrix Operations on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Husbands,Parry; Yelick, Katherine

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.

  5. Functional representation of living and nonliving domains across the cerebral hemispheres: a combined event-related potential/transcranial magnetic stimulation study.

    PubMed

    Fuggetta, Giorgio; Rizzo, Silvia; Pobric, Gorana; Lavidor, Michal; Walsh, Vincent

    2009-02-01

    Transcranial magnetic stimulation (TMS) over the left hemisphere has been shown to disrupt semantic processing but, to date, there has been no direct demonstration of the electrophysiological correlates of this interference. To gain insight into the neural basis of semantic systems, and in particular, study the temporal and functional organization of object categorization processing, we combined repetitive TMS (rTMS) and ERPs. Healthy volunteers performed a picture-word matching task in which Snodgrass drawings of natural (e.g., animal) and artifactual (e.g., tool) categories were associated with a word. When short trains of high-frequency rTMS were applied over Wernicke's area (in the region of the CP5 electrode) immediately before the stimulus onset, we observed delayed response times to artifactual items, and thus, an increased dissociation between natural and artifactual domains. This behavioral effect had a direct ERP correlate. In the response period, the stimuli from the natural domain elicited a significant larger late positivity complex than those from the artifactual domain. These differences were significant over the centro-parietal region of the right hemisphere. These findings demonstrate that rTMS interferes with post-perceptual categorization processing of natural and artifactual stimuli that involve separate subsystems in distinct cortical areas. PMID:18510439

  6. Learning an enriched representation from unlabeled data for protein-protein interaction extraction

    PubMed Central

    2010-01-01

    Background: Extracting protein-protein interactions from biomedical literature is an important task in biomedical text mining. Supervised machine learning methods have been used with great success in this task, but they tend to suffer from data sparseness because they are restricted to obtaining knowledge from a limited amount of labelled data. In this work, we study the use of unlabeled biomedical texts to enhance the performance of supervised learning for this task. We use feature coupling generalization (FCG) – a recently proposed semi-supervised learning strategy – to learn an enriched representation of local contexts in sentences from 47 million unlabeled examples and investigate the performance of the new features on the AIMED corpus. Results: The new features generated by FCG achieve a 60.1 F-score and produce a significant improvement over supervised baselines. The experimental analysis shows that FCG can make good use of the sparse features that have little effect in supervised learning. The new features perform better in non-linear classifiers than in linear ones. We combine the new features with local lexical features, obtaining an F-score of 63.5 on the AIMED corpus, which is comparable with the current state-of-the-art results. We also find that simple Boolean lexical features derived only from local contexts are able to achieve competitive results against most syntactic feature/kernel based methods. Conclusions: FCG creates many opportunities for designing new features, since many sparse features ignored by supervised learning can be put to good use. Interestingly, our results also demonstrate that state-of-the-art performance can be achieved without using any syntactic information in this task. PMID:20406505

  7. Multipath sparse coding for scene classification in very high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Fan, Jiayuan; Tan, Hui Li; Lu, Shijian

    2015-10-01

    With the rapid development of various satellite sensors, automatic and advanced scene classification techniques are urgently needed to process the huge amount of satellite image data. Recently, a few research works have started to employ sparse coding for feature learning in aerial scene classification. However, these previous works use single-layer sparse coding in their systems, and their performance is highly dependent on multiple low-level features, such as the scale-invariant feature transform (SIFT) and saliency. Motivated by the importance of feature learning through multiple layers, we propose a new unsupervised feature learning approach for scene classification on very high resolution satellite imagery. The proposed unsupervised feature learning utilizes a multipath sparse coding architecture in order to capture multiple aspects of discriminative structures within complex satellite scene images. In addition, dense low-level features are extracted from the raw satellite data by using image patches of varying size at different layers, so the approach is not limited to particularly designed feature descriptors, in contrast to the other related works. The proposed technique has been evaluated on two challenging high-resolution datasets, including the UC Merced dataset containing 21 different aerial scene categories with a 1 foot resolution and the Singapore dataset containing 5 land-use categories with a 0.5 m spatial resolution. Experimental results show that it outperforms the state-of-the-art that uses single-layer sparse coding. The major contributions of this proposed technique include (1) a new unsupervised feature learning approach to generate feature representations for very high-resolution satellite imagery, (2) the first multipath sparse coding used for scene classification in very high-resolution satellite imagery, (3) a simple low-level feature descriptor instead of many particularly designed low-level descriptor

  8. Causal Network Inference Via Group Sparse Regularization.

    PubMed

    Bolstad, Andrew; Van Veen, Barry D; Nowak, Robert

    2011-06-11

    This paper addresses the problem of inferring sparse causal networks modeled by multivariate autoregressive (MAR) processes. Conditions are derived under which the Group Lasso (gLasso) procedure consistently estimates sparse network structure. The key condition involves a "false connection score" ψ. In particular, we show that consistent recovery is possible even when the number of observations of the network is far less than the number of parameters describing the network, provided that ψ < 1. The false connection score is also demonstrated to be a useful metric of recovery in nonasymptotic regimes. The conditions suggest a modified gLasso procedure which tends to improve the false connection score and reduce the chances of reversing the direction of causal influence. Computational experiments and a real network based electrocorticogram (ECoG) simulation study demonstrate the effectiveness of the approach.

  9. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
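
    A compact toy implementation of the underlying SDM model helps make the abstract concrete: hard locations with random addresses, writes that add bipolar data to every location within a Hamming radius, and reads that sum and threshold. Dimensions, radius, and the single stored pattern below are arbitrary, and the statistical-predictor reinterpretation and genetic-algorithm enhancement discussed in the report are not shown.

```python
# Toy Kanerva sparse distributed memory used as an autoassociative memory.
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 2000                      # word length, number of hard locations
radius = 112                          # Hamming activation radius
addresses = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N), dtype=np.int32)

def activate(addr):
    # Locations whose address is within the Hamming radius of addr.
    return np.count_nonzero(addresses != addr, axis=1) <= radius

def write(addr, data):
    counters[activate(addr)] += 2 * data - 1          # store data as +/-1 increments

def read(addr):
    s = counters[activate(addr)].sum(axis=0)
    return (s > 0).astype(int)                        # majority vote per bit

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                               # autoassociative storage
noisy = pattern.copy()
flip = rng.choice(N, size=20, replace=False)
noisy[flip] ^= 1                                      # corrupt 20 bits of the cue
print("bits recovered:", np.count_nonzero(read(noisy) == pattern), "of", N)
```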

  10. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods for solving these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, ways of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
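
    A present-day counterpart to the projection methods discussed above is SciPy's ARPACK wrapper, a Lanczos-type Krylov solver for symmetric matrices that returns a few eigenpairs of a large sparse matrix using only matrix-vector products. The 2-D Laplacian below is just a convenient sparse test matrix, not anything from the report.

```python
# A few extreme eigenpairs of a large sparse symmetric matrix via ARPACK (Lanczos-type).
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

n = 200
T = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))
A = kron(identity(n), T) + kron(T, identity(n))   # 40000 x 40000 sparse 2-D Laplacian

# Shift-invert around 0 targets the smallest eigenvalues of this SPD matrix.
vals, vecs = eigsh(A.tocsc(), k=6, sigma=0.0)
print(np.sort(vals))
```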

  11. Sparse brain network using penalized linear regression

    NASA Astrophysics Data System (ADS)

    Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.

    2011-03-01

    Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n, large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of autism spectrum disorder (ASD) children and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
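
    The core computation amounts to neighborhood selection: regress each ROI on all the others with an l1 penalty and read a sparse set of partial-correlation-style connections off the nonzero coefficients. The sketch below does exactly that on synthetic data with two planted connections; the FDG-PET ROI values, the penalty level, and the symmetrization rule are illustrative stand-ins, and the leave-one-out reproducibility check is omitted.

```python
# Sparse partial-correlation-style network via per-node Lasso (neighborhood selection).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_subjects, n_rois = 60, 15
X = rng.standard_normal((n_subjects, n_rois))
X[:, 1] += 0.8 * X[:, 0]                      # plant two true connections
X[:, 4] += 0.6 * X[:, 2]
X = (X - X.mean(0)) / X.std(0)                # standardize each ROI

adj = np.zeros((n_rois, n_rois), dtype=bool)
for i in range(n_rois):
    others = [j for j in range(n_rois) if j != i]
    coef = Lasso(alpha=0.1, fit_intercept=False).fit(X[:, others], X[:, i]).coef_
    adj[i, others] = np.abs(coef) > 1e-8

adj = adj & adj.T                             # keep edges supported in both regressions
print("edges:", [tuple(e) for e in np.argwhere(np.triu(adj))])
```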

  12. Causal Network Inference Via Group Sparse Regularization

    PubMed Central

    Bolstad, Andrew; Van Veen, Barry D.; Nowak, Robert

    2011-01-01

    This paper addresses the problem of inferring sparse causal networks modeled by multivariate autoregressive (MAR) processes. Conditions are derived under which the Group Lasso (gLasso) procedure consistently estimates sparse network structure. The key condition involves a “false connection score” ψ. In particular, we show that consistent recovery is possible even when the number of observations of the network is far less than the number of parameters describing the network, provided that ψ < 1. The false connection score is also demonstrated to be a useful metric of recovery in nonasymptotic regimes. The conditions suggest a modified gLasso procedure which tends to improve the false connection score and reduce the chances of reversing the direction of causal influence. Computational experiments and a real network based electrocorticogram (ECoG) simulation study demonstrate the effectiveness of the approach. PMID:21918591
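
    The group structure that gLasso exploits can be sketched for a single target node: the p lagged samples of each candidate source form one group, and a group-l2 penalty zeroes whole candidate connections at once. The proximal-gradient solver, simulated MAR process, and penalty value below are illustrative assumptions, not the modified gLasso procedure of the paper.

      # Sketch: group-sparse selection of causal sources for one target node of a
      # simulated MAR(3) process; group soft-thresholding zeroes whole connections.
      import numpy as np

      rng = np.random.default_rng(1)
      T, n_nodes, p = 300, 6, 3                  # samples, nodes, model order

      X = rng.standard_normal((T, n_nodes))
      for t in range(p, T):                      # node 0 is driven by nodes 1 and 2 only
          X[t, 0] = 0.4 * X[t - 1, 1] - 0.3 * X[t - 2, 2] + 0.2 * rng.standard_normal()

      y = X[p:, 0]
      groups = [np.column_stack([X[p - k: T - k, j] for k in range(1, p + 1)])
                for j in range(n_nodes)]         # lagged design, one group per source
      A = np.hstack(groups)

      lam = 10.0
      step = 1.0 / np.linalg.norm(A, 2) ** 2
      beta = np.zeros(A.shape[1])
      for _ in range(500):                       # proximal gradient with group shrinkage
          z = beta - step * A.T @ (A @ beta - y)
          for j in range(n_nodes):
              block = z[j * p:(j + 1) * p]
              norm = np.linalg.norm(block)
              beta[j * p:(j + 1) * p] = max(0.0, 1 - step * lam / (norm + 1e-12)) * block

      selected = [j for j in range(n_nodes) if np.linalg.norm(beta[j * p:(j + 1) * p]) > 1e-6]
      print("sources selected as influencing node 0:", selected)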

  13. Neural process reconstruction from sparse user scribbles.

    PubMed

    Roberts, Mike; Jeong, Won-Ki; Vázquez-Reina, Amelio; Unger, Markus; Bischof, Horst; Lichtman, Jeff; Pfister, Hanspeter

    2011-01-01

    We present a novel semi-automatic method for segmenting neural processes in large, highly anisotropic EM (electron microscopy) image stacks. Our method takes advantage of sparse scribble annotations provided by the user to guide a 3D variational segmentation model, thereby allowing our method to globally optimally enforce 3D geometric constraints on the segmentation. Moreover, we leverage a novel algorithm for propagating segmentation constraints through the image stack via optimal volumetric pathways, thereby allowing our method to compute highly accurate 3D segmentations from very sparse user input. We evaluate our method by reconstructing 16 neural processes in a 1024 x 1024 x 50 nanometer-scale EM image stack of a mouse hippocampus. We demonstrate that, on average, our method is 68% more accurate than previous state-of-the-art semi-automatic methods. PMID:22003670

  14. Perception of biological motion from size-invariant body representations

    PubMed Central

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H. E.

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion. PMID:25852505

  15. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  16. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Much prior work on sparse estimators has focused on the case where the system matrix H has low coherence; however, in our application H is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
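
    The iterative-thresholding framework that the hybrid estimator generalizes can be sketched with plain soft thresholding (the lasso case) on a 1D deconvolution with a Gaussian psf; the hybrid thresholding rule and the SURE-based hyperparameter selection described in the record are not reproduced, and all sizes and parameters are illustrative.

      # Sketch: iterative soft thresholding for sparse deconvolution with a
      # Gaussian psf (a high-coherence convolution matrix) under AWGN.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      x_true = np.zeros(n)
      x_true[[30, 90, 150]] = [1.0, -0.7, 0.5]            # sparse "image"

      t = np.arange(-15, 16)
      psf = np.exp(-t**2 / (2 * 3.0**2))
      H = np.array([np.convolve(np.eye(n)[i], psf, mode="same") for i in range(n)]).T
      y = H @ x_true + 0.02 * rng.standard_normal(n)      # AWGN measurements

      lam = 0.1
      step = 1.0 / np.linalg.norm(H, 2) ** 2
      x = np.zeros(n)
      for _ in range(1000):
          z = x + step * H.T @ (y - H @ x)                # gradient step on the data fit
          x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
      print("support found near:", np.flatnonzero(np.abs(x) > 0.05))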

  17. Notes on implementation of sparsely distributed memory

    NASA Technical Reports Server (NTRS)

    Keeler, J. D.; Denning, P. J.

    1986-01-01

    The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.

  18. A survey of visual preprocessing and shape representation techniques

    NASA Technical Reports Server (NTRS)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  19. Sparse Gamma Rhythms Arising through Clustering in Adapting Neuronal Networks

    PubMed Central

    Kilpatrick, Zachary P.; Ermentrout, Bard

    2011-01-01

    Gamma rhythms (30–100 Hz) are an extensively studied synchronous brain state responsible for a number of sensory, memory, and motor processes. Experimental evidence suggests that fast-spiking interneurons are responsible for carrying the high frequency components of the rhythm, while regular-spiking pyramidal neurons fire sparsely. We propose that a combination of spike frequency adaptation and global inhibition may be responsible for this behavior. Excitatory neurons form several clusters that fire every few cycles of the fast oscillation. This is first shown in a detailed biophysical network model and then analyzed thoroughly in an idealized model. We exploit the fact that the timescale of adaptation is much slower than that of the other variables. Singular perturbation theory is used to derive an approximate periodic solution for a single spiking unit. This is then used to predict the relationship between the number of clusters arising spontaneously in the network as it relates to the adaptation time constant. We compare this to a complementary analysis that employs a weak coupling assumption to predict the first Fourier mode to destabilize from the incoherent state of an associated phase model as the external noise is reduced. Both approaches predict the same scaling of cluster number with respect to the adaptation time constant, which is corroborated in numerical simulations of the full system. Thus, we develop several testable predictions regarding the formation and characteristics of gamma rhythms with sparsely firing excitatory neurons. PMID:22125486

  20. Dictionary learning and sparse recovery for electrodermal activity analysis

    NASA Astrophysics Data System (ADS)

    Kelsey, Malia; Dallal, Ahmed; Eldeeb, Safaa; Akcakaya, Murat; Kleckner, Ian; Gerard, Christophe; Quigley, Karen S.; Goodwin, Matthew S.

    2016-05-01

    Measures of electrodermal activity (EDA) have advanced research in a wide variety of areas including psychophysiology; however, the majority of this research is typically undertaken in laboratory settings. To extend the ecological validity of laboratory assessments, researchers are taking advantage of advances in wireless biosensors to gather EDA data in ambulatory settings, such as in school classrooms. While measuring EDA in naturalistic contexts may enhance ecological validity, it also introduces analytical challenges that current techniques cannot address. One limitation is the limited efficiency and automation of analysis techniques. Many groups either analyze their data by hand, reviewing each individual record, or use computationally inefficient software that limits timely analysis of large data sets. To address this limitation, we developed a method to accurately and automatically identify skin conductance responses (SCRs) using curve fitting methods. Curve fitting has been shown to improve the accuracy of SCR amplitude and location estimations, but has not yet been used to reduce computational complexity. In this paper, sparse recovery and dictionary learning methods are combined to improve computational efficiency of analysis and decrease run time, while maintaining a high degree of accuracy in detecting SCRs. Here, a dictionary is first created using curve fitting methods for a standard SCR shape. Then, orthogonal matching pursuit (OMP) is used to detect SCRs within a dataset using the dictionary to complete sparse recovery. Evaluation of our method, including a comparison with existing software for speed and accuracy, showed an accuracy of 80% and a reduced run time.
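
    The detection pipeline can be sketched by building a dictionary whose atoms are a canonical SCR waveform shifted to every possible onset and running orthogonal matching pursuit against it. The biexponential SCR shape, time constants, and synthetic record below are illustrative assumptions rather than the paper's curve-fitted dictionary.

      # Sketch: dictionary of onset-shifted canonical SCR waveforms plus orthogonal
      # matching pursuit for detection (tonic level assumed already removed).
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      fs = 4                                     # 4 Hz EDA sampling rate
      t = np.arange(0, 120, 1 / fs)              # 120 s record
      n = t.size

      def scr_shape(tt, tau_rise=0.75, tau_decay=2.0):
          """Canonical skin conductance response: fast rise, slow decay."""
          s = np.exp(-tt / tau_decay) - np.exp(-tt / tau_rise)
          return s / s.max()

      atom = scr_shape(np.arange(0, 20, 1 / fs))
      D = np.zeros((n, n))                       # one atom per candidate onset sample
      for onset in range(n):
          seg = atom[: n - onset]
          D[onset: onset + seg.size, onset] = seg

      rng = np.random.default_rng(0)
      signal = 0.8 * D[:, 80] + 0.5 * D[:, 260] + 0.01 * rng.standard_normal(n)

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(D, signal)
      print("detected SCR onsets (s):", np.flatnonzero(omp.coef_) / fs)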

  1. Explorations of Representational Momentum.

    ERIC Educational Resources Information Center

    Kelly, Michael H.; Freyd, Jennifer J.

    1987-01-01

    Figures that undergo an implied rotation are remembered as being slightly beyond their final position, a phenomenon called representational momentum. Eight experiments explored the questions of what gets transformed and what types of transformations induce such representational distortions. (GDC)

  2. Multi dose computed tomography image fusion based on hybrid sparse methodology.

    PubMed

    Venkataraman, Anuyogam; Alirezaie, Javad; Babyn, Paul; Ahmadian, Alireza

    2014-01-01

    With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher quality images with lower exposure to radiation has become a highly challenging task in image processing. In this paper, a novel sparse fusion algorithm is proposed to address the problem of low Signal to Noise Ratio (SNR) in low dose CT images. An initial fused image is obtained by combining low dose and medium dose images in the sparse domain, utilizing a Dual Tree Complex Wavelet Transform (DTCWT) dictionary trained on a high dose image. Then, a strongly focused image is obtained by determining the pixels of the source images that have high similarity with the pixels of the initial fused image. The final denoised image is obtained by fusing the strongly focused image and the decomposed sparse vectors of the source images, thereby preserving the edges and other critical information needed for diagnosis. This paper demonstrates the effectiveness of the proposed algorithm both quantitatively and qualitatively. PMID:25570844
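
    A simplified stand-in for the fusion step is to combine the two dose levels in a sparse transform domain, averaging the coarse band and keeping the larger-magnitude detail coefficients. A separable wavelet (PyWavelets) replaces the paper's trained DTCWT dictionary here, so this only sketches the general idea; the phantom and noise levels are illustrative.

      # Sketch: wavelet-domain fusion of a low-dose and a medium-dose slice
      # (classic max-abs detail rule, average of the coarse band).
      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      clean = np.zeros((128, 128))
      clean[32:96, 32:96] = 1.0                            # toy "anatomy"
      low_dose = clean + 0.30 * rng.standard_normal(clean.shape)
      medium_dose = clean + 0.15 * rng.standard_normal(clean.shape)

      def fuse(a, b, wavelet="db4", level=3):
          ca = pywt.wavedec2(a, wavelet, level=level)
          cb = pywt.wavedec2(b, wavelet, level=level)
          fused = [0.5 * (ca[0] + cb[0])]                  # average the coarse band
          for da, db in zip(ca[1:], cb[1:]):               # detail bands: max-abs rule
              fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                                 for x, y in zip(da, db)))
          return pywt.waverec2(fused, wavelet)

      fused = fuse(low_dose, medium_dose)
      print("RMSE low dose :", float(np.sqrt(np.mean((low_dose - clean) ** 2))))
      print("RMSE fused    :", float(np.sqrt(np.mean((fused - clean) ** 2))))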

  3. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798
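
    The separable-approximation iteration with an adaptive (Barzilai-Borwein) step can be sketched for a generic l1-regularized linear inverse problem; a random matrix stands in for the linearized EIT sensitivity matrix, the preconditioning is omitted, and a simple backtracking safeguard is added, so this illustrates only the structure of the iteration.

      # Sketch: l1-regularized inversion with soft thresholding, a Barzilai-Borwein
      # step size, and a monotone backtracking safeguard.
      import numpy as np

      rng = np.random.default_rng(0)
      m, n = 104, 576                            # measurements vs. pixels
      J = rng.standard_normal((m, n))            # stand-in for the EIT Jacobian
      x_true = np.zeros(n)
      x_true[rng.choice(n, 8, replace=False)] = 1.0        # sparse conductivity change
      b = J @ x_true + 0.01 * rng.standard_normal(m)

      lam = 0.5
      def objective(v):
          r = J @ v - b
          return 0.5 * r @ r + lam * np.abs(v).sum()

      alpha = 1.0 / np.linalg.norm(J, 2) ** 2    # safe initial step
      x = np.zeros(n)
      grad = J.T @ (J @ x - b)
      for _ in range(300):
          while True:                            # backtrack until the objective decreases
              z = x - alpha * grad
              x_new = np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)
              if objective(x_new) <= objective(x) or alpha < 1e-12:
                  break
              alpha *= 0.5
          grad_new = J.T @ (J @ x_new - b)
          s, g = x_new - x, grad_new - grad
          x, grad = x_new, grad_new
          if s @ g > 0:
              alpha = (s @ s) / (s @ g)          # Barzilai-Borwein step for the next iteration
      print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
      print("true support:     ", np.flatnonzero(x_true))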

  4. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts.

  5. Mean-field sparse optimal control

    PubMed Central

    Fornasier, Massimo; Piccoli, Benedetto; Rossi, Francesco

    2014-01-01

    We introduce the rigorous limit process connecting finite dimensional sparse optimal control problems with ODE constraints, modelling parsimonious interventions on the dynamics of a moving population divided into leaders and followers, to an infinite dimensional optimal control problem with a constraint given by a system of ODE for the leaders coupled with a PDE of Vlasov-type, governing the dynamics of the probability distribution of the followers. In the classical mean-field theory, one studies the behaviour of a large number of small individuals freely interacting with each other, by simplifying the effect of all the other individuals on any given individual by a single averaged effect. In this paper, we address instead the situation where the leaders are actually influenced also by an external policy maker, and we propagate its effect for the number N of followers going to infinity. The technical derivation of the sparse mean-field optimal control is realized by the simultaneous development of the mean-field limit of the equations governing the followers dynamics together with the Γ-limit of the finite dimensional sparse optimal control problems. PMID:25288818

  6. Representations of fuzzy torus

    NASA Astrophysics Data System (ADS)

    Aizawa, N.; Chakrabarti, R.

    2008-08-01

    A classification of Hermitian representations for the recently introduced fuzzy torus algebra is presented. This is carried out by regarding the fuzzy torus algebra as a q-deformation of parafermion. In addition to the known representations, new representations of both finite and infinite dimension are found. Using the infinite dimensional representation, coherent state for the fuzzy torus is constructed. Dirac operator on commutative torus is also discussed.

  7. Representation in Memory.

    ERIC Educational Resources Information Center

    Rumelhart, David E.; Norman, Donald A.

    This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…

  8. MR image reconstruction of sparsely sampled 3D k-space data by projection-onto-convex sets.

    PubMed

    Peng, Haidong; Sabati, Mohammad; Lauzon, Louis; Frayne, Richard

    2006-07-01

    In many rapid three-dimensional (3D) magnetic resonance (MR) imaging applications, such as when following a contrast bolus in the vasculature using a moving table technique, the desired k-space data cannot be fully acquired due to scan time limitations. One solution to this problem is to sparsely sample the data space. Typically, the central zone of k-space is fully sampled, but the peripheral zone is partially sampled. We have experimentally evaluated the application of the projection-onto-convex sets (POCS) and zero-filling (ZF) algorithms for the reconstruction of sparsely sampled 3D k-space data. Both a subjective assessment (by direct image visualization) and an objective analysis [using standard image quality parameters such as global and local performance error and signal-to-noise ratio (SNR)] were employed. Compared to ZF, the POCS algorithm was found to be a powerful and robust method for reconstructing images from sparsely sampled 3D k-space data, a practical strategy for greatly reducing scan time. The POCS algorithm reconstructed a faithful representation of the true image and improved image quality with regard to global and local performance error, with respect to the ZF images. SNR, however, was superior to ZF only when more than 20% of the data were sparsely sampled. POCS-based methods show potential for reconstructing fast 3D MR images obtained by sparse sampling.
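
    A 2D stand-in for the POCS reconstruction can be sketched by alternating two projections: onto the set of images whose k-space matches the measured samples, and onto a convex constraint set (here, real and non-negative images). The phantom, sampling fractions, and constraint sets are assumptions for illustration, not the study's 3D protocol.

      # Sketch: POCS for sparsely sampled k-space in 2D, starting from the
      # zero-filled reconstruction.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 128
      xg, yg = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2)
      image = ((xg**2 + yg**2) < (N // 3) ** 2).astype(float)       # toy phantom

      k_full = np.fft.fftshift(np.fft.fft2(image))
      mask = rng.random((N, N)) < 0.25                              # sparse peripheral sampling
      mask[N // 2 - 12: N // 2 + 12, N // 2 - 12: N // 2 + 12] = True   # full central zone
      k_meas = k_full * mask

      recon = np.fft.ifft2(np.fft.ifftshift(k_meas))                # zero-filled start
      for _ in range(50):
          recon = np.maximum(recon.real, 0.0)                       # projection: real, non-negative
          k = np.fft.fftshift(np.fft.fft2(recon))
          k[mask] = k_meas[mask]                                    # projection: data consistency
          recon = np.fft.ifft2(np.fft.ifftshift(k))

      err = np.linalg.norm(recon.real - image) / np.linalg.norm(image)
      print("relative reconstruction error:", round(float(err), 4))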

  9. Stacked Predictive Sparse Decomposition for Classification of Histology Sections

    PubMed Central

    Zhou, Yin; Borowsky, Alexander; Barner, Kenneth; Spellman, Paul

    2016-01-01

    Image-based classification of histology sections, in terms of distinct components (e.g., tumor, stroma, normal), provides a series of indices for histology composition (e.g., the percentage of each distinct component in histology sections), and enables the study of nuclear properties within each component. Furthermore, the study of these indices, constructed from each whole slide image in a large cohort, has the potential to provide predictive models of clinical outcome. For example, correlations can be established between the constructed indices and the patients' survival information at cohort level, which is a fundamental step towards personalized medicine. However, performance of the existing techniques is hindered as a result of large technical variations (e.g., variations of color/textures in tissue images due to non-standard experimental protocols) and biological heterogeneities (e.g., cell type, cell state) that are always present in a large cohort. We propose a system that automatically learns a series of dictionary elements for representing the underlying spatial distribution using stacked predictive sparse decomposition. The learned representation is then fed into the spatial pyramid matching framework with a linear support vector machine classifier. The system has been evaluated for classification of distinct histological components for two cohorts of tumor types. Throughput has been increased by use of a graphics processing unit (GPU), and evaluation indicates superior performance compared with previous research.

  10. Sparse + low-energy decomposition for viscous conservation laws

    NASA Astrophysics Data System (ADS)

    Hou, Thomas Y.; Li, Qin; Schaeffer, Hayden

    2015-05-01

    For viscous conservation laws, solutions contain smooth but high-contrast features, which require the use of fine grids to properly resolve. On coarse grids, these high-contrast jumps resemble shocks rather than their true viscous profiles, which could lead to issues in the numerical approximation of their underlying dynamics. In many cases, the equations of motion admit traveling wave solutions which can be used to represent the viscous profiles analytically. The traveling wave solutions can be thought of as a lower dimensional representation of the motion, since they contain information from the evolution equation, but are constant along certain time-space curves. Using a parameterized basis involving the traveling waves, along with the sparse + low-energy decompositions found in imaging sciences, we propose an approximation to viscous conservation laws which separates the coarse smooth component from the sharp fine one. Our method provides an appropriate approximation to the solution on a coarse grid, thereby accurately under-resolving the viscous profile. This is similar to the philosophy of shock capturing methods, in the sense that we want to capture the viscous front without needing to resolve the profile. Theoretical results on the consistency of our method are shown in general. We provide several computational examples for convex and non-convex fluxes.

  11. Cognitive Dissonance as an Instructional Tool for Understanding Chemical Representations

    ERIC Educational Resources Information Center

    Corradi, David; Clarebout, Geraldine; Elen, Jan

    2015-01-01

    Previous research on multiple external representations (MER) indicates that sequencing representations (compared with presenting them as a whole) can, in some cases, increase conceptual understanding if there is interference between internal and external representations. We tested this mechanism by sequencing different combinations of scientific…

  12. Adaptive block-wise alphabet reduction scheme for lossless compression of images with sparse and locally sparse histograms

    NASA Astrophysics Data System (ADS)

    Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz

    2015-11-01

    We propose a new adaptive block-wise lossless image compression algorithm, which is based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate that is close to the entropy; however, a compression performance loss occurs when encoding images or blocks whose number of active symbols is small compared with the number of symbols in the nominal alphabet, because the zero-frequency problem is amplified. Generally, most methods add one to the frequency count of each symbol from the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set that includes all of its occurring symbols, called the active symbols. This is an alternative to using the nominal alphabet with conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including the conventional arithmetic encoders, JPEG2000, and JPEG-LS.
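
    The benefit of the per-block alphabet reduction can be sketched by comparing ideal adaptive-AC code lengths, since the ideal cost of a block is the sum of -log2 of the adaptive model probabilities: initializing counts over the full 256-symbol nominal alphabet charges for symbols that never occur, while counting over the block's active symbols (plus a small map signalling which symbols are active) does not. The block size and the 256-bit signalling scheme are illustrative choices, not the paper's encoder.

      # Sketch: ideal adaptive arithmetic-coding cost of one block under the nominal
      # 256-symbol alphabet versus the block's active symbols only.
      import numpy as np

      def adaptive_code_length(block, alphabet):
          """Ideal adaptive-AC code length (bits) with add-one counts over `alphabet`."""
          index = {s: i for i, s in enumerate(alphabet)}
          counts = np.ones(len(alphabet))
          bits = 0.0
          for s in block:
              bits -= np.log2(counts[index[s]] / counts.sum())
              counts[index[s]] += 1
          return bits

      rng = np.random.default_rng(0)
      # a 16x16 block with a sparse histogram: only 6 of 256 values occur
      block = rng.choice([0, 3, 7, 15, 31, 255], size=16 * 16,
                         p=[0.4, 0.2, 0.15, 0.15, 0.05, 0.05])

      nominal = adaptive_code_length(block, range(256))
      active = sorted(set(block.tolist()))
      reduced = adaptive_code_length(block, active) + 256           # + active-symbol bitmap

      print(f"nominal alphabet: {nominal:7.1f} bits")
      print(f"reduced alphabet: {reduced:7.1f} bits (incl. 256-bit active-symbol map)")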

  13. Sparse Parallel MRI Based on Accelerated Operator Splitting Schemes

    PubMed Central

    Xie, Weisi; Su, Zhenghang

    2016-01-01

    Recently, the sparsity that is implicit in MR images has been successfully exploited for fast MR imaging with incomplete acquisitions. In this paper, two novel algorithms are proposed to solve the sparse parallel MR imaging problem, which consists of l1 regularization and fidelity terms. The two algorithms combine forward-backward operator splitting and Barzilai-Borwein schemes. Theoretically, the presented algorithms overcome the nondifferentiability of the l1 regularization term. Meanwhile, they are able to treat a general matrix operator that may not be diagonalized by the fast Fourier transform and to ensure that a well-conditioned system of equations is solved simply. In addition, we build connections between the proposed algorithms and state-of-the-art existing methods and prove their convergence with a constant stepsize in the Appendix. Numerical results and comparisons with the advanced methods demonstrate the efficiency of the proposed algorithms. PMID:27746824

  14. Sparse labeling of proteins: Structural characterization from long range constraints

    NASA Astrophysics Data System (ADS)

    Prestegard, James H.; Agard, David A.; Moremen, Kelley W.; Lavery, Laura A.; Morris, Laura C.; Pederson, Kari

    2014-04-01

    Structural characterization of biologically important proteins faces many challenges associated with degradation of resolution as molecular size increases and loss of resolution improving tools such as perdeuteration when non-bacterial hosts must be used for expression. In these cases, sparse isotopic labeling (single or small subsets of amino acids) combined with long range paramagnetic constraints and improved computational modeling offer an alternative. This perspective provides a brief overview of this approach and two discussions of potential applications; one involving a very large system (an Hsp90 homolog) in which perdeuteration is possible and methyl-TROSY sequences can potentially be used to improve resolution, and one involving ligand placement in a glycosylated protein where resolution is achieved by single amino acid labeling (the sialyltransferase, ST6Gal1). This is not intended as a comprehensive review, but as a discussion of future prospects that promise impact on important questions in the structural biology area.

  15. Computational representation of biological systems

    SciTech Connect

    Frazier, Zach; McDermott, Jason E.; Guerquin, Michal; Samudrala, Ram

    2009-04-20

    Integration of large and diverse biological data sets is a daunting problem facing systems biology researchers. Exploring the complex issues of data validation, integration, and representation, we present a systematic approach for the management and analysis of large biological data sets based on data warehouses. Our system has been implemented in the Bioverse, a framework combining diverse protein information from a variety of knowledge areas such as molecular interactions, pathway localization, protein structure, and protein function.

  16. Computer aided surface representation

    SciTech Connect

    Barnhill, R.E.

    1991-04-02

    Modern computing resources permit the generation of large amounts of numerical data. These large data sets, if left in numerical form, can be overwhelming. Such large data sets are usually discrete points from some underlying physical phenomenon. Because we need to evaluate the phenomenon at places where we don't have data, a continuous representation (a "surface") is required. A simple example is a weather map obtained from a discrete set of weather stations. (For more examples, including multi-dimensional ones, see the article by Dr. Rosemary Chang in the enclosed IRIS Universe.) In order to create a scientific structure encompassing the data, we construct an interpolating mathematical surface which can be evaluated at arbitrary locations. We can also display and analyze the results via interactive computer graphics. In our research we construct a very wide variety of surfaces for applied geometry problems that have sound theoretical foundations. However, our surfaces have the distinguishing feature that they are constructed to solve short- or long-term practical problems. This DOE-funded project has developed the premier research team in the subject of constructing surfaces (3D and higher dimensional) that provide smooth representations of real scientific and engineering information, including state-of-the-art computer graphics visualizations. However, our main contribution is in the development of fundamental constructive mathematical methods and visualization techniques which can be incorporated into a wide variety of applications. This project combines constructive mathematics, algorithms, and computer graphics, all applied to real problems. The project is a unique resource, considered by our peers to be a de facto national center for this type of research.

  17. Galaxy redshift surveys with sparse sampling

    SciTech Connect

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-12-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.

  18. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided, as are several numerical examples that illustrate the effectiveness of the approach.

  19. Smoothed l0 Norm Regularization for Sparse-View X-Ray CT Reconstruction

    PubMed Central

    Li, Ming; Peng, Chengtao; Guan, Yihui; Xu, Pin

    2016-01-01

    Low-dose computed tomography (CT) reconstruction is a challenging problem in medical imaging. To complement standard filtered back-projection (FBP) reconstruction, sparse regularization reconstruction has gained increasing research attention, as it promises to reduce radiation dose, suppress artifacts, and improve noise properties. In this work, we present an iterative reconstruction approach using an improved smoothed l0 (SL0) norm regularization, which approximates the l0 norm by a family of continuous functions to fully exploit the sparseness of the image gradient. Due to the excellent sparse representation of the reconstruction signal, the desired tissue details are preserved in the resulting images. To evaluate the performance of the proposed SL0 regularization method, we reconstruct the simulated dataset acquired from the Shepp-Logan phantom and a clinical head slice image. Additional experimental verification is also performed with two real datasets from scanned animal experiments. Compared to the referenced FBP reconstruction and the total variation (TV) regularization reconstruction, the results clearly reveal that the presented method has characteristic strengths. In particular, it improves reconstruction quality by reducing noise while preserving anatomical features. PMID:27725935
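
    The smoothed l0 idea can be sketched on a generic underdetermined system: replace the l0 norm with a Gaussian-smoothed surrogate, take a few gradient steps on it, project back onto the measurement constraint, and gradually shrink the smoothing parameter. This follows the standard SL0 recipe and illustrates the regularizer the reconstruction builds on, not the paper's CT pipeline; sizes and the sigma schedule are illustrative.

      # Sketch of the standard SL0 iteration on a generic underdetermined system.
      import numpy as np

      rng = np.random.default_rng(0)
      m, n, k = 60, 200, 8
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      b = A @ x_true

      A_pinv = np.linalg.pinv(A)
      x = A_pinv @ b                             # minimum-l2 feasible start
      sigma = 2.0 * np.max(np.abs(x))
      for _ in range(12):                        # outer loop: shrink sigma
          for _ in range(5):                     # inner loop: ascend the smooth surrogate
              delta = x * np.exp(-x**2 / (2 * sigma**2))
              x = x - 2.0 * delta                # step toward a smaller "smoothed l0"
              x = x - A_pinv @ (A @ x - b)       # project back onto A x = b
          sigma *= 0.6

      print("relative recovery error:", float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))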

  20. Edge-preserving traveltime tomography with a sparse multiscale imaging constraint

    NASA Astrophysics Data System (ADS)

    Sun, Mengyao; Zhang, Jie

    2016-08-01

    Solving the near-surface statics problem is often the first step in land or shallow marine seismic data processing. Near-surface velocity structures can be very complex, with large velocity contrasts within a small depth range. First-arrival traveltime tomography is a common approach for near-surface imaging. However, first-arrival traveltime tomography generally produces smooth model solutions due to the Tikhonov regularization, which constrains the model toward minimum structure. Failing to resolve high velocity contrasts may result in inaccurate static values for reflection imaging. In this study, we develop a sparse multiscale imaging constraint for traveltime tomography to address this issue. In this method, we assume that the velocity model is sparse under a known wavelet basis. Based on this sparse model representation, we first obtain the low-wavenumber velocity structures, followed by the finer features, by alternately solving two sets of inversion problems. Synthetic tests and two real-data applications show that this method performs better in reconstructing near-surface models with high velocity contrasts.