Science.gov

Sample records for sparse representation combined

  1. Grassmannian sparse representations

    NASA Astrophysics Data System (ADS)

    Azary, Sherif; Savakis, Andreas

    2015-05-01

    We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a smooth space whose points represent subspaces and where the relationship between points is defined through orthogonal matrix mappings. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to achieve high-accuracy recognition while overcoming the performance drawbacks and the dependence on high-dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.

  2. Sparse representation with kernels.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien

    2013-02-01

    Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves good performance for image classification. Moreover, KSR-based feature coding can be shown to be a generalization of efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Besides feature coding, compared with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large scale learning tasks, and it demonstrates robustness in kernel matrix approximation, especially when only a small fraction of the data is used. Extensive experimental results demonstrate the promise of KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744
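
    One common way to write the kernel sparse coding objective described above, for a sample y and dictionary D = [d_1, ..., d_m] mapped by the implicit function φ with kernel κ (the regularization weight λ is a notational assumption here):

      \min_{\alpha}\ \big\|\phi(y) - \Phi(D)\,\alpha\big\|_2^2 + \lambda\,\|\alpha\|_1,
      \qquad
      \big\|\phi(y) - \Phi(D)\,\alpha\big\|_2^2
        = \kappa(y,y) - 2\,\alpha^{\top}\mathbf{k}_D(y) + \alpha^{\top} K_{DD}\,\alpha,
      \quad [\mathbf{k}_D(y)]_i = \kappa(d_i, y),\ \ [K_{DD}]_{ij} = \kappa(d_i, d_j).

    Minimizing over α therefore needs only kernel evaluations, never the explicit mapping φ.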

  3. [Identification of transmission fluid based on NIR spectroscopy by combining sparse representation method with manifold learning].

    PubMed

    Jiang, Lu-Lu; Luo, Mei-Fu; Zhang, Yu; Yu, Xin-Jie; Kong, Wen-Wen; Liu, Fei

    2014-01-01

    An identification method based on sparse representation (SR) combined with autoencoder network (AN) manifold learning was proposed for discriminating the varieties of transmission fluid by using near infrared (NIR) spectroscopy technology. NIR transmittance spectra from 600 to 1800 nm were collected from 300 transmission fluid samples of five varieties (each variety consisting of 60 samples). For each variety, 30 samples were randomly selected as the training set (150 samples in total), and the remaining 30 as the testing set (150 samples in total). Autoencoder network manifold learning was applied to obtain the characteristic information in the 600-1800 nm spectra, and the number of characteristics was reduced to 10. Principal component analysis (PCA) was applied to extract several relevant variables to represent the useful information of the spectral variables. All of the training samples made up the data dictionary of the sparse representation (SR). The transmission fluid variety identification problem was then reduced to the problem of how to represent the testing samples in terms of the data dictionary (the training sample data). The identification result could thus be achieved by solving the L1-norm-based optimization problem. We compared the effectiveness of the proposed method with that of linear discriminant analysis (LDA), least squares support vector machine (LS-SVM) and sparse representation (SR) using the relevant variables selected by principal component analysis (PCA) and AN. Experimental results demonstrated that the overall identification accuracy of the proposed AN-SR method for the five transmission fluid varieties was 97.33%, which was significantly higher than that of LDA or LS-SVM. Therefore, the proposed method can provide a new and effective approach for identification of transmission fluid variety. PMID:24783534

  4. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expression is a basic requirement in the development of next generation human computer interaction systems. Research shows that the intrinsic facial features always hide in low dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem constrained by a linear combination equation. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
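
    A minimal sketch of the sparse-representation-classification idea described above, assuming a dictionary whose columns are vectorized training samples grouped by class. An l1-regularized least-squares solver (scikit-learn's Lasso) stands in for the equality-constrained l1 minimization, and the synthetic data and parameter values are illustrative choices rather than the authors' implementation.

      # Sparse representation classification (SRC) sketch on synthetic data.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n_pixels, n_per_class, n_classes = 256, 20, 6
      D = rng.standard_normal((n_pixels, n_per_class * n_classes))   # dictionary of training samples
      D /= np.linalg.norm(D, axis=0)                                 # unit-norm atoms
      labels = np.repeat(np.arange(n_classes), n_per_class)
      y = D[:, 5] + 0.01 * rng.standard_normal(n_pixels)             # a noisy test sample (class 0)

      # l1-regularised sparse coding of y over the dictionary (Lasso objective)
      x = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000).fit(D, y).coef_

      # Classify by the smallest class-wise reconstruction residual
      residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                   for c in range(n_classes)]
      print("predicted class:", int(np.argmin(residuals)))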

  5. Learning Sparse Representations of Depth

    NASA Astrophysics Data System (ADS)

    Tosic, Ivana; Olshausen, Bruno A.; Culpepper, Benjamin J.

    2011-09-01

    This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of the stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, typically obtained by laser range scanners or structured light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed as the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving solutions of state of the art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state of the art denoising of depth maps obtained from laser range scanners and a time of flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm by Boykov et al. and the more recent algorithm of Woodford et al.

  6. Sparse representation for vehicle recognition

    NASA Astrophysics Data System (ADS)

    Monnig, Nathan D.; Sakla, Wesam

    2014-06-01

    The Sparse Representation for Classification (SRC) algorithm has been demonstrated to be a state-of-the-art algorithm for facial recognition applications. Wright et al. demonstrate that under certain conditions, the SRC algorithm classification performance is agnostic to choice of linear feature space and highly resilient to image corruption. In this work, we examined the SRC algorithm performance on the vehicle recognition application, using images from the semi-synthetic vehicle database generated by the Air Force Research Laboratory. To represent modern operating conditions, vehicle images were corrupted with noise, blurring, and occlusion, with representation of varying pose and lighting conditions. Experiments suggest that linear feature space selection is important, particularly in the cases involving corrupted images. Overall, the SRC algorithm consistently outperforms a standard k nearest neighbor classifier on the vehicle recognition task.

  7. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a two-dimensional sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the computational cost of radar imaging.
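
    The sketch below illustrates the kind of recovery alluded to above: reconstructing a sparse scene from randomly undersampled 2-D Fourier samples. Plain iterative soft thresholding (ISTA) is used as a generic stand-in for the convex optimization; the scene, sampling rate, step size, and regularization weight are arbitrary assumptions, not the authors' ISAR processing chain.

      # Compressed-sensing-style recovery of a sparse scene from partial Fourier data.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 64
      x_true = np.zeros((n, n))
      x_true[rng.integers(0, n, 25), rng.integers(0, n, 25)] = 1.0   # sparse point scatterers

      mask = rng.random((n, n)) < 0.3                # keep roughly 30% of the Fourier samples
      y = mask * np.fft.fft2(x_true) / n             # measured (undersampled, normalized) data

      def A(img):
          # Forward operator: undersampled, normalized 2-D Fourier transform
          return mask * np.fft.fft2(img) / n

      def At(dat):
          # Adjoint of A (real projection, since the scene is real-valued)
          return np.real(np.fft.ifft2(mask * dat)) * n

      def soft(v, t):
          # Soft-thresholding operator used by ISTA
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      x, step, lam = np.zeros((n, n)), 1.0, 0.01
      for _ in range(200):                           # ISTA iterations
          x = soft(x + step * At(y - A(x)), step * lam)

      print("recovered nonzeros above 0.5:", int(np.count_nonzero(x > 0.5)))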

  8. SAR Image despeckling via sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang

    2014-11-01

    SAR image despeckling is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, a new approach is proposed for removing multiplicative noise from SAR images based on nonlocal sparse representation with dictionary learning and collaborative filtering. First, the image is divided into many patches, and clusters are formed by grouping similar log-domain image patches using Fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. The experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.
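
    A simplified sketch of the patch-dictionary step, assuming scikit-learn is available: a dictionary is learned on log-domain patches and the image is rebuilt from their sparse approximations. MiniBatchDictionaryLearning with OMP coding stands in for the per-cluster K-SVD of the paper, the FCM clustering stage is omitted, and the toy image and noise model are invented for illustration.

      # Patch-based dictionary learning for a toy multiplicative-noise image.
      import numpy as np
      from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)
      clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))                   # 64x64 piecewise-constant image
      speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # unit-mean multiplicative noise

      log_img = np.log(speckled + 1e-6)                 # multiplicative -> additive noise
      patches = extract_patches_2d(log_img, (8, 8)).reshape(-1, 64)
      mean = patches.mean(axis=1, keepdims=True)

      dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=4, random_state=0)
      codes = dico.fit(patches - mean).transform(patches - mean)
      recon = codes @ dico.components_ + mean           # sparse approximation of each patch

      denoised_log = reconstruct_from_patches_2d(recon.reshape(-1, 8, 8), log_img.shape)
      denoised = np.exp(denoised_log)                   # back to the intensity domain
      print("output shape:", denoised.shape)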

  9. Bayesian learning of sparse multiscale image representations.

    PubMed

    Hughes, James Michael; Rockmore, Daniel N; Wang, Yang

    2013-12-01

    Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising and compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including both orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem and the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared--and learned--across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems including non-Gaussian and nonstationary denoising of real-world color images. PMID:24002002

  10. Sparse representation of complex MRI images.

    PubMed

    Nandakumar, Hari Prasad; Ji, Jim

    2008-01-01

    Sparse representation of images acquired from Magnetic Resonance Imaging (MRI) has several potential applications. MRI is unique in that the raw images are complex. Complex wavelet transforms (CWT) can be used to produce more flexible signal representations than the Discrete Wavelet Transform (DWT). In this work, five different schemes using CWT or DWT are tested for sparse representation of MRI images represented as complex values, separate real/imaginary parts, or separate magnitude/phase. The experimental results on real in-vivo MRI images show that an appropriate CWT, e.g., the dual-tree CWT (DTCWT), can achieve better sparsity than DWT at similar mean square error. PMID:19162677
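
    A toy version of such a sparsity comparison, assuming the PyWavelets package (pywt) is installed: only the ordinary DWT is used (the dual-tree CWT of the paper is not included), the complex "image" is synthetic, and the 5% coefficient budget is an arbitrary choice.

      # Compare real/imaginary versus magnitude/phase coding of a complex image under the DWT.
      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      phase = np.exp(1j * np.linspace(0, np.pi, 64))[None, :]
      img = np.kron(rng.random((8, 8)), np.ones((8, 8))) * phase    # synthetic complex 64x64 slice

      def compress_error(channel, keep=0.05):
          # Keep the largest `keep` fraction of DWT coefficients and return the reconstruction MSE.
          coeffs = pywt.wavedec2(channel, 'db4', level=3)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(arr), 1 - keep)
          arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
          rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'db4')
          return float(np.mean((rec[:channel.shape[0], :channel.shape[1]] - channel) ** 2))

      err_ri = compress_error(img.real) + compress_error(img.imag)        # scheme A: real/imaginary
      err_mp = compress_error(np.abs(img)) + compress_error(np.angle(img))  # scheme B: magnitude/phase
      print("real/imag MSE:", err_ri, " magnitude/phase MSE:", err_mp)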

  11. Sparse representation in speech signal processing

    NASA Astrophysics Data System (ADS)

    Lee, Te-Won; Jang, Gil-Jin; Kwon, Oh-Wook

    2003-11-01

    We review the sparse representation principle for processing speech signals. A transformation for encoding the speech signals is learned such that the resulting coefficients are as independent as possible. We use independent component analysis with an exponential prior to learn a statistical representation for speech signals. This representation leads to extremely sparse priors that can be used for encoding speech signals for a variety of purposes. We review applications of this method for speech feature extraction, automatic speech recognition and speaker identification. Furthermore, this method is also suited for tackling the difficult problem of separating two sounds given only a single microphone.

  12. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from high computational cost and poor robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between target observations and background ones. Thus, the trained ELM classification function is able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be evaluated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
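
    A minimal sketch of the extreme learning machine classifier mentioned above: a fixed random hidden layer followed by output weights solved in closed form by regularized least squares. The synthetic candidate features, labels, hidden-layer size, and ridge parameter are assumptions for illustration; this is not the tracking pipeline itself.

      # Extreme learning machine (ELM) for scoring candidate samples.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 30))                                       # candidate features
      y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(float) * 2 - 1  # +/-1 target vs background

      n_hidden = 100
      W = rng.standard_normal((30, n_hidden))          # random input weights (never trained)
      b = rng.standard_normal(n_hidden)
      H = np.tanh(X @ W + b)                           # hidden-layer activations

      ridge = 1e-2
      beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)  # output weights, closed form

      scores = np.tanh(X @ W + b) @ beta               # confidence values for each candidate
      print("training accuracy:", float(np.mean(np.sign(scores) == y)))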

  13. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become a topic of great interest, in which the construction of a reasonable and effective image classification technique is of key importance. Sparse representation describes the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any ordinary classifier, it is not perfect in every respect. Ensemble learning is therefore introduced to address this issue: multiple different learners are trained and their outputs are combined to obtain more accurate and reliable results. Accordingly, this paper presents a polarimetric SAR image classification method based on ensemble learning of sparse representations to achieve optimal classification.

  14. SparsePZ: Sparse Representation of Photometric Redshift PDFs

    NASA Astrophysics Data System (ADS)

    Carrasco Kind, Matias; Brunner, R. J.

    2015-11-01

    SparsePZ uses sparse basis representation to fully represent individual photometric redshift probability density functions (PDFs). This approach requires approximately half the parameters for the same multi-Gaussian fitting accuracy, and has the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function. Only 10-20 points per galaxy are needed to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. This basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution or accuracy.
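
    A toy illustration of the storage idea: a photo-z PDF sampled on a δz = 0.01 grid of 200 points is approximated by a handful of sparse basis amplitudes. The Gaussian basis and the matching-pursuit-style selection below are invented stand-ins for the SparsePZ basis and encoding, and the 4-byte integer quantization of coefficients is not shown.

      # Approximate a photo-z PDF with ~15 sparse Gaussian basis coefficients.
      import numpy as np

      z = np.arange(0.0, 2.0, 0.01)                              # delta z = 0.01 grid (200 points)
      pdf = 0.7*np.exp(-0.5*((z-0.45)/0.05)**2) + 0.3*np.exp(-0.5*((z-1.1)/0.08)**2)
      pdf /= pdf.sum() * 0.01                                    # normalize to unit area

      centers, widths = np.meshgrid(np.arange(0.0, 2.0, 0.02), [0.03, 0.05, 0.08, 0.12])
      basis = np.exp(-0.5*((z[None, :] - centers.reshape(-1, 1)) / widths.reshape(-1, 1))**2)
      basis /= np.linalg.norm(basis, axis=1, keepdims=True)      # unit-norm basis functions

      coeffs, residual = {}, pdf.copy()
      for _ in range(15):                                        # greedy selection of ~15 basis functions
          k = int(np.argmax(np.abs(basis @ residual)))
          a = basis[k] @ residual
          coeffs[k] = coeffs.get(k, 0.0) + a
          residual -= a * basis[k]

      recon = sum(a * basis[k] for k, a in coeffs.items())
      print("stored coefficients:", len(coeffs),
            " relative reconstruction error:", float(np.linalg.norm(pdf - recon) / np.linalg.norm(pdf)))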

  15. Automatic landslide and mudflow detection method via multichannel sparse representation

    NASA Astrophysics Data System (ADS)

    Chao, Chen; Zhou, Jianjun; Hao, Zhuo; Sun, Bo; He, Jun; Ge, Fengxiang

    2015-10-01

    Landslide and mudflow detection is an important application of aerial images and high resolution remote sensing images, which is crucial for national security and disaster relief. Since high resolution images are often large in size, it is necessary to develop an efficient algorithm for landslide and mudflow detection. Based on the theory of sparse representation, we propose a novel automatic landslide and mudflow detection method in this paper, which combines multi-channel sparse representation with an eight-neighbor judgment method. The whole detection process is fully automatic. We conducted experiments on a high resolution image of ZhouQu district of Gansu province, China, from August 2010 and obtained promising results, which demonstrate the effectiveness of using sparse representation for landslide and mudflow detection.

  16. Feature Selection and Pedestrian Detection Based on Sparse Representation.

    PubMed

    Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei

    2015-01-01

    Research on pedestrian detection has largely been devoted to the extraction of effective pedestrian features, which has become one of the obstacles in pedestrian detection applications because of the variety of pedestrian features and their high dimensionality. Based on the theoretical analysis of six frequently-used features, SIFT, SURF, Haar, HOG, LBP and LSS, and their comparison with experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same description abilities and the most stable features. When any two of the six features are fused, the fusion feature is sparsely represented to obtain its important components. Sparse subsets of the fusion features can be rapidly generated by avoiding calculation of the corresponding index of dimension numbers of these feature descriptors; thus, the calculation speed of the feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same description ability and consume less time compared with their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, and thus these two features can best describe the characteristics of pedestrians; the sparse feature subsets of the HOG-LSS combination show better distinguishing ability and parsimony. PMID:26295480

  17. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft-threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods, yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
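
    A compact sketch of the encoding step just described, assuming scikit-learn: low-level feature vectors are projected onto a learned basis and passed through a soft-threshold activation, and the resulting sparse features feed a linear SVM. PCA stands in for the unsupervised basis learning, and the data, labels, and threshold are synthetic assumptions.

      # Soft-threshold sparse feature encoding followed by a linear SVM.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 128))                    # low-level feature vectors
      y = (X[:, :3].sum(axis=1) > 0).astype(int)             # toy scene labels

      basis = PCA(n_components=64).fit(X).components_        # learned basis vectors

      def encode(F, alpha=0.5):
          # Soft-threshold activation of the projections onto the basis
          proj = F @ basis.T
          return np.sign(proj) * np.maximum(np.abs(proj) - alpha, 0.0)

      clf = LinearSVC(C=1.0).fit(encode(X), y)
      print("training accuracy:", clf.score(encode(X), y))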

  18. Latent subspace sparse representation-based unsupervised domain adaptation

    NASA Astrophysics Data System (ADS)

    Shuai, Liu; Sun, Hao; Zhao, Fumin; Zhou, Shilin

    2015-12-01

    In this paper, we introduce and study a novel unsupervised domain adaptation (DA) algorithm, called latent subspace sparse representation based domain adaptation, based on the fact that source and target data lie in different but related low-dimensional subspaces. The key idea is that each point in a union of subspaces can be constructed from a combination of other points in the dataset. In this method, we propose to project the source and target data onto a common latent generalized subspace, which is a union of subspaces of the source and target domains, and to learn the sparse representation in this latent generalized subspace. By employing the minimum reconstruction error and maximum mean discrepancy (MMD) constraints, the structures of the source and target domains are preserved, and the discrepancy between the domains is reduced and thus reflected in the sparse representation. We then utilize the sparse representation to build a weighted graph, which reflects the relationships of points from the different domains (source-source, source-target, and target-target), to predict the labels of the target domain. We also propose an efficient optimization method for the algorithm. Our method does not need to be combined with any classifier and therefore does not require a separate training procedure for testing. Various experiments show that the proposed method performs better than competitive state-of-the-art subspace-based domain adaptation methods.
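
    For reference, the discrepancy term mentioned above is commonly the empirical maximum mean discrepancy, written in its usual kernelized form (the kernel k and sample counts n, m are notational assumptions here):

      \mathrm{MMD}^2(S,T)
        = \Big\| \tfrac{1}{n}\sum_{i=1}^{n}\phi(s_i) - \tfrac{1}{m}\sum_{j=1}^{m}\phi(t_j) \Big\|_{\mathcal{H}}^2
        = \tfrac{1}{n^2}\sum_{i,i'} k(s_i,s_{i'})
          - \tfrac{2}{nm}\sum_{i,j} k(s_i,t_j)
          + \tfrac{1}{m^2}\sum_{j,j'} k(t_j,t_{j'}).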

  1. Efficient visual tracking via low-complexity sparse representation

    NASA Astrophysics Data System (ADS)

    Lu, Weizhi; Zhang, Jinglin; Kpalma, Kidiyo; Ronsin, Joseph

    2015-12-01

    Thanks to its good performance on object recognition, sparse representation has recently been widely studied in the area of visual object tracking. Up to now, little attention has been paid to the complexity of sparse representation, while most works are focused on the performance improvement. By reducing the computation load related to sparse representation hundreds of times, this paper proposes by far the most computationally efficient tracking approach based on sparse representation. The proposal simply consists of two stages of sparse representation, one is for object detection and the other for object validation. Experimentally, it achieves better performance than some state-of-the-art methods in both accuracy and speed.

  2. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811

  3. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods. PMID:24231870

  4. Color demosaicking via robust adaptive sparse representation

    NASA Astrophysics Data System (ADS)

    Huang, Lili; Xiao, Liang; Chen, Qinghua; Wang, Kai

    2015-09-01

    A single sensor camera can capture scenes by means of a color filter array. Each pixel samples only one of the three primary colors. We use a color demosaicking (CDM) technique to produce full color images and propose a robust adaptive sparse representation model for high quality CDM. The data fidelity term is characterized by l1 norm to suppress the heavy-tailed visual artifacts with an adaptively learned dictionary, while the regularization term is encouraged to seek sparsity by forcing sparse coding close to its nonlocal means to reduce coding errors. Based on the classical quadratic penalty function technique in optimization and an operator splitting method in convex analysis, we further present an effective iterative algorithm to solve the variational problem. The efficiency of the proposed method is demonstrated by experimental results with simulated and real camera data.

  5. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously proposed K-SVD-based grayscale image denoising algorithm. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  6. Neonatal Atlas Construction Using Sparse Representation

    PubMed Central

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, anatomical feature constraints on the group structure of representations, as well as the overlapping of neighboring patches, are imposed to ensure anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast to construct a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883

  7. Voxel selection in FMRI data analysis based on sparse representation.

    PubMed

    Li, Yuanqing; Namburi, Praneeth; Yu, Zhuliang; Guan, Cuntai; Feng, Jianfeng; Gu, Zhenghui

    2009-10-01

    Multivariate pattern analysis approaches toward detection of brain regions from fMRI data have been gaining attention recently. In this study, we introduce an iterative sparse-representation-based algorithm for detection of voxels in functional MRI (fMRI) data with task-relevant information. In each iteration of the algorithm, a linear programming problem is solved and a sparse weight vector is subsequently obtained. The final weight vector is the mean of those obtained in all iterations. The characteristics of our algorithm are as follows: 1) the weight vector (output) is sparse; 2) the magnitude of each entry of the weight vector represents the significance of its corresponding variable or feature in a classification or regression problem; and 3) due to the convergence of this algorithm, a stable weight vector is obtained. To demonstrate the validity of our algorithm and illustrate its application, we apply the algorithm to the Pittsburgh Brain Activity Interpretation Competition 2007 fMRI dataset to select the voxels that are the most relevant to the tasks of the subjects. Based on this dataset, the aforementioned characteristics of our algorithm are analyzed, and a comparison between our method and the univariate general-linear-model-based statistical parametric mapping is performed. Using our method, a combination of voxels is selected based on the principle of effective/sparse representation of a task. Data analysis results in this paper show that this combination of voxels is suitable for decoding tasks and demonstrate the effectiveness of our method. PMID:19567340

  8. Automatic target recognition via sparse representations

    NASA Astrophysics Data System (ADS)

    Estabridis, Katia

    2010-04-01

    Automatic target recognition (ATR) based on the emerging technology of Compressed Sensing (CS) can considerably improve accuracy, speed, and cost associated with these types of systems. An image-based ATR algorithm has been built upon this new theory, which can perform target detection and recognition in a low dimensional space. Compressed dictionaries (A) are formed to include rotational information for a scale of interest. The algorithm seeks to identify y (the test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) (n < m) and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1 minimization techniques. Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was utilized to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges up to 3 km for both image modalities.
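
    A compact orthogonal matching pursuit sketch for the y = Ax recovery step described above. The random dictionary and sparsity level are placeholders; a real ATR dictionary would hold compressed, rotated target templates, and this is not the NVESD evaluation code.

      # Greedy OMP: pick the atom most correlated with the residual, then re-fit on the support.
      import numpy as np

      def omp(A, y, n_nonzero):
          residual, support = y.copy(), []
          for _ in range(n_nonzero):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256))
      A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary atoms
      x_true = np.zeros(256)
      x_true[[10, 80, 200]] = [1.0, -0.7, 0.5]       # sparse ground truth
      y = A @ x_true
      x_hat = omp(A, y, n_nonzero=3)
      print("recovered support:", np.flatnonzero(x_hat))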

  9. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We proposed an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by the changing of the illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations the data can be transformed to higher dimensions with sparse constraints and become more separated. K-SVD algorithm is employed to find sparse representations and corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries. Classification can be achieved based on comparing the reconstructive errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.

  10. Robust visual tracking of infrared object via sparse representation model

    NASA Astrophysics Data System (ADS)

    Ma, Junkai; Liu, Haibo; Chang, Zheng; Hui, Bin

    2014-11-01

    In this paper, we propose a robust tracking method for infrared objects. We introduce an appearance model and sparse representation into the particle filter framework to achieve this goal. The mechanism behind this method is to represent every candidate image patch as a linear combination of bases in the subspace spanned by the target templates. The natural property that the coefficient vector must be sparse if the candidate image patch is the true target underpins the success of our algorithm. Firstly, the target is indicated manually in the first frame of the video, and the dictionary is constructed using the appearance model of the target templates. Secondly, candidate image patches are selected in the following frames and their sparse coefficient vectors are calculated via an l1-norm minimization algorithm. According to the sparse coefficient vectors, the best candidate is determined to be the target. Finally, the target templates are updated dynamically to cope with appearance changes during tracking. This paper also addresses the problems of scale change and rotation of the target during tracking. Theoretical analysis and experimental results show that the proposed algorithm is effective and robust.

  11. Subspace segmentation by dense block and sparse representation.

    PubMed

    Tang, Kewei; Dunson, David B; Su, Zhixun; Liu, Risheng; Zhang, Jie; Dong, Jiangxin

    2016-03-01

    Subspace segmentation is a fundamental topic in computer vision and machine learning. However, the success of many popular methods is about independent subspace segmentation instead of the more flexible and realistic disjoint subspace segmentation. Focusing on the disjoint subspaces, we provide theoretical and empirical evidence of inferior performance for popular algorithms such as LRR. To solve these problems, we propose a novel dense block and sparse representation (DBSR) for subspace segmentation and provide related theoretical results. DBSR minimizes a combination of the ℓ1,1-norm and the maximum singular value of the representation matrix, leading to a combination of dense block and sparsity. We provide experimental results for synthetic and benchmark data showing that our method can outperform the state-of-the-art. PMID:26720247

  12. Image inpainting based on sparse representations with a perceptual metric

    NASA Astrophysics Data System (ADS)

    Ogawa, Takahiro; Haseyama, Miki

    2013-12-01

    This paper presents an image inpainting method based on sparse representations optimized with respect to a perceptual metric. In the proposed method, the structural similarity (SSIM) index is utilized as a criterion to optimize the representation performance of image data. Specifically, the proposed method enables the formulation of two important procedures in the sparse representation problem, 'estimation of sparse representation coefficients' and 'update of the dictionary', based on the SSIM index. Then, using the generated dictionary, approximation of target patches including missing areas via the SSIM-based sparse representation becomes feasible. Consequently, image inpainting for which procedures are totally derived from the SSIM index is realized. Experimental results show that the proposed method enables successful inpainting of missing areas.

  13. Sparse and redundant representations for inverse problems and recognition

    NASA Astrophysics Data System (ADS)

    Patel, Vishal M.

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows for the approximation inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction without explicit knowledge of the noise variance using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We will demonstrate by experiments that this new technique is more flexible to work with either random or restricted sampling scenarios better than its competitors. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed

  14. Group-based sparse representation for image restoration.

    PubMed

    Zhang, Jian; Zhao, Debin; Gao, Wen

    2014-08-01

    Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity in dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationship among patches, resulting in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of a group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation modeling of natural images, called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the domain of groups, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. In addition, an effective self-adaptive dictionary learning method for each group with low complexity is designed, rather than dictionary learning from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to solve the proposed GSR-driven ℓ0 minimization problem for image restoration efficiently. Extensive experiments on image inpainting, image deblurring and image compressive sensing recovery manifest that the proposed GSR modeling outperforms many current state-of-the-art schemes in both peak signal-to-noise ratio and visual perception. PMID:24835225

  15. Maximum margin sparse representation discriminative mapping with application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Cai, Yunze; Xu, Xiaoming

    2013-02-01

    Sparse subspace learning has drawn more and more attention recently. We propose a novel sparse subspace learning algorithm called maximum margin sparse representation discriminative mapping (MSRDM), which adds the discriminative information into sparse neighborhood preservation. Based on combination of maximum margin discriminant criterion and sparse representation, MSRDM can preserve both local geometry structure and classification information. MSRDM can avoid the small sample size problem in face recognition naturally and the computation is efficient. To improve face recognition performance, we propose to integrate Gabor-like complex wavelet and natural image features by complex vectors as input features of MSRDM. Experimental results on ORL, UMIST, Yale, and PIE face databases demonstrate the effectiveness of the proposed face recognition method.

  16. Robust Fringe Projection Profilometry via Sparse Representation.

    PubMed

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail to perform if the captured fringe images contain a complex scene, such as multiple or occluded objects. This introduces great difficulty into the phase unwrapping process of an FPP system and can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. This information is then used to assist the phase unwrapping process in dealing with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images contain a simple or a complex scene, or are affected by the ambient lighting of the working environment. PMID:26890867

  17. Learning joint intensity-depth sparse representations.

    PubMed

    Tosic, Ivana; Drewes, Sarah

    2014-05-01

    This paper presents a method for learning overcomplete dictionaries of atoms composed of two modalities that describe a 3D scene: 1) image intensity and 2) scene depth. We propose a novel joint basis pursuit (JBP) algorithm that finds related sparse features in two modalities using conic programming and we integrate it into a two-step dictionary learning algorithm. The JBP differs from related convex algorithms because it finds joint sparsity models with different atoms and different coefficient values for intensity and depth. This is crucial for recovering generative models where the same sparse underlying causes (3D features) give rise to different signals (intensity and depth). We give a bound for recovery error of sparse coefficients obtained by JBP, and show numerically that JBP is superior to the group lasso algorithm. When applied to the Middlebury depth-intensity database, our learning algorithm converges to a set of related features, such as pairs of depth and intensity edges or image textures and depth slants. Finally, we show that JBP outperforms state of the art methods on depth inpainting for time-of-flight and Microsoft Kinect 3D data. PMID:24723574

  18. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in the color image analysis and processing domain. PMID:25643407

  19. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    An approach is defined that describes a method of iterating over massively large arrays containing sparse data using an approach that is implementation independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory. This enables this approach to be backward compatible with existing schemes for representing sparse arrays as well as new approaches. What is novel here is a new approach for efficiently iterating over sparse arrays that is independent of the underlying memory layout representation of the array. A functional interface is defined for implementing sparse arrays in any modern programming language with a particular focus for the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix vector product into this representation for both the distributed and not-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program that JPL and our current program are engaged in. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA for its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
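
    A language-neutral sketch of the decoupling idea described above, written in Python rather than Chapel: two storage layouts expose the same iterator of (row, column, value) triples, so client code such as a matrix-vector product never depends on how the sparse array is laid out in memory. The class and method names are invented for illustration.

      # Iterating over sparse data independently of the underlying memory layout.
      from typing import Iterator, Tuple

      class DictOfKeysMatrix:
          # Sparse matrix stored as {(row, col): value}
          def __init__(self, shape, entries):
              self.shape, self.entries = shape, dict(entries)
          def nonzeros(self) -> Iterator[Tuple[int, int, float]]:
              for (i, j), v in self.entries.items():
                  yield i, j, v

      class CSRMatrix:
          # Sparse matrix stored in compressed-sparse-row arrays
          def __init__(self, shape, indptr, indices, data):
              self.shape, self.indptr, self.indices, self.data = shape, indptr, indices, data
          def nonzeros(self) -> Iterator[Tuple[int, int, float]]:
              for i in range(self.shape[0]):
                  for k in range(self.indptr[i], self.indptr[i + 1]):
                      yield i, self.indices[k], self.data[k]

      def matvec(A, x):
          # Matrix-vector product written only against the shared iterator
          y = [0.0] * A.shape[0]
          for i, j, v in A.nonzeros():
              y[i] += v * x[j]
          return y

      x = [1.0, 2.0, 3.0]
      dok = DictOfKeysMatrix((2, 3), {(0, 0): 1.0, (1, 2): 4.0})
      csr = CSRMatrix((2, 3), indptr=[0, 1, 2], indices=[0, 2], data=[1.0, 4.0])
      print(matvec(dok, x), matvec(csr, x))   # same result from both layouts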

  20. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, the trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, the sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coded method which takes the spatial neighborhood information of the image patch and the computation burden into consideration is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experiment results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness.

  1. SAR target classification based on multiscale sparse representation

    NASA Astrophysics Data System (ADS)

    Ruan, Huaiyu; Zhang, Rong; Li, Jingge; Zhan, Yibing

    2016-03-01

    We propose a novel multiscale sparse representation approach for SAR target classification. It firstly extracts the dense SIFT descriptors on multiple scales, then trains a global multiscale dictionary by sparse coding algorithm. After obtaining the sparse representation, the method applies spatial pyramid matching (SPM) and max pooling to summarize the features for each image. The proposed method can provide more information and descriptive ability than single-scale ones. Moreover, it costs less extra computation than existing multiscale methods which compute a dictionary for each scale. The MSTAR database and ship database collected from TerraSAR-X images are used in classification setup. Results show that the best overall classification rate of the proposed approach can achieve 98.83% on the MSTAR database and 92.67% on the TerraSAR-X ship database.

  2. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226

  3. Feature selection and multi-kernel learning for sparse representation on a manifold.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. PMID:24333479

  4. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements on remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking as a study case Landsat ETM+ (with a spatial resolution of 30m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250m ~ 1km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat

  5. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even small amounts of noise or occlusion in the images can compromise recognition accuracy. Lately, sparse encoding based classification algorithms have given promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology by modeling the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with results clearly superior to common state-of-the-art methodologies in different scenarios.

  6. Joint Low-Rank and Sparse Principal Feature Coding for Enhanced Robust Representation and Visual Classification.

    PubMed

    Zhang, Zhao; Li, Fanzhang; Zhao, Mingbo; Zhang, Li; Yan, Shuicheng

    2016-06-01

    Recovering low-rank and sparse subspaces jointly for enhanced robust representation and classification is discussed. Technically, we first propose a transductive low-rank and sparse principal feature coding (LSPFC) formulation that decomposes given data into a component part that encodes low-rank sparse principal features and a noise-fitting error part. To well handle the outside data, we then present an inductive LSPFC (I-LSPFC). I-LSPFC incorporates embedded low-rank and sparse principal features by a projection into one problem for direct minimization, so that the projection can effectively map both inside and outside data into the underlying subspaces to learn more powerful and informative features for representation. To ensure that the learned features by I-LSPFC are optimal for classification, we further combine the classification error with the feature coding error to form a unified model, discriminative LSPFC (D-LSPFC), to boost performance. The model of D-LSPFC seamlessly integrates feature coding and discriminative classification, so the representation and classification powers can be enhanced. The proposed approaches are more general, and several recent existing low-rank or sparse coding algorithms can be embedded into our problems as special cases. Visual and numerical results demonstrate the effectiveness of our methods for representation and classification. PMID:27046875

  7. Automatic stellar spectral classification via sparse representations and dictionary learning

    NASA Astrophysics Data System (ADS)

    Díaz-Hernández, R.; Peregrina-Barreto, H.; Altamirano-Robles, L.; González-Bernal, J. A.; Ortiz-Esquivel, A. E.

    2014-11-01

    Stellar classification is an important topic in astronomical tasks such as the study of stellar populations. However, stellar classification of a region of the sky is a time-consuming process due to the large number of objects present in an image. Therefore, automatic techniques to speed up the process are required. In this work, we study the application of sparse representation and dictionary learning for automatic spectral stellar classification. Our dataset consists of 529 calibrated stellar spectra of classes B to K, belonging to the Pulkovo Spectrophotometric catalog, in the 3400-5500Å range. These stellar spectra are used for both training and testing of the proposed methodology. The sparse technique is applied by using the greedy algorithm OMP (Orthogonal Matching Pursuit) for finding an approximate solution, and K-SVD (K-Singular Value Decomposition) for the dictionary learning step. Thus, sparse classification is based on the recognition of the common characteristics of a particular stellar type through the construction of a trained basis. In this work, we propose a classification criterion that evaluates the results of the sparse representation techniques and determines the final classification of the spectra. This methodology demonstrates its ability to achieve levels of classification comparable with previously reported automatic methodologies such as the Maximum Correlation Coefficient (MCC) and Artificial Neural Networks (ANN).
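    As a hedged illustration of the OMP step named in this record, the following numpy sketch implements a plain Orthogonal Matching Pursuit and recovers a synthetic 3-sparse signal; the random dictionary stands in for a K-SVD-trained basis of stellar spectra, and none of the names come from the paper.

```python
# A compact numpy sketch of Orthogonal Matching Pursuit, the greedy solver the
# record uses for its sparse coding step; the dictionary here is random for
# illustration rather than a trained basis of stellar spectra.
import numpy as np


def omp(D, y, n_nonzero):
    """Greedy OMP: D has unit-norm columns, shape (n_features, n_atoms)."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the whole support (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D[:, support] @ coef
    return x, residual


rng = np.random.default_rng(0)
D = rng.standard_normal((200, 50))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = D @ x_true
x_hat, r = omp(D, y, n_nonzero=3)
print(np.flatnonzero(x_hat), np.linalg.norm(r))   # recovers atoms 3, 17, 42
```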

  8. Sparse Representation for Prediction of HIV-1 Protease Drug Resistance.

    PubMed

    Yu, Xiaxia; Weber, Irene T; Harrison, Robert W

    2013-01-01

    HIV rapidly evolves drug resistance in response to antiviral drugs used in AIDS therapy. Estimating the specific resistance of a given strain of HIV to individual drugs from sequence data has important benefits for both the therapy of individual patients and the development of novel drugs. We have developed an accurate classification method based on the sparse representation theory, and demonstrate that this method is highly effective with HIV-1 protease. The protease structure is represented using our newly proposed encoding method based on Delaunay triangulation, and combined with the mutated amino acid sequences of known drug-resistant strains to train a machine-learning algorithm both for classification and regression of drug-resistant mutations. An overall cross-validated classification accuracy of 97% is obtained when trained on a publicly available database of approximately 1.5×10⁴ known sequences (Stanford HIV database http://hivdb.stanford.edu/cgi-bin/GenoPhenoDS.cgi). Resistance to four FDA-approved drugs is computed and comparisons with other algorithms demonstrate that our method shows significant improvements in classification accuracy. PMID:24910813

  9. Sparse Representation for Prediction of HIV-1 Protease Drug Resistance

    PubMed Central

    Yu, Xiaxia; Weber, Irene T.; Harrison, Robert W.

    2013-01-01

    HIV rapidly evolves drug resistance in response to antiviral drugs used in AIDS therapy. Estimating the specific resistance of a given strain of HIV to individual drugs from sequence data has important benefits for both the therapy of individual patients and the development of novel drugs. We have developed an accurate classification method based on the sparse representation theory, and demonstrate that this method is highly effective with HIV-1 protease. The protease structure is represented using our newly proposed encoding method based on Delaunay triangulation, and combined with the mutated amino acid sequences of known drug-resistant strains to train a machine-learning algorithm both for classification and regression of drug-resistant mutations. An overall cross-validated classification accuracy of 97% is obtained when trained on a publicly available database of approximately 1.5×10⁴ known sequences (Stanford HIV database http://hivdb.stanford.edu/cgi-bin/GenoPhenoDS.cgi). Resistance to four FDA-approved drugs is computed and comparisons with other algorithms demonstrate that our method shows significant improvements in classification accuracy. PMID:24910813

  10. Inpainting with sparse linear combinations of exemplars

    SciTech Connect

    Wohlberg, Brendt

    2008-01-01

    We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling-in of missing regions that other exemplar-based methods face. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
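    The following is a minimal sketch of the record's core idea under simplifying assumptions: the observed pixels of a damaged block are fit by a sparse (lasso) combination of exemplar blocks, and the hole is filled from the same combination. The synthetic blocks and the use of scikit-learn's Lasso are illustrative choices, not the paper's implementation.

```python
# Hedged sketch: approximate the known pixels of a damaged block as a sparse
# linear combination of exemplar blocks, then read the missing pixels off that
# combination. Data and block size are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
block = 8 * 8                                   # flattened 8x8 patches
exemplars = rng.standard_normal((block, 120))   # columns = candidate exemplar blocks
target = exemplars[:, [5, 40]] @ np.array([0.6, 0.4])  # a block made of two exemplars

mask = np.ones(block, dtype=bool)
mask[20:36] = False                             # pixels to be inpainted

# Fit sparse coefficients using only the observed pixels.
lasso = Lasso(alpha=1e-3, max_iter=10000)
lasso.fit(exemplars[mask], target[mask])

# Fill the hole from the same sparse combination of exemplars.
reconstruction = exemplars @ lasso.coef_ + lasso.intercept_
filled = target.copy()
filled[~mask] = reconstruction[~mask]
print(np.abs(filled - target).max())            # inpainting error on the hole
```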

  11. Sparse signal representation and its applications in ultrasonic NDE.

    PubMed

    Zhang, Guang-Ming; Zhang, Cheng-Zhong; Harvey, David M

    2012-03-01

    Many sparse signal representation (SSR) algorithms have been developed in the past decade. The advantages of SSR, such as compact representations and super resolution, lead to state-of-the-art performance of SSR for processing ultrasonic non-destructive evaluation (NDE) signals. Choosing a suitable SSR algorithm and designing an appropriate overcomplete dictionary are key to success. After a brief review of sparse signal representation methods and the design of overcomplete dictionaries, this paper addresses the recent accomplishments of SSR for processing ultrasonic NDE signals. The advantages and limitations of SSR algorithms and of various overcomplete dictionaries widely used in ultrasonic NDE applications are explored in depth. Their performance improvement compared to conventional signal processing methods in many applications, such as ultrasonic flaw detection and noise suppression, echo separation and echo estimation, and ultrasonic imaging, is investigated. The challenging issues met in practical ultrasonic NDE applications, for example the design of a good dictionary, are discussed. Representative experimental results are presented for demonstration. PMID:22040650

  12. Supervised Discriminative Group Sparse Representation for Mild Cognitive Impairment Diagnosis

    PubMed Central

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2014-01-01

    Research on the early detection of Mild Cognitive Impairment (MCI), a prodromal stage of Alzheimer’s Disease (AD), with resting-state functional Magnetic Resonance Imaging (rs-fMRI) has been of great interest for the last decade. As witnessed by recent studies, functional connectivity is a useful concept for extracting brain network features and finding biomarkers for brain disease diagnosis. However, estimating functional connectivity from rs-fMRI remains challenging due to the inherently high-dimensional nature of the problem. In order to tackle this problem, we utilize a group sparse representation along with a structural equation model. Unlike the conventional group sparse representation method, which does not explicitly consider class-label information that can help enhance diagnostic performance, in this paper we propose a novel supervised discriminative group sparse representation method that penalizes a large within-class variance and a small between-class variance of the connectivity coefficients. Thanks to the newly devised penalization terms, we can learn connectivity coefficients that are similar within the same class and distinct between classes, thus helping to enhance diagnostic accuracy. The proposed method also allows the learned common network structure to preserve network-specific and label-related characteristics. In our experiments on the rs-fMRI data of 37 subjects (12 MCI; 25 healthy normal controls) with a cross-validation technique, we demonstrated the validity and effectiveness of the proposed method, showing a diagnostic accuracy of 89.19% and a sensitivity of 0.9167. PMID:25501275

  13. MR image super-resolution reconstruction using sparse representation, nonlocal similarity and sparse derivative prior.

    PubMed

    Zhang, Di; He, Jiazhong; Zhao, Yun; Du, Minghui

    2015-03-01

    In magnetic resonance (MR) imaging, image spatial resolution is determined by various instrumental limitations and physical considerations. This paper presents a new algorithm for producing a high-resolution version of a low-resolution MR image. The proposed method consists of two consecutive steps: (1) reconstruct a high-resolution MR image from a given low-resolution observation by solving a joint sparse representation and nonlocal similarity L1-norm minimization problem; and (2) apply sparse derivative prior based post-processing to suppress blurring effects. Extensive experiments on simulated brain MR images and two real clinical MR image datasets validate that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both quantitative measures and visual perception. PMID:25638262

  14. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

    There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder. PMID:25515941
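    A small numpy sketch of the weighting idea described here, under stated assumptions: a one-hidden-layer autoencoder whose reconstruction error is down-weighted for noisier samples, with an L1 activity penalty standing in for the usual sparsity term. The architecture, weights and penalty are illustrative, not the paper's model.

```python
# Hedged sketch: weighted-reconstruction-error sparse autoencoder in numpy.
# Noisier samples contribute less to the gradient, mimicking the record's
# "selective attention"; the L1 activity penalty is a simplification.
import numpy as np

rng = np.random.default_rng(10)
n_samples, n_visible, n_hidden = 400, 30, 10

clean = rng.standard_normal((n_samples, n_visible))
noise_level = rng.uniform(0.0, 2.0, size=n_samples)          # per-sample noise
X = clean + noise_level[:, None] * rng.standard_normal((n_samples, n_visible))
weights = (1.0 / (1.0 + noise_level))[:, None]                # down-weight noisy inputs

W1 = 0.1 * rng.standard_normal((n_visible, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_visible)); b2 = np.zeros(n_visible)
lr, l1_penalty = 0.01, 1e-3

for epoch in range(200):
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))                   # encoder
    X_hat = H @ W2 + b2                                        # linear decoder
    err = weights * (X_hat - X)                                # weighted error
    loss = 0.5 * np.sum(weights * (X_hat - X) ** 2) + l1_penalty * np.sum(np.abs(H))
    # Backpropagation of the weighted loss plus the L1 activity penalty.
    dW2 = H.T @ err;  db2 = err.sum(axis=0)
    dH = err @ W2.T + l1_penalty * np.sign(H)
    dZ1 = dH * H * (1.0 - H)
    dW1 = X.T @ dZ1;  db1 = dZ1.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad / n_samples
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  weighted loss per sample {loss / n_samples:.3f}")
```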

  15. Dictionary learning method for joint sparse representation-based image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. JSR indicates that different signals from the various sensors of the same scene form an ensemble. These signals have a common sparse component and each individual signal has an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), for JSR, we propose a novel dictionary learning method (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure once with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm which is often used in previous JSR-based fusion algorithms. To capture the image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries. MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Some experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.
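    As a hedged sketch of the MOD-style alternation this record builds on (without the JSR extension), the following numpy/scikit-learn code alternates OMP sparse coding with the closed-form MOD dictionary update D = Yᵀ(X⁺)ᵀ on synthetic data; all sizes and names are illustrative.

```python
# Minimal MOD dictionary learning loop: sparse-code the signals with OMP, then
# update the whole dictionary as the least-squares fit to the current codes.
# Synthetic data only; the record's JSR structure is not reproduced here.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(2)
n_features, n_atoms, n_signals, sparsity = 20, 40, 300, 3

D_true = rng.standard_normal((n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
codes_true = np.zeros((n_signals, n_atoms))
for row in codes_true:
    row[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
Y = codes_true @ D_true.T                       # signals, shape (n_signals, n_features)

D = rng.standard_normal((n_features, n_atoms))  # random initial dictionary
D /= np.linalg.norm(D, axis=0)
for it in range(15):
    # Sparse coding step: sparse_encode expects the dictionary as rows of atoms.
    X = sparse_encode(Y, D.T, algorithm='omp', n_nonzero_coefs=sparsity)
    err = np.linalg.norm(Y - X @ D.T) / np.linalg.norm(Y)
    print(f"iter {it:2d}  relative reconstruction error {err:.3f}")
    # MOD update: D = Y^T (X^+)^T, the least-squares dictionary for fixed codes.
    D = Y.T @ np.linalg.pinv(X).T
    D /= np.linalg.norm(D, axis=0) + 1e-12
```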

  16. Blind deconvolution subject to sparse representation for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Dai, Qionghai; Cai, Qiang; Guo, Peiyuan; Liu, Zaiwen

    2013-01-01

    Blind deconvolution is an effective fluorescence microscopy image processing technique to improve the quality of degraded digital images resulting from photon counting noise and out-of-focus blur. To solve the severely ill-posed deconvolution problem, in this paper we propose an alternating minimization blind deconvolution method which uses sparse representation as a constraint to confine the solution space of the traditional Richardson-Lucy method, and a Gaussian model as the initial estimate of the point spread function (PSF). We assume that Poisson noise dominates during the course of imaging. The maximum-likelihood estimation of a fluorescence image and the corresponding PSF is developed. By solving the Euler-Lagrange equation of the total cost function, which includes the data term obtained from the hypothetical Poisson noise distribution model and the regularization term corresponding to the sparse representation constraint, and using the gradient descent method, we obtain the iterative equations for the original fluorescence image and the PSF, respectively. Compared with related blind deconvolution methods, our model shows superior performance in terms of both objective criteria and subjective human vision when processing simulated and real degraded fluorescence microscopy images.
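    For orientation, here is a simplified sketch of the Richardson-Lucy core that the record constrains: only the plain multiplicative update with a known Gaussian PSF is shown, and the blind PSF estimation and sparse-representation regularizer from the paper are omitted.

```python
# Hedged sketch: non-blind Richardson-Lucy deconvolution under Poisson noise,
# with a Gaussian PSF as in the record's initialization. The sparse prior and
# PSF update are deliberately left out for brevity.
import numpy as np
from scipy.signal import fftconvolve


def gaussian_psf(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()


def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    estimate = np.full_like(blurred, blurred.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        predicted = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (predicted + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate


rng = np.random.default_rng(3)
truth = np.zeros((64, 64))
truth[20:24, 30:34] = 5.0                         # a bright fluorescent spot
psf = gaussian_psf()
blurred = fftconvolve(truth, psf, mode='same').clip(min=0)
observed = rng.poisson(blurred * 50) / 50.0       # simulated photon-counting noise
restored = richardson_lucy(observed, psf)
print(float(np.abs(restored - truth).mean()))     # mean absolute restoration error
```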

  17. Magnetic resonance brain tissue segmentation based on sparse representations

    NASA Astrophysics Data System (ADS)

    Rueda, Andrea

    2015-12-01

    Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it makes it possible to characterize pathologies through imaging measures (biomarkers). In brain imaging, segmentation of the main tissues or of specific structures is challenging, due to anatomic variability and complexity and to the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are related to tissue labels at the level of small patches, and this information is gathered in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can then be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to the original segmentations).

  18. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  19. Classification of transient signals using sparse representations over adaptive dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Myers, Kary L.; Pawley, Norma H.

    2011-06-01

    Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same function class as the dictionary atoms. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in background content and noise levels. The target classification decision is obtained in almost real-time via a parallel, vectorized implementation.

  20. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable

    PubMed Central

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition. PMID:26950589
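    A hedged sketch of the peak-picking construction described here: a plain Fourier spectrogram of a synthetic chirp is reduced to roughly N strongest time-frequency peaks per second (kept globally for brevity), zeroing the rest; the chirp and parameters are placeholders for the study's recordings.

```python
# Hedged sketch of an "acoustic sketch": keep only the strongest spectrogram
# bins, averaging about features_per_second peaks per second of signal.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * (300 + 200 * t) * t)        # a simple chirp

freqs, times, S = spectrogram(signal, fs=fs, nperseg=512, noverlap=256)

features_per_second = 10
n_keep = int(features_per_second * times[-1])           # total peaks to keep
sketch = np.zeros_like(S)
flat_idx = np.argsort(S, axis=None)[-n_keep:]           # strongest bins overall
sketch.flat[flat_idx] = S.flat[flat_idx]

kept = np.count_nonzero(sketch)
print(f"kept {kept} of {S.size} time-frequency bins ({kept / S.size:.2%} of the bins)")
```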

  1. 3D ear identification based on sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying

    2014-01-01

    Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research field due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient enough to cope with the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm. PMID:24740247
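    As a generic, hedged illustration of the SRC decision rule used at the classification stage, the sketch below codes a probe over all training features with an l1-regularized fit (scikit-learn's Lasso as a stand-in for the paper's l1 solver) and picks the class with the smallest class-restricted residual; the synthetic features are not ear descriptors.

```python
# Hedged SRC sketch: l1-regularized coding of the probe over the training
# matrix, followed by classification via class-wise reconstruction residuals.
import numpy as np
from sklearn.linear_model import Lasso


def src_classify(train_feats, train_labels, probe, alpha=0.01):
    """train_feats: (n_features, n_train); probe: (n_features,)."""
    lasso = Lasso(alpha=alpha, max_iter=10000, fit_intercept=False)
    lasso.fit(train_feats, probe)
    coef = lasso.coef_
    residuals = {}
    for c in np.unique(train_labels):
        coef_c = np.where(train_labels == c, coef, 0.0)   # keep class-c atoms only
        residuals[c] = np.linalg.norm(probe - train_feats @ coef_c)
    return min(residuals, key=residuals.get), residuals


rng = np.random.default_rng(4)
n_features, per_class = 64, 12
centers = rng.standard_normal((3, n_features))
train = np.hstack([centers[c][:, None] + 0.1 * rng.standard_normal((n_features, per_class))
                   for c in range(3)])
labels = np.repeat([0, 1, 2], per_class)
probe = centers[1] + 0.1 * rng.standard_normal(n_features)
pred, res = src_classify(train, labels, probe)
print(pred, {c: round(r, 3) for c, r in res.items()})     # expect class 1
```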

  2. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on the modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting the dictionary from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database as well as Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The simulation results show that the coefficients of the MSRR method contain classification information, which makes it possible to improve computing speed and achieve a satisfactory recognition result. PMID:26880878

  3. Pedestrian detection from thermal images: A sparse representation based approach

    NASA Astrophysics Data System (ADS)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in unimodal and multimodal frameworks, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and an individual dictionary, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.

  4. A Modified Sparse Representation Method for Facial Expression Recognition

    PubMed Central

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on the modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting the dictionary from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database as well as Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The simulation results show that the coefficients of the MSRR method contain classification information, which makes it possible to improve computing speed and achieve a satisfactory recognition result. PMID:26880878

  5. Inpainting of historical seismograms using sparse representation method

    NASA Astrophysics Data System (ADS)

    Wang, Lifu; Sun, Yi; Cai, Xiaogang

    2015-01-01

    This paper presents a method of inpainting historical seismograms recorded by a pen-and-paper drum-type seismograph. In the seismogram, some portions of the wave may be lost or distorted owing to time marks or violent shaking. In this study, the seismic waveform is divided into several frames of equal length, and the lost or distorted portions are restored frame by frame. Because a seismogram contains several repetitive patterns in the entire waveform, each frame can be sparsely represented on the basis of these patterns. Therefore, the sparse representation model is employed to represent historical seismograms. In addition, an inpainting model that employs sparsity as a prior is formulated, and it is used to restore the lost portions by solving an L0-norm minimization problem. However, this minimization problem may be ill-posed and result in an incorrect outcome if the missing interval of the wave is very long. Therefore, to solve this ill-posed problem, a prior based on the Fourier spectrum of the waveform is added to the inpainting method. Simulation results show that the proposed inpainting method can restore the missing wave well.

  6. Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries

    PubMed Central

    Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.

    2015-01-01

    Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slowly varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494

  7. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and periodical variation of the load distribution and of the impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and the impulse missing phenomenon reduce the effectiveness of the commonly used demodulation method for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are directly identified from the fault signal by a correlation filtering method. This leads to a high similarity between the atoms and the defect-induced impulses, and also to a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and calculation speed of the sparse coefficient solution, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under conditions of large rolling element sliding and low signal-to-noise ratio.
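    The sketch below illustrates, under simplifying assumptions, the dictionary construction described here: atoms are unit-norm impulse responses of a damped second-order system on a small grid of natural frequencies and damping ratios, and a plain matching pursuit picks out a synthetic fault impulse; the correlation-filtering identification and segment-wise processing are not reproduced.

```python
# Hedged sketch: damped second-order impulse-response atoms plus a plain
# matching pursuit on one synthetic segment. Real signals would also need
# time-shifted copies of each atom.
import numpy as np

fs, atom_len = 12000, 256
t = np.arange(atom_len) / fs


def impulse_atom(f_n, zeta):
    """Unit-norm impulse response of a damped second-order system."""
    f_d = f_n * np.sqrt(1 - zeta**2)
    h = np.exp(-zeta * 2 * np.pi * f_n * t) * np.sin(2 * np.pi * f_d * t)
    return h / np.linalg.norm(h)


# Small grid of candidate (natural frequency, damping ratio) pairs.
atoms = np.array([impulse_atom(f, z)
                  for f in (2000.0, 3000.0, 4000.0)
                  for z in (0.02, 0.05, 0.1)])            # (n_atoms, atom_len)

# Synthetic faulty-bearing segment: one impulse from one atom, plus noise.
rng = np.random.default_rng(5)
signal = 0.05 * rng.standard_normal(atom_len)
signal += 1.0 * atoms[4]                                  # 3 kHz, zeta = 0.05

residual = signal.copy()
for step in range(3):                                     # matching pursuit
    corr = atoms @ residual
    k = int(np.argmax(np.abs(corr)))
    residual = residual - corr[k] * atoms[k]
    print(f"step {step}: picked atom {k}, amplitude {corr[k]:.2f}")
```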

  8. Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation.

    PubMed

    Guo, Yimo; Zhao, Guoying; Pietikainen, Matti

    2016-05-01

    In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) a salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in the spatial domain and the topological evolution information in the temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and the UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than those of the other methods under comparison. PMID:26955032

  9. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  10. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. PMID:27055224

  11. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations.

    PubMed

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method that converts the MMV problem to a single measurement vector (SMV) problem is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. The MMV problem is converted to an SMV problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for the sparse solution calculation. The complexity of the proposed algorithm is therefore lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of conventional sparsity-based DOA estimation approaches that the unknown directions must belong to a predefined discrete angular grid, and so can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on the SRBWEV algorithm. In this proposed algorithm, the largest and second-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further refine the DOA estimate, following the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521

  12. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations

    PubMed Central

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method that converts the MMV problem to a single measurement vector (SMV) problem is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. The MMV problem is converted to an SMV problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for the sparse solution calculation. The complexity of the proposed algorithm is therefore lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of conventional sparsity-based DOA estimation approaches that the unknown directions must belong to a predefined discrete angular grid, and so can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on the SRBWEV algorithm. In this proposed algorithm, the largest and second-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further refine the DOA estimate, following the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521

  13. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) with a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferable to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time-consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient. PMID:23286160
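    As a rough stand-in for the online update described here (K-SVD initialisation followed by block-coordinate descent updates), the sketch uses scikit-learn's MiniBatchDictionaryLearning with partial_fit to grow a dictionary batch by batch; the random vectors are placeholders for training shapes.

```python
# Hedged sketch of online dictionary updating: an initial fit followed by
# incremental partial_fit calls as new training "shapes" arrive. This is a
# stand-in for the paper's K-SVD plus block-coordinate descent scheme.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(6)
n_shape_points, dict_size = 60, 16

learner = MiniBatchDictionaryLearning(n_components=dict_size, alpha=0.5,
                                      batch_size=20, random_state=0)

# Initial batch of training "shapes" builds the first dictionary ...
initial_shapes = rng.standard_normal((200, n_shape_points))
learner.fit(initial_shapes)

# ... and later batches update it incrementally instead of re-training.
for batch_id in range(3):
    new_shapes = rng.standard_normal((50, n_shape_points))
    learner.partial_fit(new_shapes)
    codes = learner.transform(new_shapes)
    print(f"batch {batch_id}: dictionary {learner.components_.shape}, "
          f"mean nonzeros per code {np.count_nonzero(codes, axis=1).mean():.1f}")
```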

  14. Sparse representations and convex optimization as tools for LOFAR radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Girard, J. N.; Garsden, H.; Starck, J. L.; Corbel, S.; Woiselle, A.; Tasse, C.; McKean, J. P.; Bobin, J.

    2015-08-01

    Compressed sensing theory is gradually making its way into more and more astronomical inverse problems. We address here the application of sparse representations, convex optimization and proximal theory to radio interferometric imaging. First, we present the theory behind interferometric imaging, sparse representations and convex optimization, and second, we illustrate their application with numerical tests using SASIR, an implementation of FISTA, a forward-backward splitting algorithm, hosted in a LOFAR imager. Various tests have been conducted in Garsden et al., 2015. The main results are: i) an improved angular resolution (super resolution by a factor of ≈ 2) for point sources as compared to CLEAN on the same data, ii) correct photometry measurements on a field of point sources at high dynamic range, and iii) the imaging of extended sources with improved fidelity. SASIR provides better reconstructions (five times lower residuals) of the extended emission as compared to CLEAN. With the advent of large radio telescopes, there is scope for improving classical imaging methods with convex optimization methods combined with sparse representations.
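    As a minimal illustration of the forward-backward scheme (FISTA) underlying SASIR, the following numpy sketch solves a toy l1-regularised least-squares problem; the random matrix stands in for the interferometric measurement operator.

```python
# Compact FISTA sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1 on a random
# compressed-sensing toy problem (not visibility data).
import numpy as np


def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)    # momentum step
        x, t = x_next, t_next
    return x


rng = np.random.default_rng(7)
A = rng.standard_normal((128, 512))
x_true = np.zeros(512)
x_true[rng.choice(512, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(128)
x_hat = fista(A, b, lam=0.1)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```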

  15. Face sketch synthesis via sparse representation-based greedy search.

    PubMed

    Zhang, Shengchuan; Gao, Xinbo; Wang, Nannan; Li, Jie; Zhang, Mingjin

    2015-08-01

    Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle nonfacial factors, such as hair style, hairpins, and glasses, if these factors are not included in the training set. In addition, previous methods only work under well-controlled conditions and fail on images whose backgrounds and sizes differ from those of the training set. To this end, this paper presents a novel method that combines the similarity between different image patches with prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the search process. For a test photo patch, we first obtain its sparse coefficients via the learnt dictionary and then search for its nearest neighbors (candidate patches) among all the training photo patches using the sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest neighbor search area from a local region to the whole image without excessive computation time, and 2) our method can produce nonfacial factors that are not contained in the training set, is robust to image backgrounds, and can even ignore the alignment and size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics. PMID:25879946

  16. Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.

    PubMed

    Peng, Yong; Lu, Bao-Liang; Wang, Suhang

    2015-05-01

    Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among the existing graph-based learning models, low-rank representation (LRR) is a very competitive one, which has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated based on the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of the data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR can explicitly take the local manifold structure of the data into consideration, which can be identified by the geometric sparsity idea; specifically, the local tangent space of each data point is sought by solving a sparse representation objective. Therefore, the graph depicting the relationships among data points can be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines both the global information emphasized by the low-rank property and the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches. PMID:25634552

  17. Depth reconstruction from sparse samples: representation, algorithm, and sampling.

    PubMed

    Liu, Lee-Kang; Chan, Stanley H; Nguyen, Truong Q

    2015-06-01

    The rapid development of 3D technology and computer vision applications has motivated a thrust of methodologies for depth acquisition and estimation. However, existing hardware and software acquisition methods have limited performance due to poor depth precision, low resolution, and high computational cost. In this paper, we present a computationally efficient method to estimate dense depth maps from sparse measurements. There are three main contributions. First, we provide empirical evidence that depth maps can be encoded much more sparsely than natural images using common dictionaries, such as wavelets and contourlets. We also show that a combined wavelet-contourlet dictionary achieves better performance than using either dictionary alone. Second, we propose an alternating direction method of multipliers (ADMM) for depth map reconstruction. A multiscale warm start procedure is proposed to speed up the convergence. Third, we propose a two-stage randomized sampling scheme to optimally choose the sampling locations, thus maximizing the reconstruction performance for a given sampling budget. Experimental results show that the proposed method produces high-quality dense depth estimates, and is robust to noisy measurements. Applications to real data in stereo matching are demonstrated. PMID:25769151

  18. Extreme learning machine and adaptive sparse representation for image classification.

    PubMed

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency. PMID:27389571
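    A hedged sketch of the hybrid decision rule described here: an ELM (random hidden layer plus ridge output weights) answers high-margin queries, and low-margin queries fall back to an SRC step restricted to the ELM's top-ranked classes. The data, margin threshold and sub-dictionary size are illustrative choices, not the paper's settings.

```python
# Hedged sketch of an ELM/SRC hybrid: confident ELM outputs are kept, and
# uncertain queries are re-classified by l1 coding over an adaptive
# sub-dictionary of the ELM's top-ranked classes.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n_classes, per_class, n_features, n_hidden = 5, 30, 40, 200

# Synthetic class-clustered data.
centers = 3.0 * rng.standard_normal((n_classes, n_features))
X_train = np.vstack([centers[c] + rng.standard_normal((per_class, n_features))
                     for c in range(n_classes)])
y_train = np.repeat(np.arange(n_classes), per_class)

# ELM training: random projection + sigmoid + ridge-regularised least squares.
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)
H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))
T = np.eye(n_classes)[y_train]                       # one-hot targets
beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_hidden), H.T @ T)


def classify(x, margin_threshold=0.2, sub_classes=2):
    h = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    scores = h @ beta
    order = np.argsort(scores)[::-1]
    if scores[order[0]] - scores[order[1]] >= margin_threshold:
        return int(order[0]), "elm"                  # confident ELM decision
    # Fallback: SRC over a sub-dictionary of the top-ranked classes only.
    keep = np.isin(y_train, order[:sub_classes])
    D, labels = X_train[keep].T, y_train[keep]
    lasso = Lasso(alpha=0.05, max_iter=10000, fit_intercept=False)
    lasso.fit(D, x)
    best, best_res = None, np.inf
    for c in order[:sub_classes]:
        coef_c = np.where(labels == c, lasso.coef_, 0.0)
        res = np.linalg.norm(x - D @ coef_c)
        if res < best_res:
            best, best_res = int(c), res
    return best, "src"


query = centers[3] + rng.standard_normal(n_features)
print(classify(query))
```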

  19. Pavement crack characteristic detection based on sparse representation

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoming; Huang, Jianping; Liu, Wanyu; Xu, Mantao

    2012-12-01

    Pavement crack detection plays an important role in pavement maintenance and management. Laser-based three-dimensional (3D) pavement crack detection is a recent trend because it can discriminate dark areas that are not caused by pavement distress, such as tire marks, oil spills and shadows. In 3D pavement crack detection, the key task is to accurately extract the cracks from each pavement profile without distorting the profile itself. After analyzing the characteristics of the pavement profile signal and the variability of crack characteristics, a new method based on sparse representation is developed to decompose the pavement profile signal into a summation of the underlying pavement profile and the cracks. Based on these characteristics, a mixed dictionary is constructed from an over-complete exponential function and an over-complete trapezoidal membership function, and the signal is separated over this mixed dictionary with a matching pursuit algorithm. Experiments were conducted and promising results were obtained, showing that the method detects pavement cracks efficiently and achieves a good separation of cracks from the pavement profile without distorting it.
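
    A sketch of the signal-decomposition idea under stated assumptions: the parameter grids for the atoms (decay constants for the exponential profile atoms, top/base widths for the trapezoidal crack atoms) are illustrative, since the abstract does not give the paper's exact atom parameterization.

```python
import numpy as np

def trapezoid_atom(n, center, top, base):
    """Trapezoidal membership-function atom (models a crack-like indentation)."""
    t = np.arange(n, dtype=float)
    a, b = center - base / 2.0, center - top / 2.0
    c, d = center + top / 2.0, center + base / 2.0
    return np.clip(np.minimum((t - a) / (b - a), (d - t) / (d - c)), 0.0, 1.0)

def build_mixed_dictionary(n):
    """Over-complete dictionary: exponential atoms for the slowly varying
    pavement profile plus trapezoidal atoms for cracks."""
    atoms = []
    for tau in (5.0, 10.0, 20.0, 40.0):
        for c in range(0, n, 8):
            atoms.append(np.exp(-np.abs(np.arange(n) - c) / tau))
    for top, base in ((2, 6), (4, 10), (6, 16)):
        for c in range(0, n, 4):
            atoms.append(trapezoid_atom(n, c, top, base))
    D = np.array(atoms, dtype=float).T
    return D / np.linalg.norm(D, axis=0)

def matching_pursuit(signal, D, n_atoms=20):
    """Greedily decompose the profile signal over the mixed dictionary;
    the crack estimate is the sum of the selected trapezoidal atoms."""
    residual, coef = signal.astype(float).copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coef[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coef, residual
```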

  20. Robust ear recognition via nonnegative sparse representation of Gabor orientation information.

    PubMed

    Zhang, Baoqing; Mu, Zhichun; Zeng, Hui; Luo, Shuang

    2014-01-01

    Orientation information is critical to the accuracy of ear recognition systems. In this paper, a new feature extraction approach is investigated for ear recognition by using orientation information of Gabor wavelets. The proposed Gabor orientation feature can not only avoid too much redundancy in conventional Gabor feature but also tend to extract more precise orientation information of the ear shape contours. Then, Gabor orientation feature based nonnegative sparse representation classification (Gabor orientation + NSRC) is proposed for ear recognition. Compared with SRC in which the sparse coding coefficients can be negative, the nonnegativity of NSRC conforms to the intuitive notion of combining parts to form a whole and therefore is more consistent with the biological modeling of visual data. Additionally, the use of Gabor orientation features increases the discriminative power of NSRC. Extensive experimental results show that the proposed Gabor orientation feature based nonnegative sparse representation classification paradigm achieves much better recognition performance and is found to be more robust to challenging problems such as pose changes, illumination variations, and ear partial occlusion in real-world applications. PMID:24723792
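
    A minimal sketch of the NSRC decision rule, assuming the Gabor orientation features have already been extracted and using scikit-learn's nonnegative, l1-regularized Lasso as the coder (the paper's exact nonnegative sparse solver may differ).

```python
import numpy as np
from sklearn.linear_model import Lasso

def nsrc_classify(x, D, labels, lam=0.01):
    """Nonnegative sparse representation classification: code the test feature x
    over the training dictionary D (columns = training samples) with nonnegative,
    l1-regularized coefficients, then pick the class whose atoms give the
    smallest reconstruction residual."""
    coder = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=10000)
    alpha = coder.fit(D, x).coef_
    classes = np.unique(labels)
    residuals = [np.linalg.norm(x - D[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```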

  1. Learning sparse discriminative representations for land cover classification in the Arctic

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; Gangodagamage, Chandana

    2012-10-01

    Neuroscience-inspired machine vision algorithms are of current interest in the areas of detection and monitoring of climate change impacts, and general Land Use/Land Cover classification using satellite image data. We describe an approach for automatic classification of land cover in multispectral satellite imagery of the Arctic using sparse representations over learned dictionaries. We demonstrate our method using DigitalGlobe Worldview-2 8-band visible/near infrared high spatial resolution imagery of the MacKenzie River basin. We use an on-line batch Hebbian learning rule to build spectral-textural dictionaries that are adapted to this multispectral data. We learn our dictionaries from millions of overlapping image patches and then use a pursuit search to generate sparse classification features. We explore unsupervised clustering in the sparse representation space to produce land-cover category labels. This approach combines spectral and spatial textural characteristics to detect geologic, vegetative, and hydrologic features. We compare our technique to standard remote sensing algorithms. Our results suggest that neuroscience-based models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.

  2. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    PubMed

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. The method is based on sparse representation with online dictionary learning and an elastic net constraint. The online-learned dictionary can represent the testing samples more accurately and sparsely, and the elastic net constraint, which combines the ℓ1-norm and ℓ2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the seizure and nonseizure dictionaries are learned from the original ictal and interictal training samples, respectively, with the online dictionary optimization algorithm, and combined into the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with a false detection rate of 0.23/h and an average latency of -5.14 s have been achieved with the proposed method. PMID:26542318
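
    A sketch of the coding and decision stages under stated assumptions: scikit-learn's MiniBatchDictionaryLearning and ElasticNet stand in for the paper's online dictionary optimization and elastic-net coder, and the wavelet/differential filtering and kernel mapping of the EEG features are omitted.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import ElasticNet

def learn_dictionary(X, n_atoms=64, seed=0):
    """Online (mini-batch) dictionary learning on EEG feature vectors X
    of shape (n_samples, n_features); returns atoms as columns."""
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=seed)
    dl.fit(X)
    return dl.components_.T

def classify_segment(x, D_ictal, D_inter, alpha=0.01, l1_ratio=0.5):
    """Code x over the concatenated dictionary with an elastic-net penalty and
    compare reconstruction residuals of the ictal and interictal sub-dictionaries."""
    D = np.hstack([D_ictal, D_inter])
    coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False, max_iter=10000)
    c = coder.fit(D, x).coef_
    n1 = D_ictal.shape[1]
    r_ictal = np.linalg.norm(x - D_ictal @ c[:n1])
    r_inter = np.linalg.norm(x - D_inter @ c[n1:])
    return "seizure" if r_ictal < r_inter else "nonseizure"
```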

  3. Weighted sparse representation for human ear recognition based on local descriptor

    NASA Astrophysics Data System (ADS)

    Mawloud, Guermoui; Djamel, Melaab

    2016-01-01

    A two-stage ear recognition framework is presented in which two local descriptors and a sparse representation algorithm are combined. In the first stage, the algorithm deduces a subset of the training samples closest to the test ear sample; the selection is based on a K-nearest neighbors classifier in the pattern of oriented edge magnitude feature space. In the second stage, co-occurrence of adjacent local binary pattern features is extracted from the preselected subset and combined to form a dictionary. A sparse representation classifier is then employed on this dictionary to infer the element closest to the test sample. By splitting the ear image into a number of segments and applying the described routine to each of them, the algorithm assigns a final class label by majority voting over the individual labels produced by each segment. Experimental results demonstrate the effectiveness and robustness of the proposed scheme against leading state-of-the-art methods. In particular, when the ear image is occluded, the proposed algorithm remains highly robust and matches the recognition performance reported in the state of the art.

  4. A classification-and-reconstruction approach for a single image super-resolution by a sparse representation

    NASA Astrophysics Data System (ADS)

    Fan, YingYing; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A sparse representation is known as a very powerful tool for solving image reconstruction problems such as denoising and single-image super-resolution. In sparse representation, it is assumed that an image patch can be approximated by a linear combination of a few bases selected from a given dictionary. A single overcomplete dictionary is usually learned from training patches, and most dictionary learning methods are concerned with building a general over-complete dictionary on the assumption that its bases can represent everything. However, with a more appropriate dictionary, the sparse representation of a patch can achieve better results. In this paper, we propose a classification-and-reconstruction approach with multiple dictionaries. Before learning the reconstruction dictionaries, some representative bases are used to classify all training patches from the database, and multiple reconstruction dictionaries are then learned from the classified patches, respectively. In the reconstruction phase, each patch of the input image is classified and the corresponding adaptive dictionary is selected. We demonstrate that the proposed classification-and-reconstruction approach outperforms existing sparse representation with a single dictionary.

  5. [Recognition of water-injected meat based on visible/near-infrared spectrum and sparse representation].

    PubMed

    Hao, Dong-mei; Zhou, Ya-nan; Wang, Yu; Zhang, Song; Yang, Yi-min; Lin, Ling; Li, Gang; Wang, Xiu-li

    2015-01-01

    The present paper proposed a new nondestructive method based on visible/near infrared spectrum (Vis/NIRS) and sparse representation to rapidly and accurately discriminate between raw meat and water-injected meat. A water-injected meat model was built by injecting water into non-destructed meat samples comprising pigskin, fat layer and muscle layer. Vis/NIRS data were collected with spectrometers from raw meat and six scales of water-injected meat. To reduce redundant information in the spectrum and enhance the differences between samples, some preprocessing steps were performed on the spectral data, including light modulation and normalization. Effective spectral bands were extracted from the preprocessed spectral data. The meat samples were classified as raw meat and water-injected meat, and further as water-injected meat with different water injection rates. All the training samples were used to compose an atom dictionary, and test samples were represented by the sparsest linear combinations of these atoms via l1-minimization. Projection errors of test samples with respect to each category were calculated. A test sample was classified to the category with the minimum projection error, and leave-one-out cross-validation was conducted. The recognition performance of sparse representation was compared with that of a support vector machine (SVM). Experimental results showed that the overall recognition accuracy of sparse representation for raw meat and water-injected meat was more than 90%, which was higher than that of SVM. For water-injected meat samples with different water injection rates, the recognition accuracy presented a positive correlation with the water injection rate difference. The sparse representation-based classifier eliminates the need for the training and feature extraction steps required by conventional pattern recognition models, and is suitable for processing data of high dimensionality and small sample size. Furthermore, it has a low

  6. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class objects feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition. PMID:26906591

  7. Multi-source adaptation joint kernel sparse representation for visual classification.

    PubMed

    Tao, JianWen; Hu, Wenjun; Wen, Shiting

    2016-04-01

    Most of the existing domain adaptation learning (DAL) methods rely on a single source domain to learn a classifier with well-generalized performance for the target domain of interest, which may lead to the so-called negative transfer problem. To this end, many multi-source adaptation methods have been proposed. While the advantages of using multiple source domains of information for establishing an adaptation model have been widely recognized, how to boost the robustness of the computational model for multi-source adaptation learning has only recently received attention. To address this issue and achieve enhanced performance, we propose in this paper a novel algorithm called multi-source Adaptation Regularization Joint Kernel Sparse Representation (ARJKSR) for robust visual classification problems. Specifically, ARJKSR jointly represents the target dataset by a sparse linear combination of the training data of each source domain in some optimal Reproducing Kernel Hilbert Space (RKHS), recovered by simultaneously minimizing the inter-domain distribution discrepancy and maximizing the local consistency, whilst constraining the observations from both target and source domains to share their sparse representations. The optimization problem of ARJKSR can be solved using an efficient alternating direction method. Under the ARJKSR framework, we further learn a robust label prediction matrix for the unlabeled instances of the target domain based on the classical graph-based semi-supervised learning (GSSL) paradigm, into which multiple Laplacian graphs constructed with ARJKSR are incorporated. The validity of our method is examined on several visual classification problems. Results demonstrate the superiority of our method in comparison to several state-of-the-art approaches. PMID:26894961

  8. Multiple kernel sparse representations for supervised and unsupervised learning.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2014-07-01

    In complex visual recognition tasks, it is typical to adopt multiple descriptors, which describe different aspects of the images, for obtaining an improved recognition performance. Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to the test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles such that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse coding based approaches, and compares favorably to other state-of-the-art methods. PMID:24833593

  9. Deformable segmentation via sparse representation and dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. PMID:22959839

  10. Optimized sparse-particle aerosol representations for modeling cloud-aerosol interactions

    NASA Astrophysics Data System (ADS)

    Fierce, Laura; McGraw, Robert

    2016-04-01

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the method of moments. Given a set of moment constraints, we show how linear programming can be used to identify collections of sparse particles that approximately maximize distributional entropy. The collections of sparse particles derived from this approach reproduce CCN activity of the exact model aerosol distributions with high accuracy. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy moment-based approach is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a new aerosol simulation scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
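
    A sketch of the linear-programming idea under stated assumptions: the particle grid, the lognormal target moments, and the linear surrogate objective are illustrative (the paper's objective approximately maximizes distributional entropy). A vertex (basic) solution of the LP, which simplex-type solvers typically return, carries at most as many nonzero weights as there are moment constraints, which is what makes the particle collection sparse.

```python
import numpy as np
from scipy.optimize import linprog

# candidate particle diameters (micrometres) on a fine grid
d = np.logspace(-2, 1, 400)

def lognormal_moments(d_g=0.1, sigma_g=1.8, n_tot=1000.0):
    """Number, mean diameter and mean squared diameter of a lognormal mode."""
    ln_s = np.log(sigma_g)
    return np.array([n_tot,
                     n_tot * d_g * np.exp(0.5 * ln_s**2),
                     n_tot * d_g**2 * np.exp(2.0 * ln_s**2)])

moments = lognormal_moments()
A_eq = np.vstack([np.ones_like(d), d, d**2])   # moment constraints: A_eq @ w = moments
c = np.log(d + 1.0)                            # arbitrary linear surrogate objective
res = linprog(c, A_eq=A_eq, b_eq=moments, bounds=(0, None), method="highs")
w = res.x
support = np.flatnonzero(w > 1e-9)
print("sparse particle diameters:", d[support])
print("number concentrations:   ", w[support])
```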

  11. Two-stage sparse representation-based face recognition with reconstructed images

    NASA Astrophysics Data System (ADS)

    Cheng, Guangtao; Song, Zhanjie; Lei, Yang; Han, Xiuning

    2014-09-01

    In order to address the challenges that both the training and testing images are contaminated by random pixels corruption, occlusion, and disguise, a robust face recognition algorithm based on two-stage sparse representation is proposed. Specifically, noises in the training images are first eliminated by low-rank matrix recovery. Then, by exploiting the first-stage sparse representation computed by solving a new extended ℓ1-minimization problem, noises in the testing image can be successfully removed. After the elimination, feature extraction techniques that are more discriminative but are sensitive to noise can be effectively performed on the reconstructed clean images, and the final classification is accomplished by utilizing the second-stage sparse representation obtained by solving the reduced ℓ1-minimization problem in a low-dimensional feature space. Extensive experiments are conducted on publicly available databases to verify the superiority and robustness of our algorithm.

  12. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and the perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and delivers consistently good performance across different image quality databases. PMID:27295675

  13. Sparse representation approaches for the classification of high-dimensional biological data

    PubMed Central

    2013-01-01

    Background High-throughput genomic and proteomic data have important applications in medicine, including prevention, diagnosis, treatment, and prognosis of diseases, and in molecular biology, for example pathway identification. Many such applications can be formulated as classification and dimension reduction problems in machine learning. Accurately classifying such data is computationally challenging due to, among other factors, high dimensionality, noise, and redundancy. The principle of sparse representation has been applied to analyzing high-dimensional biological data within the frameworks of clustering, classification, and dimension reduction approaches. However, the existing sparse representation methods are inefficient, and their kernel extensions are not well addressed. Moreover, sparse representation techniques have not yet been comprehensively studied in bioinformatics. Results In this paper, a Bayesian treatment of sparse representations is presented. Various sparse coding and dictionary learning models are discussed. We propose a fast parallel active-set optimization algorithm for each model, and devise kernel versions based on their dimension-free property. These models are applied to classifying high-dimensional biological data. Conclusions In our experiments, we compared our models with other methods in terms of both accuracy and computing time. The results show that our models achieve satisfactory accuracy and are computationally very efficient. PMID:24565287

  14. Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation.

    PubMed

    Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan

    2016-06-01

    Face aging simulation has received increasing attention, yet it remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to this problem using hidden factor analysis joint sparse representation. In contrast to the majority of works in the literature that handle the facial texture integrally, the proposed aging approach separately models the person-specific facial properties, which tend to be stable over a relatively long period, and the age-specific clues, which gradually change over time. It then transforms the age component to a target age group via sparse reconstruction, yielding aging effects, which are finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations prove its validity with respect to identity preservation and aging effect generation. PMID:27093721

  15. Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation

    NASA Astrophysics Data System (ADS)

    Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan

    2016-06-01

    Face aging simulation has received increasing attention, yet it remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to this issue by using hidden factor analysis joint sparse representation. In contrast to the majority of works in the literature that handle the facial texture integrally, the proposed aging approach separately models the person-specific facial properties that tend to be stable over a relatively long period and the age-specific clues that change gradually over time. It then transforms only the age component to a target age group via sparse reconstruction, yielding aging effects, which are finally combined with the identity component to achieve the aged face. Experiments are carried out on three aging databases, and the results clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. Additionally, a series of evaluations prove its validity with respect to identity preservation and aging effect generation.

  16. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows our recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. Our method also consistently demonstrates a high recognition rate on the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the Honda/UCSD database of the University of California, San Diego.

  17. Dynamic time warping and sparse representation classification for birdsong phrase classification using limited training data.

    PubMed

    Tan, Lee N; Alwan, Abeer; Kossan, George; Cody, Martin L; Taylor, Charles E

    2015-03-01

    Annotation of phrases in birdsongs can be helpful to behavioral and population studies. To reduce the need for manual annotation, an automated birdsong phrase classification algorithm for limited data is developed. Limited data occur because of limited recordings or the existence of rare phrases. In this paper, classification of up to 81 phrase classes of Cassin's Vireo is performed using one to five training samples per class. The algorithm involves dynamic time warping (DTW) and two passes of sparse representation (SR) classification. DTW improves the similarity between training and test phrases from the same class in the presence of individual bird differences and phrase segmentation inconsistencies. The SR classifier works by finding a sparse linear combination of training feature vectors from all classes that best approximates the test feature vector. When the class decisions from DTW and the first pass SR classification are different, SR classification is repeated using training samples from these two conflicting classes. Compared to DTW, support vector machines, and an SR classifier without DTW, the proposed classifier achieves the highest classification accuracies of 94% and 89% on manually segmented and automatically segmented phrases, respectively, from unseen Cassin's Vireo individuals, using five training samples per class. PMID:25786922
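
    A minimal DTW sketch (Euclidean local cost, unit step pattern) for comparing two feature sequences of unequal timing; the subsequent two-pass SR classification follows the usual minimum-residual rule and is omitted here.

```python
import numpy as np

def dtw_distance(A, B):
    """Dynamic time warping distance between feature sequences A (n, d) and B (m, d)."""
    n, m = len(A), len(B)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(A[i - 1] - B[j - 1])
            cost[i, j] = local + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
    return cost[n, m]

# toy usage: warp-invariant comparison of two nonlinearly time-stretched contours
t = np.linspace(0, 1, 50)
phrase_a = np.sin(2 * np.pi * t)[:, None]
phrase_b = np.sin(2 * np.pi * t**1.5)[:, None]
print("DTW:", dtw_distance(phrase_a, phrase_b),
      "Euclidean:", np.linalg.norm(phrase_a - phrase_b))
```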

  18. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model

    PubMed Central

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in olfactory bulb and its underlying network mechanism are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated formation of the network, but it also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that the degree of prior odor experience facilitates degrees of sparse representations of new odors by the mitral cell network through experience-enhanced inhibition mechanism. PMID:26903819

  19. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-01-01

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, where the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions encountered in practice, which is a critical problem but seldom addressed in existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach. PMID:25808772
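
    A sketch of one common way to turn the Fisher separation criterion into per-patch weights (the ratio of between-class to within-class scatter of the patch-wise LBP histograms); the paper's exact weighting scheme and normalization may differ.

```python
import numpy as np

def fisher_patch_weights(patch_feats, labels, eps=1e-8):
    """Fisher separation criterion weight for each image patch.
    patch_feats: (n_samples, n_patches, hist_dim) LBP histograms per patch."""
    classes = np.unique(labels)
    n_patches = patch_feats.shape[1]
    overall_mean = patch_feats.mean(axis=0)            # (n_patches, hist_dim)
    between = np.zeros(n_patches)
    within = np.zeros(n_patches)
    for c in classes:
        Xc = patch_feats[labels == c]                  # samples of class c
        mc = Xc.mean(axis=0)
        between += len(Xc) * np.sum((mc - overall_mean) ** 2, axis=-1)
        within += np.sum((Xc - mc) ** 2, axis=(0, -1))
    w = between / (within + eps)
    return w / w.sum()          # larger weight = more discriminative patch
```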

  20. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model.

    PubMed

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in olfactory bulb and its underlying network mechanism are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated formation of the network, but it also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that the degree of prior odor experience facilitates degrees of sparse representations of new odors by the mitral cell network through experience-enhanced inhibition mechanism. PMID:26903819

  1. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  2. Automated identification of crystallographic ligands using sparse-density representations

    SciTech Connect

    Carolan, C. G.; Lamzin, V. S.

    2014-07-01

    A novel procedure for identifying ligands in macromolecular crystallographic electron-density maps is introduced. Density clusters in such maps can be rapidly attributed to one of 82 different ligands in an automated manner. A novel procedure for the automatic identification of ligands in macromolecular crystallographic electron-density maps is introduced. It is based on the sparse parameterization of density clusters and the matching of the pseudo-atomic grids thus created to conformationally variant ligands using mathematical descriptors of molecular shape, size and topology. In large-scale tests on experimental data derived from the Protein Data Bank, the procedure could quickly identify the deposited ligand within the top-ranked compounds from a database of candidates. This indicates the suitability of the method for the identification of binding entities in fragment-based drug screening and in model completion in macromolecular structure determination.

  3. Low-resolution facial image restoration based on sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Yuelong; Bian, Junjie; Feng, Jufu

    2011-11-01

    In this paper, a strategy for reconstructing a high-resolution facial image from a low-resolution one is put forward. Rather than relying only on the low-resolution input image, we construct a face representation dictionary from training high-resolution facial images to compensate for the information gap between low- and high-resolution images. The restoration employs a low-resolution facial image dictionary obtained by directly downsampling the learned high-resolution dictionary. After the representation coefficient vector of a low-resolution input image on the low-resolution dictionary is obtained via an l1-optimization algorithm, this coefficient vector is transplanted directly into the high-resolution dictionary to restore the high-resolution image corresponding to the input face. This approach was validated on the Extended Yale database.
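
    A minimal sketch of the coefficient-transplant idea under stated assumptions: scikit-learn's DictionaryLearning and Lasso replace the unspecified dictionary learning and l1 solvers, and the downsampling of atoms is a naive decimation of the vectorized images.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Lasso

def learn_coupled_dictionaries(hr_faces, factor=4, n_atoms=128, seed=0):
    """Learn an HR dictionary from vectorized training faces (n_samples, n_pixels)
    and derive the LR dictionary by directly downsampling each HR atom."""
    dl = DictionaryLearning(n_components=n_atoms, random_state=seed)
    D_h = dl.fit(hr_faces).components_.T           # columns = HR atoms
    D_l = D_h[::factor, :]                         # naive decimation of atoms (assumption)
    return D_h, D_l

def restore(lr_face, D_h, D_l, lam=0.01):
    """Code the LR input over D_l, then transplant the coefficients to D_h."""
    alpha = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(D_l, lr_face).coef_
    return D_h @ alpha                             # reconstructed HR face
```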

  4. Locality Constrained Joint Dynamic Sparse Representation for Local Matching Based Face Recognition

    PubMed Central

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  5. Sparse and Dense Hybrid Representation via Dictionary Decomposition for Face Recognition.

    PubMed

    Jiang, Xudong; Lai, Jian

    2015-05-01

    Sparse representation provides an effective tool for classification under the conditions that every class has sufficient representative training samples and the training data are uncorrupted. These conditions may not hold in many practical applications. Face identification is an example where we have a large number of identities but sufficient representative and uncorrupted training images cannot be guaranteed for every identity. A violation of the two conditions leads to poor performance of sparse representation-based classification (SRC). This paper addresses this critical issue by analyzing the merits and limitations of SRC. A sparse- and dense-hybrid representation (SDR) framework is proposed to alleviate the problems of SRC. We further propose a procedure of supervised low-rank (SLR) dictionary decomposition to facilitate the proposed SDR framework. In addition, the problem of corrupted training data is also alleviated by the proposed SLR dictionary decomposition. The application of the proposed SDR-SLR approach to face recognition verifies its effectiveness and its contribution to the field. Extensive experiments on benchmark face databases demonstrate that it consistently outperforms state-of-the-art sparse representation-based approaches, and the performance gains are significant in most cases. PMID:26353329

  6. Multitask joint spatial pyramid matching using sparse representation with dynamic coefficients for object recognition

    NASA Astrophysics Data System (ADS)

    Hajigholam, Mohammad-Hossein; Raie, Abolghasem-Asadollah; Faez, Karim

    2016-03-01

    Object recognition is considered a necessary part in many computer vision applications. Recently, sparse coding methods, based on representing a sparse feature from an image, show remarkable results on several object recognition benchmarks, but the precision obtained by these methods is not yet sufficient. Such a problem arises where there are few training images available. As such, using multiple features and multitask dictionaries appears to be crucial to achieving better results. We use multitask joint sparse representation, using dynamic coefficients to connect these sparse features. In other words, we calculate the importance of each feature for each class separately. This causes the features to be used efficiently and appropriately for each class. Thus, we use variance of features and particle swarm optimization algorithms to obtain these dynamic coefficients. Experimental results of our work on Caltech-101 and Caltech-256 databases show more accuracy compared with state-of-the art ones on the same databases.

  7. Low-Rank and Eigenface Based Sparse Representation for Face Recognition

    PubMed Central

    Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou

    2014-01-01

    In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well known Sparse Representation based Classification (SRC). Firstly, the low-rank images of the face images of each individual in training subset are extracted by the Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noises (e.g., illumination difference and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank and approximate images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method. PMID:25334027

  8. Detection of dual-band infrared small target based on joint dynamic sparse representation

    NASA Astrophysics Data System (ADS)

    Zhou, Jinwei; Li, Jicheng; Shi, Zhiguang; Lu, Xiaowei; Ren, Dongwei

    2015-10-01

    Infrared small target detection is a crucial yet difficult issue in aeronautic and astronautic applications. Sparse representation is an important mathematical tool that has been used extensively in image processing in recent years. In this paper, joint sparse representation is applied to dual-band infrared dim target detection. First, according to the characteristics of dim targets in dual-band infrared images, a two-dimensional Gaussian intensity model is used to construct the target dictionary, which is then divided into sub-classes according to the position of the Gaussian function's center point in the image block. Exploiting the facts that dual-band small target detection can share the same dictionary and that the sparsity lies at the sub-class level rather than the atom level, the detection of targets in dual-band infrared images is converted into a joint dynamic sparse representation problem, with dynamic active sets used to describe the sparsity constraint on the coefficients. Two modified sparsity concentration index (SCI) criteria are proposed to evaluate whether targets exist in the images. Experiments show that the proposed algorithm achieves better detection performance and that dual-band detection is much more robust to noise than single-band detection. Moreover, the proposed method can be extended to multi-spectral small target detection.

  9. Sparse representations of gravitational waves from precessing compact binaries.

    PubMed

    Blackman, Jonathan; Szilagyi, Bela; Galley, Chad R; Tiglio, Manuel

    2014-07-11

    Many relevant applications in gravitational wave physics share a significant common problem: the seven-dimensional parameter space of gravitational waveforms from precessing compact binary inspirals and coalescences is large enough to prohibit covering the space of waveforms with sufficient density. We find that by using the reduced basis method together with a parametrization of waveforms based on their phase and precession, we can construct ultracompact yet high-accuracy representations of this large space. As a demonstration, we show that less than 100 judiciously chosen precessing inspiral waveforms are needed for 200 cycles, mass ratios from 1 to 10, and spin magnitudes ≤0.9. In fact, using only the first 10 reduced basis waveforms yields a maximum mismatch of 0.016 over the whole range of considered parameters. We test whether the parameters selected from the inspiral regime result in an accurate reduced basis when including merger and ringdown; we find that this is indeed the case in the context of a nonprecessing effective-one-body model. This evidence suggests that as few as ∼100 numerical simulations of binary black hole coalescences may accurately represent the seven-dimensional parameter space of precession waveforms for the considered ranges. PMID:25062160
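
    A sketch of the standard greedy reduced-basis construction underlying this kind of result: repeatedly add the training waveform with the largest projection error onto the span of the current basis. The waveform model and the phase/precession parameterization used in the paper are not reproduced here.

```python
import numpy as np

def greedy_reduced_basis(waveforms, tol=1e-6, max_basis=100):
    """Greedy reduced-basis selection.
    waveforms: (n_waveforms, n_samples) real or complex array (rows are waveforms)."""
    W = waveforms / np.linalg.norm(waveforms, axis=1, keepdims=True)
    basis = [W[0]]
    errors = []
    while len(basis) < max_basis:
        B = np.array(basis)                       # orthonormal rows
        proj = W @ B.conj().T @ B                 # projection onto span(basis)
        err = np.linalg.norm(W - proj, axis=1) ** 2
        k = int(np.argmax(err))
        errors.append(err[k])
        if err[k] < tol:                          # all waveforms well represented
            break
        v = W[k] - proj[k]
        basis.append(v / np.linalg.norm(v))       # Gram-Schmidt step keeps the basis orthonormal
    return np.array(basis), errors
```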

  10. Gyrator transform based double random phase encoding with sparse representation for information authentication

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-07-01

    Optical information security systems have drawn long-standing attention. In this paper, an optical information authentication approach using gyrator transform based double random phase encoding with sparse representation is proposed. Different from traditional optical encryption schemes, only a sparse version of the ciphertext is preserved, and hence the decrypted result is completely unrecognizable and shows no similarity to the plaintext. However, we demonstrate that the noise-like decryption result can be effectively authenticated by means of an optical correlation approach. Simulations prove that the proposed method is feasible and effective, and can provide additional protection for optical security systems.

  11. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.

  12. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate. PMID:27386281

  13. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

    Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.

  14. High Capacity Reversible Data Hiding in Encrypted Images by Patch-Level Sparse Representation.

    PubMed

    Cao, Xiaochun; Du, Ling; Wei, Xingxing; Meng, Dan; Guo, Xiaojie

    2016-05-01

    Reversible data hiding in encrypted images has attracted considerable attention from the communities of privacy security and protection. The success of the previous methods in this area has shown that a superior performance can be achieved by exploiting the redundancy within the image. Specifically, because the pixels in the local structures (like patches or regions) have a strong similarity, they can be heavily compressed, thus resulting in a large hiding room. In this paper, to better explore the correlation between neighbor pixels, we propose to consider the patch-level sparse representation when hiding the secret data. The widely used sparse coding technique has demonstrated that a patch can be linearly represented by some atoms in an over-complete dictionary. As the sparse coding is an approximation solution, the leading residual errors are encoded and self-embedded within the cover image. Furthermore, the learned dictionary is also embedded into the encrypted image. Thanks to the powerful representation of sparse coding, a large vacated room can be achieved, and thus the data hider can embed more secret messages in the encrypted image. Extensive experiments demonstrate that the proposed method significantly outperforms the state-of-the-art methods in terms of the embedding rate and the image quality. PMID:25955861

  15. Sparse representation of photometric redshift probability density functions: preparing for petascale astronomy

    NASA Astrophysics Data System (ADS)

    Carrasco Kind, Matias; Brunner, Robert J.

    2014-07-01

    One of the consequences of entering the era of precision cosmology is the widespread adoption of photometric redshift probability density functions (PDFs). Both current and future photometric surveys are expected to obtain images of billions of distinct galaxies. As a result, storing and analysing all of these PDFs will be non-trivial and even more severe if a survey plans to compute and store multiple different PDFs. In this paper we propose the use of a sparse basis representation to fully represent individual photo-z PDFs. By using an orthogonal matching pursuit algorithm and a combination of Gaussian and Voigt basis functions, we demonstrate how our approach is superior to a multi-Gaussian fitting, as we require approximately half of the parameters for the same fitting accuracy with the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function, and we can achieve better accuracy by increasing the number of bases. By using data from the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that only 10-20 points per galaxy are sufficient to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. Finally, we demonstrate how this basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution nor accuracy.
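
    A sketch of the sparse-basis fitting under stated assumptions: a Gaussian-only dictionary (the paper also includes Voigt profiles), scikit-learn's orthogonal matching pursuit as the solver, and the 4-byte packing of (atom index, quantized amplitude) pairs only indicated in a comment.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# redshift grid and a dictionary of Gaussian bases with several centers and widths
z = np.arange(0.0, 2.0, 0.01)
centers = np.arange(0.0, 2.0, 0.02)
widths = (0.02, 0.05, 0.1, 0.2)
D = np.column_stack([np.exp(-0.5 * ((z - c) / s) ** 2)
                     for s in widths for c in centers])
D /= np.linalg.norm(D, axis=0)

def sparse_pdf(pdf, n_bases=20):
    """Represent one photo-z PDF with n_bases dictionary atoms via OMP;
    storing (atom index, quantized amplitude) pairs is what enables the
    compact per-basis packing described in the paper."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_bases, fit_intercept=False).fit(D, pdf)
    coef = omp.coef_
    idx = np.flatnonzero(coef)
    return idx, coef[idx]

# toy usage: a bimodal PDF reconstructed from 20 atoms
pdf = (0.7 * np.exp(-0.5 * ((z - 0.4) / 0.05) ** 2)
       + 0.3 * np.exp(-0.5 * ((z - 1.1) / 0.08) ** 2))
idx, amp = sparse_pdf(pdf)
recon = D[:, idx] @ amp
print("max abs error:", np.abs(recon - pdf).max())
```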

  16. Sparse coding based dense feature representation model for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender; Zhou, Guoqing; Zheng, Zezhong; Iftekharuddin, Khan; Li, Jiang

    2015-11-01

    We present a sparse coding based dense feature representation model (a preliminary version of the paper was presented at the SPIE Remote Sensing Conference, Dresden, Germany, 2013) for hyperspectral image (HSI) classification. The proposed method learns a new representation for each pixel in HSI through the following four steps: sub-band construction, dictionary learning, encoding, and feature selection. The new representation usually has a very high dimensionality requiring a large amount of computational resources. We applied the l1/lq regularized multiclass logistic regression technique to reduce the size of the new representation. We integrated the method with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit, and image fusion and recursive filtering. Experimental results show that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification.

  17. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online

    PubMed Central

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    Structural over-complete dictionaries such as Gabor functions, as well as discriminative over-complete dictionaries that are learned offline and partitioned manually, have difficulty representing natural images with ideal sparseness and enhancing the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online from the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding the background, using the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more effectively than a discriminative over-complete dictionary learned offline and partitioned manually. Target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries but not over the opposite ones, so their residuals after reconstruction with the prescribed number of target and background atoms differ markedly. Experiments show that the proposed approach not only improves sparsity more efficiently but also enhances small target detection more effectively. PMID:24871988
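
    The detection rule reduces to comparing reconstruction residuals over the two sub-dictionaries. In the sketch below the sub-dictionaries are random placeholders standing in for the K-SVD-trained morphological dictionaries, and the patch size and sparsity level are assumptions:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruction_residual(y, D, n_atoms=5):
    """Error when y is approximated with at most n_atoms atoms from dictionary D."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms, fit_intercept=False)
    omp.fit(D, y)
    return float(np.linalg.norm(y - D @ omp.coef_))

rng = np.random.default_rng(1)
dim = 49                                        # a 7x7 image patch, vectorized
D_target = rng.standard_normal((dim, 80))       # placeholder for the target sub-dictionary
D_background = rng.standard_normal((dim, 80))   # placeholder for the background sub-dictionary
D_target /= np.linalg.norm(D_target, axis=0)
D_background /= np.linalg.norm(D_background, axis=0)

# A patch synthesized from target atoms reconstructs far better over D_target,
# so comparing the two residuals yields the target/background decision.
patch = D_target[:, :3] @ np.array([1.0, 0.8, -0.5]) + 0.01 * rng.standard_normal(dim)
r_t = reconstruction_residual(patch, D_target)
r_b = reconstruction_residual(patch, D_background)
print("target residual:", r_t, "background residual:", r_b, "-> target?", r_t < r_b)
```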

  18. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such an operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
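
    The paper's HDA is not reproduced here, but a closely related continuous-time scheme with thresholded, mutually inhibiting units (a locally-competitive-style sparse coding network) can be sketched as follows; all sizes and constants are illustrative assumptions.

```python
import numpy as np

def sparse_code_dynamics(x, D, lam=0.1, dt=0.05, n_steps=400):
    """Leaky integration of internal variables u with thresholded outputs a that
    inhibit each other through D^T D (a locally-competitive-style network)."""
    n_atoms = D.shape[1]
    u = np.zeros(n_atoms)
    inhibition = D.T @ D - np.eye(n_atoms)   # lateral interaction weights
    drive = D.T @ x                          # feed-forward input to every unit
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)
        u += dt * (drive - u - inhibition @ a)
    return soft(u)

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
x = D[:, [3, 17]] @ np.array([1.5, -1.0])    # signal built from two atoms
a = sparse_code_dynamics(x, D)
print("active units:", np.flatnonzero(np.abs(a) > 1e-3))
```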

  19. A Sparse Hierarchical Map Representation for Mars Science Laboratory Science Operations

    NASA Astrophysics Data System (ADS)

    Nefian, A. V.; Edwards, L. J.; Keely, L.; Lees, D. S.; Fluckinger, L.; Malin, M. C.; Parker, T. J.

    2015-12-01

    We describe a solution for multi-scale Mars terrain modeling and mapping with Digital Elevation Models (DEMs) and co-registered orthogonally projected imagery (ortho-images). High resolution DEMs and ortho-images derived from Mars Science Laboratory (MSL) rover science and navigation cameras are represented in context with lower resolution, wide coverage DEMs and ortho-images derived from Mars Reconnaissance Orbiter (MRO) HiRISE and CTX camera images and Mars Express (MEX) mission HRSC images. Merging MSL rover image derived terrain models with those from orbital images at a uniform high resolution would require super-sampling of the orbital data across a large area to maintain significant context. This solution is not practical, and would result in a mapping product of enormous size. Instead, we choose a sparse hierarchical map representation. Each level in this hierarchical representation is a map described by a set of tiles with fixed number of samples and fixed resolution. The number of samples in a tile is fixed for all levels and each level is associated with a specific resolution. In this work, the resolution ratio between two adjacent levels is set to two. The map at each level is sparse and it contains only the tiles for which data is available at the resolution of the given level. For example, at the highest resolution level only MSL science camera models are available and only a small set of tiles are generated in a sparse map. At the lowest resolution, the map contains the complete set of tiles. The reference level of the representation is chosen to be the HiRISE terrain model and CTX, HRSC and MSL data are projected onto this model before being mapped. While our terrain representation was developed for use in "Antares", a visual planning and sequencing tool for MSL science cameras developed at NASA Ames Research Center, it is general purpose and has a number of potential geo-science visualization applications.
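
    A minimal data-structure sketch of such a sparse tile pyramid; the class name, tile size, resolutions, and the coarse-level fallback rule are illustrative assumptions, not the Antares implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SparseTileMap:
    """Sparse multi-resolution tile pyramid: level 0 is the finest (rover-scale)
    data, each higher level halves the resolution, and a tile is stored only
    where data actually exists at that level."""
    n_levels: int = 6                          # illustrative pyramid depth
    tile_size: int = 256                       # samples per tile edge, fixed for all levels
    finest_resolution_m: float = 0.25          # metres per sample at level 0 (illustrative)
    tiles: dict = field(default_factory=dict)  # (level, ix, iy) -> DEM / ortho-image payload

    def resolution(self, level):
        return self.finest_resolution_m * (2 ** level)

    def add_tile(self, level, ix, iy, payload):
        self.tiles[(level, ix, iy)] = payload

    def lookup(self, level, ix, iy):
        """Return the tile covering (ix, iy) at the requested level, falling back
        to coarser levels (wide-coverage orbital data) when the fine tile is absent."""
        while level < self.n_levels:
            if (level, ix, iy) in self.tiles:
                return level, self.tiles[(level, ix, iy)]
            level, ix, iy = level + 1, ix // 2, iy // 2
        return None

m = SparseTileMap()
m.add_tile(0, 10, 7, "MSL science-camera DEM tile")    # sparse fine coverage
m.add_tile(3, 1, 0, "HiRISE/CTX DEM tile")             # wide coarse coverage
print(m.lookup(0, 10, 7))   # hit at the finest level
print(m.lookup(0, 11, 7))   # falls back to the orbital tile at level 3
```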

  20. Action Recognition Using Nonnegative Action Component Representation and Sparse Basis Selection.

    PubMed

    Wang, Haoran; Yuan, Chunfeng; Hu, Weiming; Ling, Haibin; Yang, Wankou; Sun, Changyin

    2014-02-01

    In this paper, we propose using high-level action units to represent human actions in videos and, based on such units, a novel sparse model is developed for human action recognition. There are three interconnected components in our approach. First, we propose a new context-aware spatial-temporal descriptor, named locally weighted word context, to improve the discriminability of the traditionally used local spatial-temporal descriptors. Second, from the statistics of the context-aware descriptors, we learn action units using the graph regularized nonnegative matrix factorization, which leads to a part-based representation and encodes the geometrical information. These units effectively bridge the semantic gap in action recognition. Third, we propose a sparse model based on a joint l2,1-norm to preserve the representative items and suppress noise in the action units. Intuitively, when learning the dictionary for action representation, the sparse model captures the fact that actions from the same class share similar units. The proposed approach is evaluated on several publicly available data sets. The experimental results and analysis clearly demonstrate the effectiveness of the proposed approach. PMID:26270909

  1. Mass type-specific sparse representation for mass classification in computer-aided detection on mammograms

    PubMed Central

    2013-01-01

    Background Breast cancer is the leading cancer in women in terms of both incidence and mortality. For this reason, much research effort has been devoted to developing Computer-Aided Detection (CAD) systems for early detection of breast cancers on mammograms. In this paper, we propose a novel dictionary configuration underpinning sparse representation based classification (SRC). The key idea of the proposed algorithm is to improve the sparsity in terms of mass margins for the purpose of improving classification performance in CAD systems. Methods The aim of the proposed SRC framework is to construct separate dictionaries according to the types of mass margins. The underlying idea behind our method is that the separated dictionaries can enhance the sparsity of the mass class (true-positive), leading to an improved performance in differentiating mammographic masses from normal tissues (false-positive). When a mass sample is given for classification, the sparse solutions based on the corresponding dictionaries are solved separately and combined at the score level. Experiments have been performed on both the Digital Database for Screening Mammography (DDSM) and a clinical Full-Field Digital Mammogram (FFDM) database (DB). In our experiments, the sparsity concentration in the true class (SCTC) and the area under the Receiver operating characteristic (ROC) curve (AUC) were measured to compare the proposed method with a conventional single-dictionary approach. In addition, a support vector machine (SVM) was used to compare our method with a state-of-the-art classifier extensively used for mass classification. Results Compared with the conventional single-dictionary configuration, the proposed approach improves SCTC by up to 13.9% and 23.6% on the DDSM and FFDM DBs, respectively. Moreover, the proposed method improves AUC by 8.2% and 22.1% on the DDSM and FFDM DBs, respectively. Compared to the SVM classifier, the proposed method improves

  2. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation.

    PubMed

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-01-01

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is firstly introduced in PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278

  3. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation

    PubMed Central

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-01-01

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver’s EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is firstly introduced in PSD to estimate the driver’s vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model. PMID:26907278

  4. Online learning and generalization of parts-based image representations by non-negative sparse autoencoders.

    PubMed

    Lemme, Andre; Reinhart, René Felix; Steil, Jochen Jakob

    2012-09-01

    We present an efficient online learning scheme for non-negative sparse coding in autoencoder neural networks. It comprises a novel synaptic decay rule that ensures non-negative weights in combination with an intrinsic self-adaptation rule that optimizes sparseness of the non-negative encoding. We show that non-negativity constrains the space of solutions such that overfitting is prevented and very similar encodings are found irrespective of the network initialization and size. We benchmark the novel method on real-world datasets of handwritten digits and faces. The autoencoder yields higher sparseness and lower reconstruction errors than related offline algorithms based on matrix factorization. It generalizes to new inputs both accurately and without costly computations, which is fundamentally different from the classical matrix factorization approaches. PMID:22706093
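
    The paper's specific synaptic decay and intrinsic self-adaptation rules are not reproduced here; the schematic online update below merely illustrates the general mechanism of learning a sparse, parts-based code while clamping weights to be non-negative. All sizes, rates, and the toy inputs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 64, 100
W_enc = 0.01 * np.abs(rng.standard_normal((n_hidden, n_in)))
W_dec = 0.01 * np.abs(rng.standard_normal((n_in, n_hidden)))
lr, l1 = 0.002, 0.05
relu = lambda v: np.maximum(v, 0.0)

for step in range(3000):                        # online: one sample at a time
    x = np.abs(rng.standard_normal(n_in))       # toy non-negative input (e.g. a pixel patch)
    h = relu(W_enc @ x - l1)                    # sparse, non-negative code
    err = x - W_dec @ h                         # reconstruction error
    # Plain gradient steps on the squared reconstruction error...
    W_dec += lr * np.outer(err, h)
    W_enc += lr * np.outer((W_dec.T @ err) * (h > 0), x)
    # ...followed by a clamp that stands in for the paper's synaptic decay rule
    # and keeps all weights (hence the learned parts) non-negative.
    W_dec = np.maximum(W_dec, 0.0)
    W_enc = np.maximum(W_enc, 0.0)

h = relu(W_enc @ x - l1)
print("fraction of active hidden units on the last sample:", float((h > 0).mean()))
```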

  5. Weighted joint sparse representation-based classification method for robust alignment-free face recognition

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Xu, Feng; Zhou, Guoyan; He, Jun; Ge, Fengxiang

    2015-01-01

    This work proposes a weighted joint sparse representation (WJSR)-based classification method for robust alignment-free face recognition, in which an image is represented by a set of scale-invariant feature transform descriptors. The proposed method considers the correlation and the reliability of the query descriptors. The reliability is measured by the similarity information between the query descriptors and the atoms in the dictionary, which is incorporated into the l0\l2-norm minimization to seek the optimal WJSR. Compared with related state-of-the-art methods, the performance is improved, as verified by experiments on benchmark face databases.

  6. Estimating patient-specific and anatomically correct reference model for craniomaxillofacial deformity via sparse representation

    PubMed Central

    Wang, Li; Ren, Yi; Gao, Yaozong; Tang, Zhen; Chen, Ken-Chung; Li, Jianfu; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Xia, James J.; Shen, Dinggang

    2015-01-01

    Purpose: A significant number of patients suffer from craniomaxillofacial (CMF) deformity and require CMF surgery in the United States. The success of CMF surgery depends not only on the surgical techniques but also on accurate surgical planning. However, surgical planning for CMF surgery is challenging due to the absence of a patient-specific reference model. Currently, the outcome of the surgery is often subjective and highly dependent on the surgeon’s experience. In this paper, the authors present an automatic method to estimate an anatomically correct reference shape of jaws for orthognathic surgery, a common type of CMF surgery. Methods: To estimate a patient-specific jaw reference model, the authors use a data-driven method based on sparse shape composition. Given a dictionary of normal subjects, the authors first use the sparse representation to represent the midface of a patient by the midfaces of the normal subjects in the dictionary. Then, the derived sparse coefficients are used to reconstruct a patient-specific reference jaw shape. Results: The authors have validated the proposed method on both synthetic and real patient data. Experimental results show that the authors’ method can effectively reconstruct the normal jaw shape for patients. Conclusions: The authors have presented a novel method to automatically estimate a patient-specific reference model for patients suffering from CMF deformity. PMID:26429255
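
    A toy sketch of the sparse-shape-composition idea, with Lasso standing in for the paper's solver: the patient's (assumed normal) midface is coded sparsely over normal subjects' midfaces and the same coefficients are transferred to the jaw shapes. The landmark counts and regularization weight are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_subjects, n_mid, n_jaw = 40, 3 * 50, 3 * 60     # toy landmark counts (x, y, z per point)

# Columns: vectorized midface / jaw landmark shapes of normal subjects (the dictionary).
midface_dict = rng.standard_normal((n_mid, n_subjects))
jaw_dict = rng.standard_normal((n_jaw, n_subjects))

# Patient midface approximated as a sparse mix of normal subjects.
patient_midface = midface_dict[:, [2, 9, 25]] @ np.array([0.5, 0.3, 0.2])

lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(midface_dict, patient_midface)
coeffs = lasso.coef_

# Transfer the same sparse coefficients to the jaw dictionary to obtain a
# patient-specific, anatomically "normal" reference jaw shape.
reference_jaw = jaw_dict @ coeffs
print("subjects used in the composition:", np.flatnonzero(np.abs(coeffs) > 1e-6))
```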

  7. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    In many materials, incident light is scattered away by inhomogeneities in the refractive index, which greatly reduces the imaging depth and degrades the imaging quality. Many exciting methods have been presented in recent years for solving this problem and realizing imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. The imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern for reconstruction. One of the key premises of this method is that the object is sparse or admits a sparse representation. However, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. In order to verify the performance of this method, a whole optical system is simulated. Various projection matrices are introduced to make the object sparse, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis and the discrete wavelet transform (DWT) basis, and their imaging performances are compared comprehensively. Simulation results show that for most targets, applying the discrete wavelet transform basis yields an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
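
    A compact simulation of the CS recovery step under stated assumptions: a circular-Gaussian transmission matrix acts as the compressive measurement operator, the object is taken to be sparse in the DCT basis, and a basic iterative soft-thresholding (ISTA) loop stands in for whichever l1 solver one prefers. Sizes and the regularization weight are illustrative.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(5)
n, m = 256, 100                          # object size, number of speckle measurements

# Sparsifying basis: columns are DCT atoms (the paper also compares FFT and DWT bases).
Psi = idct(np.eye(n), axis=0, norm="ortho")

# Circular-Gaussian transmission matrix of the scattering medium, used directly as the
# measurement operator (only its real part is kept in this toy example).
TM = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
A = TM.real @ Psi                        # measurements act on the DCT coefficients
A /= np.linalg.norm(A, axis=0)           # normalize columns for a well-scaled solver

# Ground-truth object: sparse in the DCT basis.
s_true = np.zeros(n)
s_true[[5, 40, 90]] = [2.0, -1.5, 1.0]
y = A @ s_true + 0.01 * rng.standard_normal(m)

# Basic iterative soft-thresholding (ISTA) for the l1 recovery problem.
lam, L = 0.05, np.linalg.norm(A, 2) ** 2
s = np.zeros(n)
for _ in range(1000):
    s = s + A.T @ (y - A @ s) / L
    s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)

print("recovered support:", np.flatnonzero(np.abs(s) > 0.1))
```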

  8. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  9. A Novel Method of Automatic Plant Species Identification Using Sparse Representation of Leaf Tooth Features

    PubMed Central

    Jin, Taisong; Hou, Xueliang; Li, Pifan; Zhou, Feifei

    2015-01-01

    Automatic species identification has many advantages over traditional species identification. Currently, most plant automatic identification methods focus on the features of leaf shape, venation and texture, which are promising for the identification of some plant species. However, leaf tooth, a feature commonly used in traditional species identification, is ignored. In this paper, a novel automatic species identification method using sparse representation of leaf tooth features is proposed. In this method, image corners are detected first, and abnormal image corners are removed by the PauTa criterion. Next, the top and bottom leaf tooth edges are distinguished so that they correspond effectively to the extracted image corners; then, four leaf tooth features (Leaf-num, Leaf-rate, Leaf-sharpness and Leaf-obliqueness) are extracted and concatenated into a feature vector. Finally, a sparse representation-based classifier is used to identify a plant species sample. Tests on a real-world leaf image dataset show that our proposed method is feasible for species identification. PMID:26440281

  10. Blind image deblurring based on trained dictionary and curvelet using sparse representation

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Huang, Qian; Xu, Tingfa; Li, Shao

    2015-04-01

    Motion blur, which can arise from many factors, is one of the most significant and common artifacts causing poor image quality in digital photography. During imaging, if objects move quickly in the scene or the camera moves during the exposure interval (e.g., camera shake or atmospheric turbulence), the image blurs along the direction of relative motion between the camera and the scene. Recently, the sparse representation model, an effective way to describe natural images, has been widely used in signal and image processing. In this article, a new deblurring approach based on sparse representation is proposed. An overcomplete dictionary learned from training image samples via the K-SVD algorithm is used to represent the latent image. The motion-blur kernel can be treated as a piecewise smooth function in the image domain whose support is approximately a thin smooth curve, so we employ the curvelet transform to represent the blur kernel. Both the overcomplete dictionary and the curvelet system yield highly sparse representations, which improves robustness to noise and better satisfies the observer's visual requirements. With these two priors, we construct a restoration model for blurred images and solve the optimization problem using an alternating minimization technique. Experimental results show that the method preserves the texture of the original images and effectively suppresses ringing artifacts.

  11. A reconstruction algorithm based on sparse representation for Raman signal processing under high background noise

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wang, X.; Wang, X.; Xu, Y.; Que, J.; He, H.; Wang, X.; Tang, M.

    2016-02-01

    Background noise is one of the main interference sources in Raman spectroscopy measurement and imaging. In this paper, a sparse representation based algorithm is presented to process Raman signals under high background noise. In contrast with existing de-noising methods, the proposed method reconstructs the pure Raman signals by estimating the Raman peak information. The advantages of the proposed algorithm are its high noise robustness and the small attenuation of the pure Raman signal, both of which stem from its reconstruction principle. Meanwhile, the Batch-OMP algorithm is applied to accelerate the training of the sparse representation. It is therefore well suited for Raman measurement or imaging instruments observing fast dynamic processes, where the scanning time must be shortened and the signal-to-noise ratio (SNR) of the raw signal is reduced. In simulations and experiments, the de-noising results obtained by the proposed algorithm were better than those of the traditional Savitzky-Golay (S-G) filter and the fixed-threshold wavelet de-noising algorithm.

  12. Joint detection and segmentation of vertebral bodies in CT images by sparse representation error minimization

    NASA Astrophysics Data System (ADS)

    Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2016-03-01

    Automated detection and segmentation of vertebral bodies from spinal computed tomography (CT) images is usually a prerequisite step for numerous spine-related medical applications, such as diagnosis, surgical planning and follow-up assessment of spinal pathologies. However, automated detection and segmentation are challenging tasks due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other. In this paper, we describe a sparse representation error minimization (SEM) framework for joint detection and segmentation of vertebral bodies in CT images. By minimizing the sparse representation error of sampled intensity values, we are able to recover the oriented bounding box (OBB) and segmentation binary mask for each vertebral body in the CT image. The performance of the proposed SEM framework was evaluated on five CT images of the thoracolumbar spine. The resulting Euclidean distance of 1.75 ± 1.02 mm, computed between the center points of recovered and corresponding reference OBBs, and Dice coefficient of 92.3 ± 2.7%, computed between the resulting and corresponding reference segmentation binary masks, indicate that the proposed framework can successfully detect and segment vertebral bodies in CT images of the thoracolumbar spine.

  13. Human gait recognition using patch distribution feature and locality-constrained group sparse representation.

    PubMed

    Xu, Dong; Huang, Yi; Zeng, Zinan; Xu, Xinxing

    2012-01-01

    In this paper, we propose a new patch distribution feature (PDF) (i.e., referred to as Gabor-PDF) for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted from different scales and different orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM) (i.e., referred to as the universal background model) with the local augmented Gabor features from all the gallery GEIs; then, each gallery or probe GEI is further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted l(1, 2) mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method that is a special case of LGSR, the group sparsity and local smooth sparsity constraints are both enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature Gabor-PDF achieves the best average Rank-1 and Rank-5 recognition rates on this database among all gait recognition algorithms proposed to date. PMID:21724511

  14. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification.

    PubMed

    Zhang, Xinzheng; Yang, Qiuyue; Liu, Miaomiao; Jia, Yunjian; Liu, Shujun; Li, Guojun

    2016-01-01

    Classification of target microwave images is an important application in many areas such as security and surveillance. With respect to the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance. PMID:27598172

  15. Contour tracking in echocardiographic sequences via sparse representation and dictionary learning.

    PubMed

    Huang, Xiaojie; Dione, Donald P; Compas, Colin B; Papademetris, Xenophon; Lin, Ben A; Bregasi, Alda; Sinusas, Albert J; Staib, Lawrence H; Duncan, James S

    2014-02-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554

  16. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    SciTech Connect

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  17. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    PubMed Central

    Wang, Li; Chen, Ken Chung; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  18. Fusion of sparse representation and dictionary matching for identification of humans in uncontrolled environment.

    PubMed

    Fernandes, Steven Lawrence; Bala, G Josemin

    2016-09-01

    gait recognition are developed. Then a novel biomechanics-based gait recognition is developed using Sparse Representation to generate what we term as "score 1." Further, another novel technique for composite sketch matching is developed using Dictionary Matching to generate what we term as "score 2." Finally, score level fusion using Dempster Shafer and Proportional Conflict Distribution Rule Number 5 is performed. The proposed fusion approach is validated using a database containing biomechanics-based gait sequences and biometric-based composite sketches. From our analysis we find that a fusion of gait recognition and composite sketch matching provides excellent results for real-time human identification. PMID:27498411

  19. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor can be relatively large because of the skylight background and detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an over-complete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the coefficients of each block are computed over the over-complete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be extracted well, and the deviation, RMS, and PV of the centroid are all smaller than those of the threshold-subtraction method.
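
    A toy version of the extraction step: a dictionary of two-dimensional Gaussian atoms at candidate spot centres is built for one sub-block, the block is sparse-coded, and the coefficients are thresholded to separate the spot from the sky background and detector noise. The block size, atom width, sparsity, and noise model are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def gaussian_spot(shape, cx, cy, sigma=1.2):
    """Unit-norm 2D Gaussian atom centred at (cx, cy), vectorized."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    g = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return g.ravel() / np.linalg.norm(g)

block = (8, 8)                                  # one sub-aperture block of the SH image
centers = [(x, y) for x in range(block[1]) for y in range(block[0])]
D = np.column_stack([gaussian_spot(block, cx, cy) for cx, cy in centers])

rng = np.random.default_rng(6)
true_cx, true_cy = 5, 2
spot = 40.0 * gaussian_spot(block, true_cx, true_cy)                 # target signal
background = 8.0 + 2.0 * rng.standard_normal(block[0] * block[1])    # sky + detector noise
observed = spot + background

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=True)
omp.fit(D, observed)
coef = omp.coef_

# Large coefficients correspond to spot atoms; small ones belong to noise,
# so a simple threshold separates the target from the background.
keep = np.abs(coef) > 0.5 * np.abs(coef).max()
spot_estimate = (D @ (coef * keep)).reshape(block)   # background-free spot estimate
best = centers[int(np.argmax(np.abs(coef)))]
print("atoms kept:", int(keep.sum()),
      "estimated spot centre:", best, "true centre:", (true_cx, true_cy))
```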

  20. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at adopting the least number of cameras to cover the whole scene, which may have obstacles to occlude the line of sight, with expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
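
    A small sketch of the convex relaxation as a linear program: the paper's anisotropic sensing model is not reproduced, so the visibility/quality matrix is a random placeholder and the required quality level is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n_points, n_candidates = 60, 25      # scene sample points, redundant candidate cameras

# A[i, j]: observation quality of point i under candidate camera j (0 = not visible).
A = rng.random((n_points, n_candidates)) * (rng.random((n_points, n_candidates)) < 0.3)

# l1 relaxation of the l0 selection problem:
#   minimise sum(w)  subject to  A @ w >= q_min  and  0 <= w <= 1.
q_min = 0.2
res = linprog(c=np.ones(n_candidates),
              A_ub=-A, b_ub=-q_min * np.ones(n_points),
              bounds=[(0, 1)] * n_candidates, method="highs")

if res.success:
    selected = np.flatnonzero(res.x > 1e-3)
    print("cameras kept:", len(selected), "of", n_candidates)
else:
    print("initial candidate set cannot meet the quality requirement")
```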

  1. Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.

    PubMed

    Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K

    2011-09-01

    Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations, that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach. PMID:21339529

  2. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  3. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  4. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    NASA Astrophysics Data System (ADS)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the mismatch between iris images acquired at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms in iris textures across different image sensors and build connections between them. After such connections are built, at the test stage it is possible to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results were satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the proposed method relatively decreases the EER by 39.4%.

  5. Robust classification for occluded ear via Gabor scale feature-based non-negative sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Baoqing; Mu, Zhichun; Li, Chen; Zeng, Hui

    2014-06-01

    The Gabor wavelets have been experimentally verified to be a good approximation to the response of cortical neurons. A new feature extraction approach is investigated for ear recognition by using scale information of Gabor wavelets. The proposed Gabor scale feature conforms to human visual perception of objects from far to near. It can not only avoid too much redundancy in Gabor features but also tends to extract more precise structural information that is robust to image variations. Then, Gabor scale feature-based non-negative sparse representation classification (G-NSRC) is proposed for ear recognition under occlusion. Compared with SRC in which the sparse coding coefficients can be negative, the non-negativity of G-NSRC conforms to the intuitive notion of combining parts to form a whole and therefore is more consistent with the biological modeling of visual data. Additionally, the use of Gabor scale features increases the discriminative power of G-NSRC. Finally, the proposed classification paradigm is applied to occluded ear recognition. Experimental results demonstrate the effectiveness of our proposed algorithm. Especially when the ear is occluded, the proposed algorithm exhibits great robustness and achieves state-of-the-art recognition performance.

  6. Automatic detection of pulsed radio frequency (RF) targets using sparse representations in undercomplete learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.; Brumby, Steven P.

    2014-06-01

    Automatic classification of transitory or pulsed radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such transients are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models. Conventional representations using orthogonal bases, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular target signal. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Our goal is to detect chirped pulses from a model target emitter in poor signal-to-noise and varying levels of simulated background clutter conditions. This paper builds on our previous RF classification work, and extends it to more complex target and background scenarios. We use a Hebbian rule to learn discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows containing a target pulse. We demonstrate that learned dictionary techniques are highly suitable for pulsed RF analysis and present results with varying background clutter and noise levels. The target detection decision is obtained in almost real-time via a parallel, vectorized implementation.

  7. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstructed residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows its notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods. PMID:26906674
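
    A minimal sketch of two ingredients named above, the covariance descriptor of an EEG epoch and the log-Euclidean Gaussian kernel between two SPD matrices; channel count, epoch length, and kernel width are illustrative, and the subsequent kernel sparse coding step is omitted.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(epoch):
    """SPD covariance matrix of a multichannel EEG epoch (channels x samples)."""
    centered = epoch - epoch.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / (epoch.shape[1] - 1)
    return cov + 1e-6 * np.eye(epoch.shape[0])     # regularize to stay positive definite

def log_euclidean_gaussian_kernel(c1, c2, sigma=1.0):
    """k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2))."""
    diff = logm(c1) - logm(c2)
    return float(np.exp(-np.linalg.norm(diff, "fro") ** 2 / (2 * sigma ** 2)))

rng = np.random.default_rng(8)
epoch_a = rng.standard_normal((8, 500))            # 8 channels, 500 samples (toy epochs)
epoch_b = rng.standard_normal((8, 500))
k = log_euclidean_gaussian_kernel(covariance_descriptor(epoch_a),
                                  covariance_descriptor(epoch_b), sigma=2.0)
print("kernel value between the two epochs:", k)
```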

  8. Accelerometer-Based Gait Recognition by Sparse Representation of Signature Points With Clusters.

    PubMed

    Zhang, Yuting; Pan, Gang; Jia, Kui; Lu, Minlong; Wang, Yueming; Wu, Zhaohui

    2015-09-01

    Gait, as a promising biometric for recognizing human identities, can be nonintrusively captured as a series of acceleration signals using wearable or portable smart devices. It can be used for access control. Most existing methods on accelerometer-based gait recognition require explicit step-cycle detection, suffering from cycle detection failures and intercycle phase misalignment. We propose a novel algorithm that avoids both the above two problems. It makes use of a type of salient points termed signature points (SPs), and has three components: 1) a multiscale SP extraction method, including the localization and SP descriptors; 2) a sparse representation scheme for encoding newly emerged SPs with known ones in terms of their descriptors, where the phase propinquity of the SPs in a cluster is leveraged to ensure the physical meaningfulness of the codes; and 3) a classifier for the sparse-code collections associated with the SPs of a series. Experimental results on our publicly available dataset of 175 subjects showed that our algorithm outperformed existing methods, even if the step cycles were perfectly detected for them. When the accelerometers at five different body locations were used together, it achieved the rank-1 accuracy of 95.8% for identification, and the equal error rate of 2.2% for verification. PMID:25423662

  9. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    PubMed

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges as the time lag for prediction increases. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, it has good scalability and is applicable to large-scale networks. PMID:26496370
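
    A toy illustration of sparse-representation-based variable selection for one target sensor, here with Lasso standing in for the paper's solver; the synthetic flows, the relevant-sensor set, and the regularization weight are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n_sensors, n_time, target = 50, 600, 0
flows = rng.random((n_time, n_sensors))

# Make the target sensor depend on a handful of (possibly distant) sensors
# at the immediately preceding time step, plus noise.
relevant = [3, 17, 42]
flows[1:, target] = (0.5 * flows[:-1, relevant[0]] + 0.3 * flows[:-1, relevant[1]]
                     + 0.2 * flows[:-1, relevant[2]] + 0.05 * rng.standard_normal(n_time - 1))

X = flows[:-1, :]            # all sensors at time t-1 (a first-order predictor)
y = flows[1:, target]        # target sensor at time t

lasso = Lasso(alpha=0.005, max_iter=20000)
lasso.fit(X, y)
context = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
print("selected spatial context:", context, "ground truth:", relevant)
```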

  10. Sparse representation of brain aging: extracting covariance patterns from structural MRI.

    PubMed

    Su, Longfei; Wang, Lubin; Chen, Fanglin; Shen, Hui; Li, Baojuan; Hu, Dewen

    2012-01-01

    An enhanced understanding of how normal aging alters brain structure is urgently needed for the early diagnosis and treatment of age-related mental diseases. Structural magnetic resonance imaging (MRI) is a reliable technique used to detect age-related changes in the human brain. Currently, multivariate pattern analysis (MVPA) enables the exploration of subtle and distributed changes of data obtained from structural MRI images. In this study, a new MVPA approach based on sparse representation has been employed to investigate the anatomical covariance patterns of normal aging. Two groups of participants (group 1: 290 participants; group 2: 56 participants) were evaluated in this study. These two groups were scanned with two 1.5 T MRI machines. In the first group, we obtained the discriminative patterns using a t-test filter and sparse representation step. We were able to distinguish the young from old cohort with a very high accuracy using only a few voxels of the discriminative patterns (group 1: 98.4%; group 2: 96.4%). The experimental results showed that the selected voxels may be categorized into two components according to the two steps in the proposed method. The first component focuses on the precentral and postcentral gyri, and the caudate nucleus, which play an important role in sensorimotor tasks. The strongest volume reduction with age was observed in these clusters. The second component is mainly distributed over the cerebellum, thalamus, and right inferior frontal gyrus. These regions are not only critical nodes of the sensorimotor circuitry but also the cognitive circuitry although their volume shows a relative resilience against aging. Considering the voxels selection procedure, we suggest that the aging of the sensorimotor and cognitive brain regions identified in this study has a covarying relationship with each other. PMID:22590522

  11. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection

    PubMed Central

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges as the time lag for prediction increases. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, it has good scalability and is applicable to large-scale networks. PMID:26496370

  12. Improving low-dose cardiac CT images using 3D sparse representation based processing

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Chen, Yang; Luo, Limin

    2015-03-01

    Cardiac computed tomography (CCT) has been widely used in diagnoses of coronary artery diseases due to the continuously improving temporal and spatial resolution. When helical CT with a lower pitch scanning mode is used, the effective radiation dose can be significant when compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams including high pitch scans using dual source CT scanners and step-and-shot scanning mode for both single source and dual source CT scanners. Additionally, software methods have also been proposed to reduce noise in the reconstructed CT images and thus offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance of a certain imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm from accumulating unnecessary X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, in this paper, a 3D dictionary representation based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.

  13. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    PubMed Central

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-01-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images. PMID:26980176

  14. Sparse Distributed Representation of Odors in a Large-scale Olfactory Bulb Circuit

    PubMed Central

    Yu, Yuguo; McTavish, Thomas S.; Hines, Michael L.; Shepherd, Gordon M.; Valenti, Cesare; Migliore, Michele

    2013-01-01

    In the olfactory bulb, lateral inhibition mediated by granule cells has been suggested to modulate the timing of mitral cell firing, thereby shaping the representation of input odorants. Current experimental techniques, however, do not enable a clear study of how the mitral-granule cell network sculpts odor inputs to represent odor information spatially and temporally. To address this critical step in the neural basis of odor recognition, we built a biophysical network model of mitral and granule cells, corresponding to 1/100th of the real system in the rat, and used direct experimental imaging data of glomeruli activated by various odors. The model allows the systematic investigation and generation of testable hypotheses of the functional mechanisms underlying odor representation in the olfactory bulb circuit. Specifically, we demonstrate that lateral inhibition emerges within the olfactory bulb network through recurrent dendrodendritic synapses when constrained by a range of balanced excitatory and inhibitory conductances. We find that the spatio-temporal dynamics of lateral inhibition plays a critical role in building the glomerular-related cell clusters observed in experiments, through the modulation of synaptic weights during odor training. Lateral inhibition also mediates the development of sparse and synchronized spiking patterns of mitral cells related to odor inputs within the network, with the frequency of these synchronized spiking patterns also modulated by the sniff cycle. PMID:23555237

  15. Automated Variability Selection in Time-domain Imaging Surveys Using Sparse Representations with Learned Dictionaries

    NASA Astrophysics Data System (ADS)

    Wozniak, Przemyslaw R.; Moody, D. I.; Ji, Z.; Brumby, S. P.; Brink, H.; Richards, J.; Bloom, J. S.

    2013-01-01

    Exponential growth in data streams and discovery power delivered by modern time-domain imaging surveys creates a pressing need for variability extraction algorithms that are both fully automated and highly reliable. Current state-of-the-art methods based on image differencing are limited by the fact that for every real variable source the algorithm returns a large number of bogus "detections" caused by atmospheric effects and instrumental signatures coupled with imperfect image processing. Here we present a new approach to this problem inspired by recent advances in computer vision and train the machine directly on pixel data. The training data set comes from the Palomar Transient Factory survey and consists of small images centered around transient candidates with known real/bogus classification. This set of 441-dimensional vectors (21x21 pixel images) is then transformed to a linear representation using the so-called dictionary, an overcomplete basis constructed separately for each class. The learning algorithm captures the fact that the intrinsic dimensionality of the input images is typically much lower than the size of the dictionary, and therefore the data vectors are well approximated with a small number of dictionary elements. This sparse representation can be used to construct informative features for any suitable machine learning classifier. In our preliminary analysis, automatically extracted features approach the performance of features constructed by humans using subject domain knowledge.
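
    The general recipe described above (learn an overcomplete dictionary from cutouts, sparse-code new cutouts, feed the codes to a classifier) can be sketched as follows. This is a hedged illustration on random data: it uses a single shared dictionary for brevity (the survey pipeline builds one per class), scikit-learn's mini-batch dictionary learner and OMP coder as stand-ins, and arbitrary sizes and parameters.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        real = rng.normal(0.0, 1.0, (200, 441))      # flattened 21x21 cutouts (synthetic)
        bogus = rng.normal(0.5, 1.5, (200, 441))
        cutouts = np.vstack([real, bogus])
        labels = np.array([1] * 200 + [0] * 200)

        # Learn an overcomplete dictionary, then sparse-code every cutout against it.
        dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                           random_state=0).fit(cutouts)
        codes = sparse_encode(cutouts, dico.components_, algorithm="omp",
                              n_nonzero_coefs=10)

        # The sparse codes become features for any standard classifier.
        clf = LogisticRegression(max_iter=1000).fit(codes, labels)
        print("training accuracy on synthetic cutouts:", clf.score(codes, labels))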

  16. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    NASA Astrophysics Data System (ADS)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard-dose CT (SDCT) images.

  17. Estimating Anatomically-Correct Reference Model for Craniomaxillofacial Deformity via Sparse Representation

    PubMed Central

    Ren, Yi; Wang, Li; Gao, Yaozong; Tang, Zhen; Chen, Ken Chung; Li, Jianfu; Shen, Steve G.F.; Yan, Jin; Lee, Philip K.M.; Chow, Ben; Xia, James J.; Shen, Dinggang

    2014-01-01

    The success of craniomaxillofacial (CMF) surgery depends not only on the surgical techniques, but also on accurate surgical planning. However, surgical planning for CMF surgery is challenging due to the absence of a patient-specific reference model. In this paper, we present a method to automatically estimate an anatomically correct reference shape of the jaws for a patient requiring orthognathic surgery, a common type of CMF surgery. We employ the sparse representation technique to represent the normal regions of the patient with respect to the normal subjects. The estimated representation is then used to reconstruct a patient-specific reference model with “restored” normal anatomy of the jaws. We validate our method on both synthetic subjects and patients. Experimental results show that our method can effectively reconstruct the normal shape of the jaw for patients. Also, a new quantitative measurement is introduced to quantify the CMF deformity and validate the method quantitatively, which has rarely been done before. PMID:25328919

  18. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

    Background Images embedded in biomedical publications carry rich information that often concisely summarizes key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we thus propose some novel image features for the image categorization purpose, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images of the JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as the testing cases. We first segmented the 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are

  19. Online sparse representation for remote sensing compressed-sensed video sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a rate well below the Nyquist sampling frequency. When CS is applied to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear, non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into Key frames (K frames) and Non-Key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs) and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique according to the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI. These learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), is compared with that of other online sparse representation algorithms. The simulation results show its advantages in reduced reconstruction time and robustness of reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.

  20. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html. PMID:26701675

  1. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR) with new feature histogram maps for the classification of emphysema. First, an overcomplete dictionary of textons is learned via K-SVD learning on image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and thus speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a histogram dissimilarity measure as the distance. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.

  2. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper, the dictionary of textons is first learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct the histograms for texture representation. Finally, classification is performed by using a nearest neighbor classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of the state-of-the-art method based on rotation-invariant local binary pattern histograms and the texture classification method based on texton learning by k-means, which perform among the best of the approaches in the literature.

  3. Clustering-weighted SIFT-based classification method via sparse representation

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Xu, Feng; He, Jun

    2014-07-01

    In recent years, sparse representation-based classification (SRC) has received significant attention due to its high recognition rate. However, the original SRC method requires a rigid alignment, which is crucial for its application. Therefore, features such as SIFT descriptors are introduced into the SRC method, resulting in an alignment-free method. However, a feature-based dictionary always contains considerable useful information for recognition. We explore the relationship of the similarity of the SIFT descriptors to multitask recognition and propose a clustering-weighted SIFT-based SRC method (CWS-SRC). The proposed approach is considerably more suitable for multitask recognition with sufficient samples. Using two public face databases (AR and Yale face) and a self-built car-model database, the performance of the proposed method is evaluated and compared to that of the SRC, SIFT matching, and MKD-SRC methods. Experimental results indicate that the proposed method exhibits better performance in the alignment-free scenario with sufficient samples.

  4. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  5. A Sparse Representation Based Method to Classify Pulmonary Patterns of Diffuse Lung Diseases

    PubMed Central

    Xu, Rui; Tachibana, Rie; Kido, Shoji

    2015-01-01

    We applied and optimized sparse representation (SR) approaches in computer-aided diagnosis (CAD) to classify normal tissues and five kinds of diffuse lung disease (DLD) patterns: consolidation, ground-glass opacity, honeycombing, emphysema, and nodule. Using the K-SVD, which is based on the singular value decomposition (SVD), together with orthogonal matching pursuit (OMP), a satisfactory recognition rate can be achieved, but the experiments were very time-consuming. To reduce the runtime of the method, the K-Means algorithm was substituted for the K-SVD, and the OMP was simplified by searching for all of the desired atoms at one time (OMP1). We proposed three SR based methods for evaluation: SR1 (K-SVD+OMP), SR2 (K-Means+OMP), and SR3 (K-Means+OMP1). 1161 volumes of interest (VOIs) were used to optimize the parameters and train each method, and 1049 VOIs were adopted to evaluate the performances of the methods. The SR based methods were effective in recognizing the DLD patterns (SR1: 96.1%, SR2: 95.6%, SR3: 96.4%) and significantly better than the baseline methods. Furthermore, when K-Means and OMP1 were applied, the runtime of the SR based methods could be reduced by 98.2% and 55.2%, respectively. Therefore, we consider the method using K-Means and OMP1 (SR3) to be efficient for the CAD of the DLDs. PMID:25821509
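
    The OMP1 simplification mentioned above (selecting all desired atoms in a single correlation pass instead of one per greedy iteration) can be sketched as below. The dictionary and signal are random placeholders, and this is an illustrative reading of OMP1 rather than the authors' exact implementation.

        import numpy as np

        def omp1(D, x, n_atoms):
            """Single-pass OMP: pick the n_atoms columns of D most correlated with x,
            then solve one least-squares problem on that support."""
            corr = np.abs(D.T @ x)
            support = np.argsort(-corr)[:n_atoms]
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            code = np.zeros(D.shape[1])
            code[support] = coef
            return code

        rng = np.random.default_rng(3)
        D = rng.normal(size=(64, 256))
        D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
        x = D[:, [5, 40, 200]] @ np.array([1.0, -0.7, 0.4])  # signal built from 3 atoms
        code = omp1(D, x, n_atoms=3)
        print("recovered support:", np.flatnonzero(code))    # expected: [5, 40, 200]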

  6. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-01-01

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement. PMID:27223287

  7. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with the learned dictionary. The high-frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. The experimental results on IKONOS and QuickBird satellites demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in terms of both visual effect and objective evaluation.
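
    One common way to implement an SR fusion rule for the low-frequency bands is sketched below, under assumptions that may differ from the paper: the NSST decomposition is omitted, the max-activity rule is a generic choice rather than the authors' stated rule, and all sizes and parameters are placeholders. Both sources are coded over the same learned dictionary, and for each patch the code with the larger l1 activity is kept before reconstruction.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

        rng = np.random.default_rng(10)
        pan_patches = rng.random((300, 64))              # 8x8 low-frequency patches (synthetic)
        ms_patches = rng.random((300, 64))               # co-registered intensity-component patches

        dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
        dico.fit(np.vstack([pan_patches, ms_patches]))

        code_pan = sparse_encode(pan_patches, dico.components_, algorithm="omp", n_nonzero_coefs=5)
        code_ms = sparse_encode(ms_patches, dico.components_, algorithm="omp", n_nonzero_coefs=5)

        # Patch-wise max-activity rule: keep whichever code has the larger l1 norm.
        pick_pan = np.abs(code_pan).sum(axis=1) >= np.abs(code_ms).sum(axis=1)
        fused_codes = np.where(pick_pan[:, None], code_pan, code_ms)
        fused_patches = fused_codes @ dico.components_
        print("fused low-frequency patch block:", fused_patches.shape)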

  8. Dim moving target tracking algorithm based on particle discriminative sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Zhengzhou; Li, Jianing; Ge, Fengzeng; Shao, Wanxing; Liu, Bing; Jin, Gang

    2016-03-01

    A small dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms at low signal-to-noise ratio (SNR). A target tracking algorithm based on particle filtering and discriminative sparse representation is proposed in this paper to cope with the uncertainty of dim moving target tracking. The weight of each particle is the crucial factor in ensuring the accuracy of dim target tracking for a particle filter (PF), which can achieve excellent performance even under non-linear and non-Gaussian motion. In the discriminative over-complete dictionary constructed from the image sequence, the target dictionary describes the target signal and the background dictionary embeds the background clutter. The difference between target particles and background particles is thereby enhanced to a great extent, and the weight of each particle is then measured by means of the residual after reconstruction using the prescribed number of target atoms and their corresponding coefficients. The movement state of the dim moving target is then estimated and finally tracked by these weighted particles. Meanwhile, the subspace of the over-complete dictionary is updated online by a stochastic estimation algorithm. Experiments were conducted, and the results show that the proposed algorithm can improve the performance of moving target tracking by enhancing the consistency between the posterior probability distribution and the moving target state.

  9. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation

    PubMed Central

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-01-01

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér–Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement. PMID:27223287

  10. A sparse representation based method to classify pulmonary patterns of diffuse lung diseases.

    PubMed

    Zhao, Wei; Xu, Rui; Hirano, Yasushi; Tachibana, Rie; Kido, Shoji

    2015-01-01

    We applied and optimized sparse representation (SR) approaches in computer-aided diagnosis (CAD) to classify normal tissues and five kinds of diffuse lung disease (DLD) patterns: consolidation, ground-glass opacity, honeycombing, emphysema, and nodule. Using the K-SVD, which is based on the singular value decomposition (SVD), together with orthogonal matching pursuit (OMP), a satisfactory recognition rate can be achieved, but the experiments were very time-consuming. To reduce the runtime of the method, the K-Means algorithm was substituted for the K-SVD, and the OMP was simplified by searching for all of the desired atoms at one time (OMP1). We proposed three SR based methods for evaluation: SR1 (K-SVD+OMP), SR2 (K-Means+OMP), and SR3 (K-Means+OMP1). 1161 volumes of interest (VOIs) were used to optimize the parameters and train each method, and 1049 VOIs were adopted to evaluate the performances of the methods. The SR based methods were effective in recognizing the DLD patterns (SR1: 96.1%, SR2: 95.6%, SR3: 96.4%) and significantly better than the baseline methods. Furthermore, when K-Means and OMP1 were applied, the runtime of the SR based methods could be reduced by 98.2% and 55.2%, respectively. Therefore, we consider the method using K-Means and OMP1 (SR3) to be efficient for the CAD of the DLDs. PMID:25821509

  11. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  12. Prediction of protein-protein interactions with clustered amino acids and weighted sparse representation.

    PubMed

    Huang, Qiaoying; You, Zhuhong; Zhang, Xiaofeng; Zhou, Yong

    2015-01-01

    With the completion of the Human Genome Project, bioscience has entered the era of the genome and proteome. Therefore, protein-protein interaction (PPI) research is becoming more and more important. Life activities are inseparable from protein-protein interactions, as in DNA synthesis, gene transcription activation, protein translation, etc. Though many methods based on biological experiments and machine learning have been proposed, they all require long training times and achieve imprecise accuracy. How to efficiently and accurately predict PPIs is still a big challenge. To take up this challenge, we developed a new predictor by incorporating the reduced amino acid alphabet (RAAA) information into the general form of pseudo-amino acid composition (PseAAC) and coupling it with weighted sparse representation-based classification (WSRC). The remarkable advantage of introducing the reduced amino acid alphabet is the ability to avoid the notorious dimensionality disaster and overfitting problems in statistical prediction. Additionally, experiments have proven that our method achieves good performance in both low- and high-dimensional feature spaces. Among all of the experiments performed on the PPI data of Saccharomyces cerevisiae, the best one achieved 90.91% accuracy, 94.17% sensitivity, 87.22% precision and an 83.43% Matthews correlation coefficient (MCC) value. In order to evaluate the prediction ability of our method, extensive experiments were performed to compare it with a state-of-the-art technique, the support vector machine (SVM). The achieved results show that the proposed approach is very promising for predicting PPIs, and it can be a helpful supplement for PPI prediction. PMID:25984606
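
    A loose sketch of a weighted sparse-representation classifier (WSRC) is given below: training samples closer to the test sample receive a lighter l1 penalty, and the class with the smallest reconstruction residual wins. The feature vectors are random stand-ins for the PseAAC/RAAA pair descriptors, and scikit-learn's Lasso is used as the l1 solver; none of this reproduces the authors' code or parameters.

        import numpy as np
        from sklearn.linear_model import Lasso

        def wsrc_predict(x, X_train, y_train, alpha=0.01):
            dists = np.linalg.norm(X_train - x, axis=1)
            weights = dists / dists.max()                 # larger distance -> heavier penalty
            D = (X_train / weights[:, None]).T            # column rescaling == weighted l1 penalty
            b = Lasso(alpha=alpha, max_iter=5000).fit(D, x).coef_
            coefs = b / weights                           # undo the rescaling
            residuals = {c: np.linalg.norm(x - X_train.T @ np.where(y_train == c, coefs, 0.0))
                         for c in np.unique(y_train)}
            return min(residuals, key=residuals.get)      # class with smallest residual

        rng = np.random.default_rng(4)
        X_pos = rng.normal(1.0, 1.0, (50, 40))            # "interacting" pairs (synthetic features)
        X_neg = rng.normal(-1.0, 1.0, (50, 40))           # "non-interacting" pairs
        X_train = np.vstack([X_pos, X_neg])
        y_train = np.array([1] * 50 + [0] * 50)
        probe = rng.normal(1.0, 1.0, 40)                  # unseen "interacting" pair
        print("prediction:", wsrc_predict(probe, X_train, y_train))   # expected: 1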

  13. Prediction of Protein–Protein Interactions with Clustered Amino Acids and Weighted Sparse Representation

    PubMed Central

    Huang, Qiaoying; You, Zhuhong; Zhang, Xiaofeng; Zhou, Yong

    2015-01-01

    With the completion of the Human Genome Project, bioscience has entered the era of the genome and proteome. Therefore, protein–protein interaction (PPI) research is becoming more and more important. Life activities are inseparable from protein–protein interactions, as in DNA synthesis, gene transcription activation, protein translation, etc. Though many methods based on biological experiments and machine learning have been proposed, they all require long training times and achieve imprecise accuracy. How to efficiently and accurately predict PPIs is still a big challenge. To take up this challenge, we developed a new predictor by incorporating the reduced amino acid alphabet (RAAA) information into the general form of pseudo-amino acid composition (PseAAC) and coupling it with weighted sparse representation-based classification (WSRC). The remarkable advantage of introducing the reduced amino acid alphabet is the ability to avoid the notorious dimensionality disaster and overfitting problems in statistical prediction. Additionally, experiments have proven that our method achieves good performance in both low- and high-dimensional feature spaces. Among all of the experiments performed on the PPI data of Saccharomyces cerevisiae, the best one achieved 90.91% accuracy, 94.17% sensitivity, 87.22% precision and an 83.43% Matthews correlation coefficient (MCC) value. In order to evaluate the prediction ability of our method, extensive experiments were performed to compare it with a state-of-the-art technique, the support vector machine (SVM). The achieved results show that the proposed approach is very promising for predicting PPIs, and it can be a helpful supplement for PPI prediction. PMID:25984606

  14. Intrinsic Functional Component Analysis via Sparse Representation on Alzheimer's Disease Neuroimaging Initiative Database

    PubMed Central

    Jiang, Xi; Zhang, Xin

    2014-01-01

    Alzheimer's disease (AD) is the most common type of dementia (accounting for 60% to 80% of cases) and is the fifth leading cause of death for people who are 65 or older. By 2050, one new case of AD in the United States is expected to develop every 33 seconds. Unfortunately, there is no available effective treatment that can stop or slow the death of neurons that causes AD symptoms. On the other hand, it is widely believed that AD starts before the development of the associated symptoms, so its prestages, including mild cognitive impairment (MCI) or even significant memory concern (SMC), have received increasing attention, not only because of their potential as precursors of AD, but also as possible predictors of conversion to other neurodegenerative diseases. Although these prestages have been defined clinically, accurate/efficient diagnosis is still challenging. Moreover, the brain functional abnormalities behind those alterations and conversions are still unclear. In this article, by developing novel sparse representations of whole-brain resting-state functional magnetic resonance imaging signals and by using the most up-to-date Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we successfully identified multiple functional components simultaneously, which potentially represent the intrinsic functional networks involved in resting-state activities. Interestingly, these identified functional components contain all the resting-state networks obtained from traditional independent-component analysis. Moreover, the features derived from those functional components yield high classification accuracy for both AD (94%) and MCI (92%) versus normal controls. Even for SMC, we still achieve 92% accuracy. PMID:24846640

  15. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot sufficiently convey these possible changes in the training phase, and this has become one of the restrictions on improving face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set, and devise a representation-based method to perform face recognition. The generated virtual samples reflect possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more essential facial information is retained. Also, the uncertainty of the training data is reduced as the number of training samples increases, which is beneficial for the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform the classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
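
    A small illustration of the virtual-sample idea follows, assuming "multiplication of two images" means the elementwise (pixel-wise) product; the data, image size, and subject counts are invented, and the subsequent K-nearest-neighbor representation step is omitted.

        import numpy as np

        rng = np.random.default_rng(5)
        subjects, per_subject, dim = 5, 2, 32 * 32
        X = rng.random((subjects * per_subject, dim))     # original training faces, flattened
        y = np.repeat(np.arange(subjects), per_subject)

        # For each subject, multiply its two originals elementwise to create one virtual sample.
        virtual_X, virtual_y = [], []
        for s in range(subjects):
            a, b = X[y == s]
            virtual_X.append(a * b)                       # assumed pixel-wise product
            virtual_y.append(s)

        X_aug = np.vstack([X, np.asarray(virtual_X)])
        y_aug = np.concatenate([y, np.asarray(virtual_y)])
        print("training set grew from", len(X), "to", len(X_aug), "samples")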

  16. Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection

    SciTech Connect

    Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.

    2015-07-28

    An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.

  17. Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation.

    PubMed

    Dong, Weisheng; Fu, Fazuo; Shi, Guangming; Cao, Xun; Wu, Jinjian; Li, Guangyu

    2016-05-01

    Hyperspectral imaging has many applications, from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and an HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to the learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. The experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency. PMID:27019486
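
    The non-negative coding step alone can be sketched with a generic NNLS solver, as below; the clustering-based structured prior, the dictionary learning from the LR image, and the fusion with the HR reference are all omitted, and every array is a synthetic placeholder.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(6)
        n_bands, n_atoms, n_pixels = 31, 12, 100
        dictionary = np.abs(rng.normal(size=(n_bands, n_atoms)))      # prototype reflectance spectra
        abundances = np.abs(rng.normal(size=(n_atoms, n_pixels)))
        abundances[rng.random(abundances.shape) > 0.3] = 0.0          # sparse ground-truth codes
        pixels = dictionary @ abundances                              # noiseless pixel spectra

        # Code each pixel spectrum as a non-negative combination of dictionary atoms.
        codes = np.column_stack([nnls(dictionary, pixels[:, j])[0] for j in range(n_pixels)])
        rel_err = np.linalg.norm(pixels - dictionary @ codes) / np.linalg.norm(pixels)
        print("relative reconstruction error:", rel_err)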

  18. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, when the white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using leave-one-out cross-validation, as well as on 10 other unseen testing subjects. Our method achieved high accuracy in terms of the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615

  19. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data-matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big-data of tfMRI and rsfMRI signals across all subjects in HCP Q1 release. PMID:25732072
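
    The two-stage factorization can be sketched as two rounds of dictionary learning, as below. Signals are synthetic, matrix sizes are arbitrary, and scikit-learn's mini-batch learner stands in for whatever solver the authors used.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(7)
        n_subjects, n_timepoints, n_signals = 4, 120, 300

        # Stage 1: learn a subject-specific dictionary of temporal atoms per subject.
        subject_dicts = []
        for s in range(n_subjects):
            signals = rng.normal(size=(n_signals, n_timepoints))       # one subject's fMRI signals
            stage1 = MiniBatchDictionaryLearning(n_components=20, alpha=1.0,
                                                 random_state=s).fit(signals)
            subject_dicts.append(stage1.components_)                   # shape (20, n_timepoints)

        # Stage 2: stack all subject dictionaries and factorize them again into a
        # cross-subject common dictionary.
        stacked = np.vstack(subject_dicts)
        stage2 = MiniBatchDictionaryLearning(n_components=10, alpha=1.0,
                                             random_state=0).fit(stacked)
        print("common dictionary shape:", stage2.components_.shape)    # (10, n_timepoints)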

  20. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with a sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  1. Machinery vibration signal denoising based on learned dictionary and sparse representation

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-07-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods. However, those methods are based on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal. In order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to extract the most relevant atoms from the dictionary. Finally, the denoised signal is calculated from the sparse vector and the learned dictionary. A simulated signal and a real bearing fault signal are utilized to evaluate the improved performance of the proposed method through comparison with several kinds of denoising algorithms. Its computational efficiency is then demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms while remaining computationally efficient.
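
    A rough sketch of this pipeline is given below, using scikit-learn's online (mini-batch) dictionary learner and OMP coding on non-overlapping signal segments. The bearing-fault data, atom length, and stopping rules of the paper are not reproduced; the signal, window size, and sparsity level are assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

        rng = np.random.default_rng(8)
        t = np.linspace(0, 1, 4096)
        clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        noisy = clean + 0.4 * rng.normal(size=t.size)

        win = 64
        segments = np.array([noisy[i:i + win] for i in range(0, noisy.size - win + 1, win)])

        # Online dictionary learning on the noisy segments, then sparse coding with OMP.
        dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                           random_state=0).fit(segments)
        codes = sparse_encode(segments, dico.components_, algorithm="omp", n_nonzero_coefs=4)
        denoised = (codes @ dico.components_).ravel()

        rel_err = np.linalg.norm(denoised - clean) / np.linalg.norm(clean)
        print("relative error of the denoised signal:", rel_err)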

  2. Sparse Shape Representation using the Laplace-Beltrami Eigenfunctions and Its Application to Modeling Subcortical Structures

    PubMed Central

    Kim, Seung-Goo; Chung, Moo K.; Schaefer, Stacey M.; van Reekum, Carien; Davidson, Richard J.

    2013-01-01

    We present a new sparse shape modeling framework based on the Laplace-Beltrami (LB) eigenfunctions. Traditionally, the LB eigenfunctions are used as a basis for intrinsically representing surface shapes by forming a Fourier series expansion. To reduce high-frequency noise, only the first few terms are used in the expansion and higher-frequency terms are simply thrown away. However, some lower-frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we propose to filter out only the significant eigenfunctions by imposing an l1-penalty. The new sparse framework can further avoid the additional surface-based smoothing often used in the field. The proposed approach is applied to investigate the influence of age (38–79 years) and gender on amygdala and hippocampus shapes in the normal population. In addition, we show how the emotional response is related to the anatomy of the subcortical structures. PMID:23783079
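
    A plausible form of the sparse model implied above, assuming the surface coordinate function f is expanded in LB eigenfunctions psi_j with coefficients beta_j (the notation and the plain LASSO-type penalty are our assumptions, not necessarily the authors' exact formulation):

        \hat{\beta} = \arg\min_{\beta} \Big\| f - \sum_{j=0}^{k} \beta_j \psi_j \Big\|_2^2 + \lambda \sum_{j=0}^{k} |\beta_j|

    Here the l1 term drives insignificant coefficients to exactly zero, so only the eigenfunctions that genuinely contribute to the surface reconstruction are retained, regardless of their frequency order.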

  3. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD based (K=4, L=2), the ANOVA based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.

  4. Classification of birefringence in mode-locked fiber lasers using machine learning and sparse representation.

    PubMed

    Fu, Xing; Brunton, Steven L; Nathan Kutz, J

    2014-04-01

    It has been observed that changes in the birefringence, which are difficult or impossible to directly measure, can significantly affect mode-locking in a fiber laser. In this work we develop techniques to estimate the effective birefringence by comparing a test measurement of a given objective function against a learned library. In particular, a toroidal search algorithm is applied to the laser cavity for various birefringence values by varying the waveplate and polarizer angles at incommensurate angular frequencies, thus producing a time-series of the objective function. The resulting time series, which is converted to a spectrogram and then dimensionally reduced with a singular value decomposition, is then labelled with the corresponding effective birefringence and concatenated into a library of modes. A sparse search algorithm (l1-norm optimization) is then applied to a test measurement in order to classify the birefringence of the fiber laser. Simulations show that the sparse search algorithm performs very well in recognizing cavity birefringence even in the presence of noise and/or noisy measurements. Once classified, the wave plates and polarizers can be adjusted using servo-control motors to the optimal positions obtained from the toroidal search. The result is an efficient, self-tuning laser. PMID:24718230

  5. A tight and explicit representation of Q in sparse QR factorization

    SciTech Connect

    Ng, E.G.; Peyton, B.W.

    1992-05-01

    In the QR factorization of a sparse m × n matrix A (m ≥ n), the orthogonal factor Q is often stored implicitly as a lower trapezoidal matrix H known as the Householder matrix. This paper presents a simple characterization of the row structure of Q, which could be used as the basis for a sparse data structure that can store Q explicitly. The new characterization is a simple extension of a well known row-oriented characterization of the structure of H. Hare, Johnson, Olesky, and van den Driessche have recently provided a complete sparsity analysis of the QR factorization. Let U be the matrix consisting of the first n columns of Q. Using results from that analysis, we show that the data structures for H and U resulting from our characterizations are tight when A is a strong Hall matrix. We also show that H and the lower trapezoidal part of U have the same sparsity characterization when A is strong Hall. We then show that this characterization can be extended to any weak Hall matrix that has been permuted into block upper triangular form. Finally, we show that permuting to block triangular form never increases the fill incurred during the factorization.

  6. Harnessing data structure for recovery of randomly missing structural vibration responses time history: Sparse representation versus low-rank structure

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2016-06-01

    Randomly missing data in structural vibration response time histories often occurs in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Besides, to alleviate the wireless data sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is an ill-posed inverse problem, however. This paper explicitly harnesses the structure of the structural vibration response data itself to address this inverse problem. The key is an empirical, but often practically true, observation: typically only a few modes are active in the structural vibration responses; hence the single-channel data vector has a sparse representation (in the frequency domain), or the multi-channel data matrix has a low-rank structure (revealed by singular value decomposition). Exploiting such prior knowledge of the data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on a few structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
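
    For the single-channel sparse-recovery alternative, a hedged sketch follows: a response with a few active modes is expressed in a DCT basis, half the samples are dropped at random, and the missing values are estimated by fitting a sparse code on the observed samples only. OMP is used here as the sparse solver in place of an ℓ1 program, the signal is noise-free, and all sizes are arbitrary.

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import orthogonal_mp

        rng = np.random.default_rng(9)
        n = 1024
        Psi = idct(np.eye(n), norm="ortho", axis=0)       # DCT synthesis basis, columns = atoms

        coef_true = np.zeros(n)
        coef_true[[24, 62]] = [1.0, 0.6]                  # only two active "modes"
        signal = Psi @ coef_true

        observed = rng.random(n) > 0.5                    # roughly 50% of samples kept at random
        coef_hat = orthogonal_mp(Psi[observed], signal[observed], n_nonzero_coefs=10)
        recovered = Psi @ coef_hat

        missing = ~observed
        rel_err = (np.linalg.norm(recovered[missing] - signal[missing])
                   / np.linalg.norm(signal[missing]))
        print("relative error on the missing samples:", rel_err)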

  7. Iris classification based on sparse representations using on-line dictionary learning for large-scale de-duplication applications.

    PubMed

    Nalla, Pattabhi Ramaiah; Chalavadi, Krishna Mohan

    2015-01-01

    De-duplication of biometrics is not scalable when the number of people to be enrolled into the biometric system runs into billions, while creating a unique identity for every person. In this paper, we propose an iris classification based on sparse representation of log-Gabor wavelet features using on-line dictionary learning (ODL) for large-scale de-duplication applications. Iris classes based on iris fiber structures, namely stream, flower, jewel, and shaker, are used for faster retrieval of identities. Also, an iris adjudication process is illustrated by comparing the matched iris-pair images side by side to make the decision on the identification score using color coding. Iris classification and adjudication are included in the iris de-duplication architecture to speed up the identification process and to reduce identification errors. The efficacy of the proposed classification approach is demonstrated on the standard iris database UPOL. PMID:26069877

  8. Protein structure determination by combining sparse NMR data with evolutionary couplings

    PubMed Central

    Tang, Yuefeng; Huang, Yuanpeng Janet; Hopf, Thomas A.; Sander, Chris; Marks, Debora S.; Montelione, Gaetano T.

    2015-01-01

    Accurate protein structure determination by NMR is challenging for larger proteins, for which experimental data is often incomplete and ambiguous. Fortunately, the upsurge in evolutionary sequence information and advances in maximum entropy statistical methods now provide a rich complementary source of structural constraints. We have developed a hybrid approach (EC-NMR) combining sparse NMR data with evolutionary residue-residue couplings, and demonstrate accurate structure determination for several 6 to 41 kDa proteins. PMID:26121406

  9. Concept Abstractness and the Representation of Noun-Noun Combinations

    ERIC Educational Resources Information Center

    Xu, Xu; Paulson, Lisa

    2013-01-01

    Research on noun-noun combinations has been largely focusing on concrete concepts. Three experiments examined the role of concept abstractness in the representation of noun-noun combinations. In Experiment 1, participants provided written interpretations for phrases constituted by nouns of varying degrees of abstractness. Interpretive focus (the…

  10. Radio frequency (RF) transient classification using sparse representations over learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Myers, Kary L.; Pawley, Norma H.

    2011-09-01

    Automatic classification of transitory or pulsed radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such transients are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. We compare two dictionary learning methods from the image analysis literature, the K-SVD algorithm and Hebbian learning, and extend them for use with RF data. Both methods allow us to learn discriminative RF dictionaries directly from data without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. In this paper we compare the two dictionary learning methods and discuss how their performance changes as a function of dictionary training parameters. We demonstrate that learned dictionary techniques are suitable for pulsed RF analysis and present results with varying background clutter and noise levels.
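
    Since the record compares K-SVD and Hebbian dictionary learning, a compact generic K-SVD sketch is included below for orientation; the initialization, sparsity level and iteration count are illustrative, and this is not the authors' RF-specific implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def ksvd(Y, n_atoms=64, sparsity=5, n_iter=10, seed=0):
        """Y: (d, n) matrix of training signals (e.g., windowed RF snippets) in columns."""
        rng = np.random.default_rng(seed)
        D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
        D /= np.linalg.norm(D, axis=0, keepdims=True)
        for _ in range(n_iter):
            X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)   # sparse coding stage (OMP)
            for k in range(n_atoms):                            # dictionary update stage
                users = np.nonzero(X[k, :])[0]
                if users.size == 0:
                    continue
                E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
                U, s, Vt = np.linalg.svd(E, full_matrices=False)
                D[:, k] = U[:, 0]
                X[k, users] = s[0] * Vt[0, :]
        return D, X
    ```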

  11. Temperature variation effects on sparse representation of guided-waves for damage diagnosis in pipelines

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2015-04-01

    Multiple ultrasonic guided-wave modes propagating along a pipe travel with different velocities, which are themselves a function of frequency. Reflections from the features of the structure (e.g., boundaries, pipe welding, damage, etc.), and their complex superposition, add to the complexity of guided waves. Guided-wave based damage diagnosis of pipelines becomes even more challenging when environmental and operational conditions (EOCs) vary (e.g., temperature, flow rate, inner pressure, etc.). These complexities make guided-wave based damage diagnosis of operating pipelines a challenging task. This paper reviews the approaches to date addressing these challenges, and highlights the preferred characteristics of a method that simplifies guided-wave signals for damage diagnosis purposes. A method is proposed to extract a sparse subset of guided-wave signals in the time domain, while retaining optimal damage information for detection purposes. In this paper, the general concept of this method is proved through an extensive set of experiments. The effects of temperature variation on the detection performance of the proposed method, and on the discriminatory power of the extracted damage-sensitive features, are investigated. The potential of the proposed method for real-time damage detection is illustrated for a wide range of temperature variation scenarios (i.e., temperature differences between training and test data varying between -2°C and 13°C).

  12. Automatic Myonuclear Detection in Isolated Single Muscle Fibers Using Robust Ellipse Fitting and Sparse Representation

    PubMed Central

    Su, Hai; Xing, Fuyong; Lee, Jonah D.; Peterson, Charlotte A.; Yang, Lin

    2015-01-01

    Accurate and robust detection of myonuclei in isolated single muscle fibers is required to calculate myonuclear domain size. However, this task is challenging because of: 1) shape and size variations of the nuclei, 2) overlapping nuclear clumps, and 3) multiple z-stack images with out-of-focus regions. In this paper, we propose a novel automatic detection algorithm to robustly quantify myonuclei in isolated single skeletal muscle fibers. The original z-stack images are first converted into one all-in-focus image using multi-focus image fusion. A sufficient number of ellipse fitting hypotheses are then generated from the myonuclei contour segments using heteroscedastic errors-in-variables (HEIV) regression. A set of representative training samples and a set of discriminative features are selected by a two-stage sparse model. The selected samples with representative features are utilized to train a classifier to select the best candidates. A modified inner geodesic distance based mean-shift clustering algorithm is used to produce the final nuclei detection results. The proposed method was extensively tested using 42 sets of z-stack images containing over 1,500 myonuclei. The method demonstrates excellent results that are better than current state-of-the-art approaches. PMID:26356342

  13. Turbulent heat transfer from a sparsely vegetated surface - Two-component representation

    NASA Technical Reports Server (NTRS)

    Otterman, J.; Novak, M. D.; Starr, D. O'C.

    1993-01-01

    The conventional calculation of heat fluxes from a vegetated surface, involving a coefficient of turbulent heat transfer that increases logarithmically with surface roughness, is inappropriate for such highly structured surfaces as desert scrub or open forest. An approach is developed here for computing sensible heat flux from sparsely vegetated surfaces, where the absorption of insolation and the transfer of absorbed heat to the atmosphere are calculated separately for the plants and for the soil. This approach is applied to a desert-scrub surface in the northern Sinai, for which the turbulent transfer coefficient of sensible heat flux from the plants is much larger than that from the soil below, as shown by an analysis of plant, soil, and air temperatures. The plant density is expressed as the sum of the products (plant height) x (plant diameter) of the plants per unit horizontal surface area. The solar heat absorbed by the plants is assumed to be transferred immediately to the airflow. The effective turbulent transfer coefficient k(g-eff) for sensible heat from the desert-scrub/soil surface computed under this assumption increases sharply with increasing solar zenith angle, as the plants absorb a greater fraction of the incoming irradiation. The surface absorptivity (the coalbedo) also increases sharply with increasing solar zenith angle, and thus the sensible heat flux from such complex surfaces is a much broader function of time of day than when computed under constant k(g-eff) and constant albedo assumptions.

  14. Combination of direct matching and collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Chongyang

    2013-06-01

    It has been proved that representation-based classification (RBC) can achieve high accuracy in face recognition. However, conventional RBC has a very high computational cost. Collaborative representation proposed in [1] not only has the advantages of RBC but also is computationally very efficient. In this paper, a combination of direct matching of images and collaborative representation is proposed for face recognition. Experimental results show that the proposed method consistently classifies more accurately than collaborative representation alone. The underlying reason is that direct matching of images and collaborative representation use different ways to calculate the dissimilarity between the test sample and the training samples. As a result, the score obtained using direct matching of images is very complementary to the score obtained using collaborative representation. Indeed, the analysis shows that the matching scores generated from direct matching of images and collaborative representation always have a low correlation. This allows the proposed method to exploit more information for face recognition and to produce a better result.
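
    A hedged sketch of this score-fusion idea: a closed-form, ridge-regularized collaborative representation and plain nearest-sample direct matching each yield per-class scores, which are normalized and combined. The fusion weight and normalization are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    def cr_residuals(X, labels, y, lam=1e-3):
        """X: (d, n) training samples in columns; labels: (n,) class labels; y: (d,) test sample."""
        A = X.T @ X + lam * np.eye(X.shape[1])
        alpha = np.linalg.solve(A, X.T @ y)                 # collaborative representation (closed form)
        return {c: np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
                for c in np.unique(labels)}

    def direct_match_scores(X, labels, y):
        d = np.linalg.norm(X - y[:, None], axis=0)          # distances to every training sample
        return {c: d[labels == c].min() for c in np.unique(labels)}

    def fuse_and_classify(X, labels, y, w=0.5):
        s1, s2 = cr_residuals(X, labels, y), direct_match_scores(X, labels, y)
        norm = lambda s: {c: v / (sum(s.values()) + 1e-12) for c, v in s.items()}
        s1, s2 = norm(s1), norm(s2)
        fused = {c: w * s1[c] + (1 - w) * s2[c] for c in s1}
        return min(fused, key=fused.get)                    # smallest fused dissimilarity wins
    ```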

  15. Ring artifacts removal via spatial sparse representation in cone beam CT

    NASA Astrophysics Data System (ADS)

    Li, Zhongyuan; Li, Guang; Sun, Yi; Luo, Shouhua

    2016-03-01

    This paper is about a ring artifact removal method for cone beam CT. Cone beam CT images often suffer from ring artifacts, which are caused by the non-uniform responses of the elements in the detectors. Conventional ring artifact removal methods focus on the correlation of the elements and the ring artifacts' structural characteristics in either the sinogram domain or the cross-section image. The challenge in the conventional methods is how to distinguish the artifacts from the intrinsic structures; hence they often give rise to blurred results due to over-processing. In this paper, we investigate the characteristics of the ring artifacts in spatial space: unlike the continuous 3D texture features of the scanned objects, the ring artifacts appear discontinuous in spatial space, specifically along the z-axis. Thus the ring artifacts can be recognized more easily in spatial space than in the cross-section. As a result, we choose a dictionary representation for ring artifact removal due to its high sensitivity to structural information. We verified our approach in both spatial space and the coronal section, and the experimental results demonstrate that our method can remove the artifacts efficiently while maintaining image details.

  16. Automatic approach to solve the morphological galaxy classification problem using the sparse representation technique and dictionary learning

    NASA Astrophysics Data System (ADS)

    Diaz-Hernandez, R.; Ortiz-Esquivel, A.; Peregrina-Barreto, H.; Altamirano-Robles, L.; Gonzalez-Bernal, J.

    2016-04-01

    The observation of celestial objects in the sky is a practice that helps astronomers to understand the way in which the Universe is structured. However, due to the large number of objects observed with modern telescopes, analyzing them by hand is a difficult task. An important part of galaxy research is morphological structure classification based on the Hubble sequence. In this research, we present an approach to solve the morphological galaxy classification problem in an automatic way by using the Sparse Representation technique and dictionary learning with K-SVD. For the tests in this work, we use a database of galaxies extracted from the Principal Galaxy Catalog (PGC) and the APM Equatorial Catalogue of Galaxies, obtaining a total of 2403 useful galaxies. In order to represent each galaxy frame, we propose to calculate a set of 20 features such as Hu's invariant moments, galaxy nucleus eccentricity, Gabor galaxy ratio and some other features commonly used in galaxy classification. A stage of feature relevance analysis was performed using Relief-f in order to determine the best features for the classification tests using 2, 3, 4, 5, 6 and 7 galaxy classes, building signal vectors of different lengths from the most important features. For the classification task, we use 20 random cross-validation runs to evaluate classification accuracy with all signal sets, achieving a score of 82.27 % for 2 galaxy classes and up to 44.27 % for 7 galaxy classes.

  17. Automatic approach to solve the morphological galaxy classification problem using the sparse representation technique and dictionary learning

    NASA Astrophysics Data System (ADS)

    Diaz-Hernandez, R.; Ortiz-Esquivel, A.; Peregrina-Barreto, H.; Altamirano-Robles, L.; Gonzalez-Bernal, J.

    2016-06-01

    The observation of celestial objects in the sky is a practice that helps astronomers to understand the way in which the Universe is structured. However, due to the large number of objects observed with modern telescopes, analyzing them by hand is a difficult task. An important part of galaxy research is morphological structure classification based on the Hubble sequence. In this research, we present an approach to solve the morphological galaxy classification problem in an automatic way by using the Sparse Representation technique and dictionary learning with K-SVD. For the tests in this work, we use a database of galaxies extracted from the Principal Galaxy Catalog (PGC) and the APM Equatorial Catalogue of Galaxies, obtaining a total of 2403 useful galaxies. In order to represent each galaxy frame, we propose to calculate a set of 20 features such as Hu's invariant moments, galaxy nucleus eccentricity, Gabor galaxy ratio and some other features commonly used in galaxy classification. A stage of feature relevance analysis was performed using Relief-f in order to determine the best features for the classification tests using 2, 3, 4, 5, 6 and 7 galaxy classes, building signal vectors of different lengths from the most important features. For the classification task, we use 20 random cross-validation runs to evaluate classification accuracy with all signal sets, achieving a score of 82.27 % for 2 galaxy classes and up to 44.27 % for 7 galaxy classes.

  18. Application of Unsupervised Clustering using Sparse Representations on Learned Dictionaries to develop Land Cover Classifications in Arctic Landscapes

    NASA Astrophysics Data System (ADS)

    Rowland, J. C.; Moody, D. I.; Brumby, S.; Gangodagamage, C.

    2012-12-01

    Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. Successful application of novel unsupervised feature extraction and clustering algorithms to land cover classification requires the ability to determine which landscape attributes are represented by the automated clustering. A closely related challenge is learning how to precondition the input data streams to the unsupervised classification algorithms in order to obtain clusters that represent land cover categories relevant to land-surface change and modeling applications. We present results from an ongoing effort to apply novel clustering methodologies, developed primarily for neuroscience machine vision applications, to the environmental sciences. We use a Hebbian learning rule to build spectral-textural dictionaries that are adapted to the data. We learn our dictionaries from millions of overlapping image patches and then use a pursuit search to generate sparse classification features. These sparse representations of pixel patches are used to perform unsupervised k-means clustering. In our application, we use 8-band multispectral Worldview-2 data from three arctic study areas: Barrow, Alaska; the Selawik River, Alaska; and a watershed near the Mackenzie River delta in northwest Canada. Our goal is to develop a robust classification methodology that will allow for the automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties (e.g., soil moisture and inundation), and topographic/geomorphic characteristics. The challenge of developing a meaningful land cover classification includes both learning how to optimize the clustering algorithm and successfully interpreting the results. In applying the unsupervised clustering, we have the flexibility of selecting both the window
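
    A condensed, hedged sketch of the described pipeline (patches -> dictionary -> sparse codes -> k-means), with scikit-learn's mini-batch dictionary learner standing in for the Hebbian rule; the patch size, number of atoms, number of clusters and per-patch preconditioning are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    def cluster_landcover(image, patch=5, n_atoms=128, n_clusters=8):
        """image: (H, W, 8) multispectral array (e.g., WorldView-2 bands)."""
        patches = extract_patches_2d(image, (patch, patch), max_patches=50000, random_state=0)
        X = patches.reshape(len(patches), -1).astype(float)
        X -= X.mean(axis=1, keepdims=True)                  # simple per-patch preconditioning
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                                         transform_n_nonzero_coefs=8, random_state=0).fit(X)
        codes = dl.transform(X)                             # sparse spectral-textural features
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(codes)
    ```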

  19. Image resolution enhancement using edge extraction and sparse representation in wavelet domain for real-time application

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Chavez-Roman, Herminio; Gonzalez-Huitron, Victor

    2014-05-01

    The paper presents the design and hardware implementation of a novel framework for image resolution enhancement employing the wavelet domain. The principal idea of resolution enhancement consists of using an edge-preservation procedure and mutual interpolation between the input low-resolution (LR) image and the HF sub-band images obtained via the Discrete Wavelet Transform (DWT). The LR image is used in the sparse representation for the resolution-enhancement process, employing 1-D interpolation in a set of angular directions; the new samples are then computed to estimate the missing samples, and final pixel values are obtained via Lanczos interpolation. To preserve more edge information, additional edge extraction in the HF sub-bands is performed on the DWT decomposition of the input image. The difference between the LL sub-band image and the LR input image is used to correct the HF components, generating a significantly sharper reconstructed image. All sub-band images are used to generate the new HR image by applying the inverse DWT (IDWT). Additionally, the novel framework employs a denoising procedure using Non-Local Means on the input LR image. An efficiency analysis of the designed filter and other state-of-the-art filters has been performed on the Texas Instruments DSP TMS320DM648 through MATLAB's Simulink module and on a video card (NVIDIA® Quadro® K2000), showing that the novel SR procedure can be used in real-time processing applications. Experimental results have confirmed that the implemented framework outperforms existing SR algorithms in terms of objective criteria (PSNR, MAE and SSIM) as well as in subjective perception, justifying better image resolution.
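
    A stripped-down sketch of the wavelet-domain enhancement idea (interpolated HF sub-bands recombined with the LR input as the low-frequency band through the inverse DWT); the sparse directional interpolation, edge-extraction correction and NLM denoising stages of the paper are omitted, and the wavelet and interpolation order are assumptions.

    ```python
    import pywt
    from scipy.ndimage import zoom

    def dwt_super_resolve(lr, wavelet="db1"):
        """lr: 2-D grayscale low-resolution image with even dimensions; returns a 2x larger image."""
        LL, (LH, HL, HH) = pywt.dwt2(lr, wavelet)
        up = lambda band: zoom(band, 2, order=3)            # bicubic upsampling of the HF sub-bands
        # Use the LR image itself as the low-frequency band; its difference from LL could be
        # used to further correct the HF bands, as described in the record.
        return pywt.idwt2((lr.astype(float), (up(LH), up(HL), up(HH))), wavelet)
    ```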

  20. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    PubMed

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

    Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on the parametric distribution of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of the shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details of the input liver shape. The homotopy-based SSC had high computational efficiency, and its runtime increased very slowly as the repository's capacity and the number of vertices grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method cost only about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
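
    A small hedged sketch of the sparse shape composition step, using scikit-learn's LassoLars (whose LARS solver is itself a homotopy method for the L1 problem) as a stand-in for the paper's solver; shape alignment, the on-line homotopy update and the refinement steps are omitted, and the penalty value is an assumption.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoLars

    def ssc_prior(shape_repo, input_shape, lam=0.01):
        """shape_repo: (3*n_vertices, n_shapes) aligned repository shapes in columns;
        input_shape: (3*n_vertices,) vectorized input shape."""
        model = LassoLars(alpha=lam, fit_intercept=False)
        w = model.fit(shape_repo, input_shape).coef_        # sparse combination weights
        return shape_repo @ w                               # refined shape prior
    ```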

  1. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Accounting for the instrumental direction-dependent effects (DDE) of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results: We show that i) sparse reconstruction performs as well as CLEAN in recovering the flux of point sources; ii) it performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) it provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and improved, realistic structures of extended sources (in simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  2. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment

    PubMed Central

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain-computer interface (BCI) applications, and they have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods have deficiencies in real-time performance, generalization ability and dependence on labeled samples in the analysis of EEG signals. This mini-review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
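
    For orientation, the canonical SRC rule the review refers to can be sketched as follows: a test feature vector is sparsely coded over a dictionary whose columns are training samples and assigned to the class whose atoms give the smallest reconstruction residual. The Lasso solver and penalty are stand-ins for whichever ℓ1 solver a given study uses.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, y, lam=0.01):
        """D: (d, n) training samples in columns (assumed unit-normalized); labels: (n,); y: (d,) test sample."""
        coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        x = coder.fit(D, y).coef_                           # sparse code of the test sample
        residuals = {}
        for c in np.unique(labels):
            idx = labels == c
            residuals[c] = np.linalg.norm(y - D[:, idx] @ x[idx])
        return min(residuals, key=residuals.get)            # class with the smallest residual
    ```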

  3. Sex Education Representations in Spanish Combined Biology and Geology Textbooks

    NASA Astrophysics Data System (ADS)

    García-Cabeza, Belén; Sánchez-Bello, Ana

    2013-07-01

    Sex education is principally dealt with as part of the combined subject of Biology and Geology in the Spanish school curriculum. Teachers of this subject are not specifically trained to teach sex education, and thus the contents of their assigned textbooks are the main source of information available to them in this field. The main goal of this study was to determine what information Biology and Geology textbooks provide with regard to sex education and the vision of sexuality they give, but above all to reveal which perspectives of sex education they legitimise and which they silence. We analysed the textbooks in question by interpreting both visual and textual representations, as a means of investigating the nature of the discourse on sex education. With this aim, we used a qualitative methodology based on content analysis. The main analytical tool was an in-house grid constructed to allow us to analyse the visual and textual representations. Our analysis of the combined Biology and Geology textbooks for Secondary Year 3 revealed a tendency to reproduce models of sex education framed within the more traditional discourses. Moreover, the results suggested that most of the sample chosen for this study takes a superficial, incomplete, incorrect or biased approach to sex education.

  4. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    Epilepsy is among the most common and frequently occurring neurological disorders, and the main method used for its diagnosis is electroencephalogram (EEG) signal analysis. Due to the length of EEG recordings, EEG signal analysis is quite time-consuming when processed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to classify epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  5. Sparse representation of plane wave response matrices for convex targets using local solution modes with band-limited excitations

    NASA Astrophysics Data System (ADS)

    Adams, R. J.; Wang, G.; Canning, F. X.; Davis, B. A.

    2006-12-01

    A procedure is outlined for determining compressed representations of the plane wave response matrix (P matrix) for transverse magnetic scattering with respect to the z axis from convex cylinders. The method is based on the determination of band-limited spectral modes that excite spatially localized solutions to the wave equation and satisfy global boundary conditions. Numerical examples indicate that the proposed method provides a representation of the P matrix with reduced computational complexity.

  6. Cortical representation of the combination of monaural and binaural unmasking.

    PubMed

    Uppenkamp, Stefan; Uhlig, Christian H; Verhey, Jesko L

    2013-01-01

    The audibility of a target tone is improved by introducing either amplitude modulations that are coherent across different frequency channels of the masker (comodulation masking release, CMR) or interaural phase differences that are different for target and masker (binaural masking-level difference, BMLD). Although the two effects are likely to be based on different processing strategies, they both result in improved figure-background decomposition for a target-in-noise situation. In this study, we analyzed the combination of CMR and BMLD for a target tone in a masker with six 48-Hz-wide noise bands, distributed over a wide frequency range from 216 Hz to 2.78 kHz. Psychoacoustical detection thresholds for the tones in noise were determined for two masker conditions (comodulated or unmodulated bands) and two interaural phase differences of the target tone (0 or 180°). The mean results indicate that the effects of unmasking add independently. The lowest thresholds are found for the dichotic signal embedded in a modulated masker with an overall threshold difference of about 16 dB compared to the unmodulated condition with no binaural cues. Based on the psychoacoustic results, a set of 12 signal-masker configurations was selected individually to explore the representation of the audibility of the test tone in brain activation maps by means of auditory functional MR imaging. The comparison of the results for the combination of CMR and BMLD with the results for the separate effects indicates a large overlap of the activated brain regions, where a largely extended area is activated, covering primary auditory cortex and adjacent regions. The result is in agreement with previous fMRI studies on auditory masking, identifying specific regions in the auditory cortex representing a change of the audibility of a target tone in a noise masker, irrespective of the overall sound pressure level of the stimulus. PMID:23716250

  7. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to refine the prediction. Also, a patch-selection-based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures.

  8. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.

    PubMed

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S; Lin, Weili; Shen, Dinggang

    2016-01-21

    Positron emission tomography (PET) has been widely used in clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to refine the prediction. Also, a patch-selection-based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. PMID:26732849
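
    A simplified, hedged sketch of the underlying coupled-dictionary idea (not the authors' m-SR pipeline): sparse codes estimated from low-dose(+MR) patches are reused to synthesize standard-dose patches. The paired training patches, atom count and sparsity level are illustrative assumptions, and the incremental mapping/refinement loop is omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

    def train_coupled_dictionaries(P_low, P_std, n_atoms=256):
        """P_low, P_std: (n_patches, d) arrays of vectorized, paired training patches."""
        joint = np.hstack([P_low, P_std])
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0).fit(joint)
        D_low = dl.components_[:, :P_low.shape[1]]
        D_std = dl.components_[:, P_low.shape[1]:]
        scale = np.linalg.norm(D_low, axis=1, keepdims=True) + 1e-12
        return D_low / scale, D_std / scale                 # unit-norm low-dose atoms, codes stay consistent

    def predict_patches(P_low_test, D_low, D_std, k=5):
        coder = SparseCoder(dictionary=D_low, transform_algorithm="omp",
                            transform_n_nonzero_coefs=k)
        codes = coder.transform(P_low_test)                 # sparse codes from low-dose patches
        return codes @ D_std                                # predicted standard-dose patches
    ```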

  9. Sparse-view computed tomography image reconstruction via a combination of L(1) and SL(0) regularization.

    PubMed

    Qi, Hongliang; Chen, Zijia; Guo, Jingyu; Zhou, Linghong

    2015-01-01

    Low-dose computed tomography reconstruction is an important issue in the medical imaging domain. Sparse-view scanning has been widely studied as a potential strategy. The compressed sensing (CS) method has shown great potential to reconstruct high-quality CT images from sparse-view projection data. Nonetheless, low-contrast structures tend to be blurred by total variation (TV, the L1-norm of the gradient image) regularization. Moreover, TV will produce blocky effects on smooth and edge regions. To overcome this limitation, this study proposes an iterative image reconstruction algorithm combining L1 regularization and smoothed L0 (SL0) regularization. SL0 is a smooth approximation of the L0 norm and can solve the problem of the L0 norm being sensitive to noise. To evaluate the proposed method, both qualitative and quantitative studies were conducted on a digital Shepp-Logan phantom and a real head phantom. Experimental comparative results have indicated that the proposed L1/SL0-POCS algorithm can effectively suppress noise and artifacts, as well as preserve more structural information compared to other existing methods. PMID:26405900

  10. Comparison of Support-Vector Machine and Sparse Representation Using a Modified Rule-Based Method for Automated Myocardial Ischemia Detection

    PubMed Central

    Tseng, Yi-Li; Lin, Keng-Sheng; Jaw, Fu-Shan

    2016-01-01

    An automatic method is presented for detecting myocardial ischemia, which can be considered an early symptom of acute coronary events. Myocardial ischemia commonly manifests as ST- and T-wave changes on ECG signals. The methods in this study are proposed to detect abnormal ECG beats using knowledge-based features and classification methods. A novel classification method, sparse representation-based classification (SRC), is employed to improve the performance of the existing algorithms. A comparison was made between two classification methods, SRC and support-vector machine (SVM), using rule-based vectors as the input feature space. The two methods are evaluated quantitatively to validate their performance. The results of the SRC method combined with rule-based features demonstrate higher sensitivity than those of SVM. However, specificity and precision are a trade-off. Moreover, the SRC method is less dependent on the selection of rule-based features and can achieve high performance using fewer features. The overall performances of the two methods proposed in this study are better than those of previous methods. PMID:26925158

  11. Comparison of Support-Vector Machine and Sparse Representation Using a Modified Rule-Based Method for Automated Myocardial Ischemia Detection.

    PubMed

    Tseng, Yi-Li; Lin, Keng-Sheng; Jaw, Fu-Shan

    2016-01-01

    An automatic method is presented for detecting myocardial ischemia, which can be considered an early symptom of acute coronary events. Myocardial ischemia commonly manifests as ST- and T-wave changes on ECG signals. The methods in this study are proposed to detect abnormal ECG beats using knowledge-based features and classification methods. A novel classification method, sparse representation-based classification (SRC), is employed to improve the performance of the existing algorithms. A comparison was made between two classification methods, SRC and support-vector machine (SVM), using rule-based vectors as the input feature space. The two methods are evaluated quantitatively to validate their performance. The results of the SRC method combined with rule-based features demonstrate higher sensitivity than those of SVM. However, specificity and precision are a trade-off. Moreover, the SRC method is less dependent on the selection of rule-based features and can achieve high performance using fewer features. The overall performances of the two methods proposed in this study are better than those of previous methods. PMID:26925158

  12. Image fusion using sparse overcomplete feature dictionaries

    SciTech Connect

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  13. Evaluating coastal sea surface heights based on a novel sub-waveform approach using sparse representation and conditional random fields

    NASA Astrophysics Data System (ADS)

    Uebbing, Bernd; Roscher, Ribana; Kusche, Jürgen

    2016-04-01

    Satellite radar altimeters allow global monitoring of mean sea level changes over the last two decades. However, coastal regions are less well observed due to influences on the returned signal energy by land located inside the altimeter footprint. The altimeter emits a radar pulse, which is reflected at the nadir surface, and measures the two-way travel time as well as the returned energy as a function of time, resulting in a return waveform. Over the open ocean the waveform shape corresponds to a theoretical model which can be used to infer information on range corrections, significant wave height or wind speed. However, in coastal areas the shape of the waveform is significantly influenced by return signals from land located in the altimeter footprint, leading to peaks which tend to bias the estimated parameters. Recently, several approaches dealing with this problem have been published, including utilizing only parts of the waveform (sub-waveforms), estimating the parameters in two steps or estimating additional peak parameters. We present a new approach to estimating sub-waveforms using conditional random fields (CRF) based on spatio-temporal waveform information. The CRF piece-wise approximates the measured waveforms based on a pre-derived dictionary of theoretical waveforms for various combinations of the geophysical parameters; neighboring range gates are likely to be assigned to the same underlying sub-waveform model. Depending on the choice of hyperparameters in the CRF estimation, the classification into sub-waveforms can be either finer or coarser, resulting in multiple sub-waveform hypotheses. After the sub-waveforms have been detected, existing retracking algorithms can be applied to derive water heights or other desired geophysical parameters from particular sub-waveforms. To identify the optimal heights from the multiple hypotheses, instead of utilizing a known reference height, we apply a Dijkstra-algorithm to find the "shortest path" of all

  14. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images of the same scene, captured via various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images resulting from the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation. PMID:26974648
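
    A generic, hedged sketch of sparse-coding-based patch fusion (a max-activity coefficient rule over a shared dictionary), standing in for the paper's NL_SK_SVD dictionary and simultaneous OMP; the pre-learned dictionary D, patch size and sparsity level are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import SparseCoder
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

    def fuse_images(img_a, img_b, D, patch=8, k=5):
        """img_a, img_b: registered grayscale source images; D: (n_atoms, patch*patch) dictionary."""
        Pa = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch)
        Pb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch)
        coder = SparseCoder(dictionary=D, transform_algorithm="omp", transform_n_nonzero_coefs=k)
        Ca, Cb = coder.transform(Pa), coder.transform(Pb)
        # keep, per patch, the code with the larger l1 activity (a common fusion rule)
        choose_a = np.abs(Ca).sum(axis=1) >= np.abs(Cb).sum(axis=1)
        fused_codes = np.where(choose_a[:, None], Ca, Cb)
        fused_patches = (fused_codes @ D).reshape(-1, patch, patch)
        return reconstruct_from_patches_2d(fused_patches, img_a.shape)
    ```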

  15. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-01-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel and effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632

  16. Decoupling sparse coding of SIFT descriptors for large-scale visual recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Theiler, James; Chartrand, Rick; Kenyon, Garrett; Brumby, Steven P.

    2013-05-01

    In recent years, sparse coding has drawn considerable research attention in developing feature representations for visual recognition problems. In this paper, we devise sparse coding algorithms to learn a dictionary of basis functions from Scale-Invariant Feature Transform (SIFT) descriptors extracted from images. The learned dictionary is used to code SIFT-based inputs for the feature representation that is further pooled via spatial pyramid matching kernels and fed into a Support Vector Machine (SVM) for object classification on the large-scale ImageNet dataset. We investigate the advantage of SIFT-based sparse coding approach by combining different dictionary learning and sparse representation algorithms. Our results also include favorable performance on different subsets of the ImageNet database.

  17. Variable Selection for Sparse High-Dimensional Nonlinear Regression Models by Combining Nonnegative Garrote and Sure Independence Screening

    PubMed Central

    Xue, Hongqi; Wu, Yichao; Wu, Hulin

    2013-01-01

    In many regression problems, the relations between the covariates and the response may be nonlinear. Motivated by the application of reconstructing a gene regulatory network, we consider a sparse high-dimensional additive model with the additive components being some known nonlinear functions with unknown parameters. To identify the subset of important covariates, we propose a new method for simultaneous variable selection and parameter estimation by iteratively combining a large-scale variable screening (the nonlinear independence screening, NLIS) and a moderate-scale model selection (the nonnegative garrote, NNG) for the nonlinear additive regressions. We have shown that the NLIS procedure possesses the sure screening property and it is able to handle problems with non-polynomial dimensionality; and for finite dimension problems, the NNG for the nonlinear additive regressions has selection consistency for the unimportant covariates and also estimation consistency for the parameter estimates of the important covariates. The proposed method is applied to simulated data and a real data example for identifying gene regulations to illustrate its numerical performance. PMID:25170239
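
    As a hedged, linear-model toy version of the two-stage idea (the paper itself treats nonlinear additive components), the sketch below screens covariates by marginal correlation and then applies a nonnegative garrote, implemented as a positive Lasso on OLS-scaled covariates; the screening size and penalty value are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sis_nng(X, y, n_keep=20, lam=0.1):
        """X: (n, p) standardized covariates; y: (n,) response."""
        corr = np.abs(X.T @ (y - y.mean())) / len(y)        # marginal association (screening step)
        keep = np.argsort(corr)[-n_keep:]
        Xs = X[:, keep]
        beta_ols, *_ = np.linalg.lstsq(Xs, y, rcond=None)    # initial estimates on the screened set
        Z = Xs * beta_ols                                    # garrote design: z_j = x_j * beta_j
        c = Lasso(alpha=lam, positive=True, fit_intercept=False).fit(Z, y).coef_
        return keep, c * beta_ols                            # nonnegative shrinkage of kept covariates
    ```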

  18. K-t sparse GROWL: sequential combination of partially parallel imaging and compressed sensing in k-t space using flexible virtual coil.

    PubMed

    Huang, Feng; Lin, Wei; Duensing, George R; Reykowski, Arne

    2012-09-01

    Because dynamic MR images are often sparse in the x-f domain, k-t space compressed sensing (k-t CS) has been proposed for highly accelerated dynamic MRI. When a multichannel coil is used for acquisition, the combination of partially parallel imaging and k-t CS can improve the accuracy of reconstruction. In this work, an efficient combination method is presented, called k-t sparse Generalized GRAPPA fOr Wider readout Line. One fundamental aspect of this work is to apply partially parallel imaging and k-t CS sequentially. A partially parallel imaging technique using a Generalized GRAPPA fOr Wider readout Line operator is adopted before k-t CS reconstruction to decrease the reduction factor in a computationally efficient way while preserving temporal resolution. Channel combination and relative sensitivity maps are used in the flexible virtual coil scheme to alleviate the k-t CS computational load with an increasing number of channels. Using k-t FOCUSS as a specific example of k-t CS, experiments with Cartesian and radial data sets demonstrate that k-t sparse Generalized GRAPPA fOr Wider readout Line can produce results with a two times lower root-mean-square error than conventional channel-by-channel k-t CS while consuming up to seven times less computational cost. PMID:22162191

  19. Combining rainfall data from rain gauges and TRMM in hydrological modelling of Laotian data-sparse basins

    NASA Astrophysics Data System (ADS)

    Liu, Xing; Liu, Fa Ming; Wang, Xiao Xia; Li, Xiao Dong; Fan, Yu Yan; Cai, Shi Xiang; Ao, Tian Qi

    2015-09-01

    At present, streamflow prediction in the data-sparse basins of South East Asia is a challenging task due to the absence of reliable ground-based rainfall information, while satellite-based rainfall estimates are immensely useful for improving our understanding of the spatio-temporal variation of rainfall, particularly in data-sparse basins. In this study the TRMM 3B42 V7 product and its bias-corrected data were, respectively, used to drive the physically based distributed hydrological model BTOPMC to perform daily streamflow simulations in the Nam Khan and Nam Like River basins during the years from 2000 to 2004, so as to investigate the potential use of TRMM data in complementing rain gauge data in hydrological modelling of data-sparse basins. The results show that, although larger differences exist in the high- and low-streamflow processes, the daily simulations fed with TRMM precipitation data could basically reflect the daily streamflow processes at the four stations and capture the time to peak. Furthermore, the calibrated parameters in the Nam Khan River basin are more suitable than those in the Nam Like River basin. A comparison of the two precipitation datasets indicates that the integration of TRMM precipitation data and rain gauge data has promising prospects for hydrological process simulation in data-sparse basins.

  20. Building Hierarchical Representations for Oracle Character and Sketch Recognition.

    PubMed

    Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui

    2016-01-01

    In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems encountered when using these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performance than either approach alone. This solution has beaten humans at recognizing general sketches. PMID:26571529

  1. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach

    PubMed Central

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated into a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS is dynamically modified by experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., a selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, as after tool use. This prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments in both simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool-use, which is

  2. Extending peripersonal space representation without tool-use: evidence from a combined behavioral-computational approach.

    PubMed

    Serino, Andrea; Canzoneri, Elisa; Marzolla, Marilena; di Pellegrino, Giuseppe; Magosso, Elisa

    2015-01-01

    Stimuli from different sensory modalities occurring on or close to the body are integrated into a multisensory representation of the space surrounding the body, i.e., peripersonal space (PPS). PPS is dynamically modified by experience, e.g., it extends after using a tool to reach far objects. However, the neural mechanism underlying PPS plasticity after tool use is largely unknown. Here we use a combined computational-behavioral approach to propose and test a possible mechanism accounting for PPS extension. We first present a neural network model simulating audio-tactile representation in the PPS around one hand. Simulation experiments showed that our model reproduced the main property of PPS neurons, i.e., a selective multisensory response for stimuli occurring close to the hand. We used the neural network model to simulate the effects of tool-use training. In terms of sensory inputs, tool use was conceptualized as a concurrent tactile stimulation from the hand, due to holding the tool, and an auditory stimulation from the far space, due to tool-mediated action. Results showed that after exposure to those inputs, PPS neurons responded also to multisensory stimuli far from the hand. The model thus suggests that synchronous pairing of tactile hand stimulation and auditory stimulation from the far space is sufficient to extend PPS, as after tool use. This prediction was confirmed by a behavioral experiment, where we used an audio-tactile interaction paradigm to measure the boundaries of PPS representation. We found that PPS extended after synchronous tactile-hand stimulation and auditory-far stimulation in a group of healthy volunteers. Control experiments in both simulation and behavioral settings showed that the same amount of tactile and auditory inputs administered out of synchrony did not change PPS representation. We conclude by proposing a simple, biologically plausible model to explain plasticity in PPS representation after tool-use, which is

  3. A combined representation method for use in band structure calculations. 1: Method

    NASA Technical Reports Server (NTRS)

    Friedli, C.; Ashcroft, N. W.

    1975-01-01

    A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.

  4. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that small spatial details are difficult to represent and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the full information content of all these measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic principles of strategies and concepts for the generation of MRPs will be shown. Examples of regional gravity field determination are presented.

  5. Sparse approximation problem: how rapid simulated annealing succeeds and fails

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-03-01

    Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
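
    As an illustration of this setting (not the authors' exact algorithm, cooling schedule, or analysis), the sketch below plants a k-sparse vector, generates weakly noisy linear measurements, and runs a simple simulated annealing over support sets, proposing single in/out swaps and scoring each support by its least-squares fit; all sizes and the schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 40, 25, 4                               # toy signal dimension, measurements, planted sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support_true = rng.choice(n, size=k, replace=False)
x_true[support_true] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)    # weak noise

def distortion(support):
    """Least-squares residual when only the given support may be non-zero."""
    As = A[:, sorted(support)]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)
    return np.sum((y - As @ coef) ** 2)

support = set(rng.choice(n, size=k, replace=False))
energy = distortion(support)
for step in range(3000):
    beta = 0.5 + 0.01 * step                      # assumed rapid annealing schedule
    out = rng.choice(sorted(support))
    inn = rng.choice(sorted(set(range(n)) - support))
    candidate = (support - {out}) | {inn}
    e_new = distortion(candidate)
    if e_new < energy or rng.random() < np.exp(-beta * (e_new - energy)):
        support, energy = candidate, e_new

print("planted support:", sorted(support_true), "found:", sorted(support), "distortion:", energy)
```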

  6. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes. PMID:26353352

  7. Effects of damage location and size on sparse representation of guided-waves for damage diagnosis of pipelines under varying temperature

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2015-04-01

    In spite of their many advantages, real-world application of guided-waves for structural health monitoring (SHM) of pipelines is still quite limited. The challenges can be discussed under three headings: (1) multiple modes, (2) multipath reflections, and (3) sensitivity to environmental and operational conditions (EOCs). These challenges are reviewed in the authors' previous work. This paper is part of a study whose objective is to overcome these challenges for damage diagnosis of pipes while addressing the limitations of current approaches, that is, to develop methods that simplify the signal while retaining damage information, perform well as EOCs vary, and minimize the use of transducers. In this paper, a supervised method is proposed to extract a sparse subset of the ultrasonic guided-wave signals that contains optimal damage information for detection purposes. That is, a discriminant vector is calculated so that the projections of undamaged and damaged pipes on this vector are separated. In the training stage, data are recorded from an intact pipe and from a pipe with an artificial structural abnormality (to simulate any variation from the intact condition). During the monitoring stage, test signals are projected on the discriminant vector, and these projections are used as damage-sensitive features for detection purposes. Being a supervised method, factors such as EOC variations, and differences in the characteristics of the structural abnormality in training and test data, may affect the detection performance. This paper reports the experiments investigating the extent to which differences in damage size and damage location, as well as temperature, can influence the discriminatory power of the extracted damage-sensitive features. The results suggest that, for practical ranges of monitoring and damage sizes of interest, the proposed method has low sensitivity to such training factors. High detection performances are obtained for temperature differences up to 14
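
    A minimal sketch of the general idea, using a Fisher-style discriminant direction on hypothetical feature vectors rather than real guided-wave records (the authors' discriminant vector may be computed differently): signals from the intact and abnormal states are projected onto a single direction, and the projection of a test signal is used as the damage-sensitive feature.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 50                                                     # hypothetical feature length per signal
X_intact = rng.standard_normal((30, d))                    # training signals, intact pipe
X_damage = rng.standard_normal((30, d)) + 0.8              # training signals, pipe with an abnormality

# Fisher discriminant direction: w is proportional to Sw^{-1} (mu_damage - mu_intact)
mu0, mu1 = X_intact.mean(axis=0), X_damage.mean(axis=0)
Sw = np.cov(X_intact, rowvar=False) + np.cov(X_damage, rowvar=False)
w = np.linalg.solve(Sw + 1e-6 * np.eye(d), mu1 - mu0)
w /= np.linalg.norm(w)

threshold = 0.5 * ((X_intact @ w).mean() + (X_damage @ w).mean())

def detect(signal):
    """Project a test signal on the discriminant vector and compare with the threshold."""
    return signal @ w > threshold

test_signal = rng.standard_normal(d) + 0.8                 # simulated 'damaged' test signal
print("damage detected:", bool(detect(test_signal)))
```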

  8. Faster learning algorithm convergence utilizing a combined time-frequency representation as basis

    NASA Astrophysics Data System (ADS)

    Hendriks, A. J.; Uys, Hermann; du Plessis, Anton; Steenkamp, Christine

    2013-10-01

    Light is capable of directly manipulating and probing molecular dynamics at its most fundamental level. One versatile approach to influencing such dynamics exploits temporally shaped femtosecond laser pulses. Oftentimes the control mechanisms necessary to induce a desired reaction cannot be determined theoretically a priori. However, under certain circumstances these mechanisms can be extracted experimentally through trial and error. This can be implemented systematically by using an evolutionary learning algorithm (LA) with closed-loop feedback. Most frequently, pulse shaping algorithms operate within either the time or the frequency domain, but seldom both. This may influence the physical insight gained, due to dependence on the search basis, as well as the speed at which the algorithm converges. As an alternative to the Fourier-domain basis, we make use of a combined time-frequency representation known as the von Neumann basis, in which temporal and spectral effects are observed at the same time. We report on the numerical and experimental results obtained using the Fourier as well as the von Neumann basis to maximize the second harmonic generation (SHG) output in a non-linear crystal. We show that searches in the von Neumann representation converge faster than searches in the Fourier domain. We also show that a reduced parameter space is required for the Fourier domain to converge efficiently, but not for the von Neumann domain. Finally, we show that the highest SHG signal is not only a consequence of the shortest pulse, but that the pulse central frequency also plays a key role. Taken together these results suggest that the von Neumann basis can be used as a viable alternative to the Fourier domain, with improved convergence time and potentially deeper physical insight.

  9. Gravitational microlensing - Powerful combination of ray-shooting and parametric representation of caustics

    NASA Technical Reports Server (NTRS)

    Wambsganss, J.; Witt, H. J.; Schneider, P.

    1992-01-01

    We present a combination of two very different methods for numerically calculating the effects of gravitational microlensing: the backward-ray-tracing that results in two-dimensional magnification patterns, and the parametric representation of caustic lines; they are in a way complementary to each other. The combination of these methods is much more powerful than the sum of its parts. It allows one to determine the total magnification and the number of microimages as a function of source position. The mean number of microimages is calculated analytically and compared to the numerical results. The peaks in the lightcurves, as obtained from one-dimensional tracks through the magnification pattern, can now be divided into two groups: those which correspond to a source crossing a caustic, and those which are due to sources passing outside cusps. We determine the frequencies of those two types of events as a function of the surface mass density, and the probability distributions of their magnitudes. We find that for low surface mass density as many as 40 percent of all events in a lightcurve are not due to caustic crossings, but rather due to passings outside cusps.

  10. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
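
    The compressibility argument can be visualized on a tiny example (a sketch only: it forms the exact inverse of a 1-D Laplacian, which a practical preconditioner would never do, and uses a Haar transform as the wavelet basis): the inverse is dense in the standard basis, but most of its wavelet-domain entries are negligible and can be thresholded away.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                      # scaling (smooth) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])     # finest-level detail rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 64
# 1-D discrete Laplacian, a typical elliptic-PDE matrix; its inverse is dense but piecewise smooth
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

W = haar_matrix(n)
B = W @ np.linalg.inv(A) @ W.T                        # the inverse represented in the wavelet basis
tol = 1e-2 * np.abs(B).max()
B_sparse = np.where(np.abs(B) > tol, B, 0.0)          # drop small wavelet-domain entries
M = W.T @ B_sparse @ W                                # sparse-in-wavelet-basis approximate inverse

print("kept entries:", np.count_nonzero(B_sparse), "of", n * n)
print("|I - M A| (Frobenius):", np.linalg.norm(np.eye(n) - M @ A))
```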

  11. DNA binding protein identification by combining pseudo amino acid composition and profile-based protein representation

    PubMed Central

    Liu, Bin; Wang, Shanyi; Wang, Xiaolong

    2015-01-01

    DNA-binding proteins play an important role in most cellular processes. Therefore, it is necessary to develop an efficient predictor for identifying DNA-binding proteins only based on the sequence information of proteins. The bottleneck for constructing a useful predictor is to find suitable features capturing the characteristics of DNA binding proteins. We applied PseAAC to DNA binding protein identification, and PseAAC was further improved by incorporating the evolutionary information by using profile-based protein representation. Finally, combined with Support Vector Machines (SVMs), a predictor called iDNAPro-PseAAC was proposed. Experimental results on an updated benchmark dataset showed that iDNAPro-PseAAC outperformed some state-of-the-art approaches, and it can achieve stable performance on an independent dataset. By using an ensemble learning approach to incorporate more negative samples (non-DNA binding proteins) in the training process, the performance of iDNAPro-PseAAC was further improved. The web server of iDNAPro-PseAAC is available at http://bioinformatics.hitsz.edu.cn/iDNAPro-PseAAC/. PMID:26482832

  12. DNA binding protein identification by combining pseudo amino acid composition and profile-based protein representation

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Wang, Shanyi; Wang, Xiaolong

    2015-10-01

    DNA-binding proteins play an important role in most cellular processes. Therefore, it is necessary to develop an efficient predictor for identifying DNA-binding proteins only based on the sequence information of proteins. The bottleneck for constructing a useful predictor is to find suitable features capturing the characteristics of DNA binding proteins. We applied PseAAC to DNA binding protein identification, and PseAAC was further improved by incorporating the evolutionary information by using profile-based protein representation. Finally, combined with Support Vector Machines (SVMs), a predictor called iDNAPro-PseAAC was proposed. Experimental results on an updated benchmark dataset showed that iDNAPro-PseAAC outperformed some state-of-the-art approaches, and it can achieve stable performance on an independent dataset. By using an ensemble learning approach to incorporate more negative samples (non-DNA binding proteins) in the training process, the performance of iDNAPro-PseAAC was further improved. The web server of iDNAPro-PseAAC is available at http://bioinformatics.hitsz.edu.cn/iDNAPro-PseAAC/.

  13. Timing of emotion representation in right and left occipital region: Evidence from combined TMS-EEG.

    PubMed

    Mattavelli, Giulia; Rosanova, Mario; Casali, Adenauer G; Papagno, Costanza; Romero Lauro, Leonor J

    2016-07-01

    Neuroimaging and electrophysiological studies provide evidence of hemispheric differences in processing faces and, in particular, emotional expressions. However, the timing of emotion representation in the right and left hemisphere is still unclear. Transcranial magnetic stimulation combined with electroencephalography (TMS-EEG) was used to explore cortical responsiveness during behavioural tasks requiring processing of either identity or expression of faces. Single-pulse TMS was delivered 100ms after face onset over the medial prefrontal cortex (mPFC) while continuous EEG was recorded using a 60-channel TMS-compatible amplifier; right premotor cortex (rPMC) was also stimulated as control site. The same face stimuli with neutral, happy and fearful expressions were presented in separate blocks and participants were asked to complete either a facial identity or facial emotion matching task. Analyses performed on posterior face specific EEG components revealed that mPFC-TMS reduced the P1-N1 component. In particular, only when an explicit expression processing was required, mPFC-TMS interacted with emotion type in relation to hemispheric side at different timing; the first P1-N1 component was affected in the right hemisphere whereas the later N1-P2 component was modulated in the left hemisphere. These findings support the hypothesis that the frontal cortex exerts an early influence on the occipital cortex during face processing and suggest a different timing of the right and left hemisphere involvement in emotion discrimination. PMID:27155161

  14. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    NASA Astrophysics Data System (ADS)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining the gray-level intensity and gradient information of the infrared target is extracted as the feature representation. Then, because the covariance descriptor lies on a non-Euclidean manifold, kernel sparse coding is used to handle this structure. We verify the efficacy of the proposed algorithm in terms of confusion matrices on real images consisting of seven categories of infrared vehicle targets.
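
    A minimal sketch of the covariance-descriptor step (the per-pixel feature set used here, intensity plus absolute gradients, is an assumption; the paper's exact features may differ): each pixel of a target chip is mapped to a small feature vector, the chip is summarized by the covariance of those vectors, and, because such matrices live on the manifold of symmetric positive-definite matrices, they are compared through a kernel (a log-Euclidean Gaussian kernel is used below as one common choice) before sparse coding.

```python
import numpy as np

def covariance_descriptor(patch):
    """Covariance of per-pixel features [intensity, |dI/dx|, |dI/dy|] over an image patch."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)
    feats = np.stack([patch.ravel(), np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)                        # 3x3 symmetric positive (semi-)definite matrix

def log_euclidean_kernel(C1, C2, gamma=0.1):
    """Gaussian kernel on the log-Euclidean distance between SPD matrices (assumed kernel choice)."""
    def logm(C):
        w, V = np.linalg.eigh(C + 1e-8 * np.eye(C.shape[0]))
        return (V * np.log(w)) @ V.T
    return np.exp(-gamma * np.sum((logm(C1) - logm(C2)) ** 2))

rng = np.random.default_rng(3)
chip_a, chip_b = rng.random((32, 32)), rng.random((32, 32))   # stand-ins for infrared target chips
Ca, Cb = covariance_descriptor(chip_a), covariance_descriptor(chip_b)
print("kernel similarity:", log_euclidean_kernel(Ca, Cb))
```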

  15. Evolutionary induction of sparse neural trees

    PubMed

    Zhang; Ohm; Muhlenbein

    1997-01-01

    This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest. PMID:10021759

  16. Combined-hyperbolic-inverse-power-representation of potential energy surfaces: A preliminary assessment for H_3 and HO_2

    NASA Astrophysics Data System (ADS)

    Varandas, A. J. C.

    2013-02-01

    The purpose is to fit an accurate smooth function of the many-body expansion type to a large multidimensional data set using a basis-set type method. By adopting a combined-hyperbolic-inverse-power-representation for the basis, the novel approach is tested in detail for the ground electronic state of the tri-hydrogen and hydroperoxyl systems, assuming that their potential energy surfaces are single-sheeted representable. It is also shown that the method can be easily applied to potential energy curves by considering molecular oxygen and the hydroxyl radical as prototypes.

  17. Combined-hyperbolic-inverse-power-representation of potential energy surfaces: a preliminary assessment for H3 and HO2.

    PubMed

    Varandas, A J C

    2013-02-01

    The purpose is to fit an accurate smooth function of the many-body expansion type to a large multidimensional data set using a basis-set type method. By adopting a combined-hyperbolic-inverse-power-representation for the basis, the novel approach is tested in detail for the ground electronic state of the tri-hydrogen and hydroperoxyl systems, assuming that their potential energy surfaces are single-sheeted representable. It is also shown that the method can be easily applied to potential energy curves by considering molecular oxygen and the hydroxyl radical as prototypes. PMID:23406111

  18. Combining precipitation data from observed and numerical models to forecast precipitation characteristics in sparsely-gauged watersheds: an application to the Amazon River basin.

    NASA Astrophysics Data System (ADS)

    Dwelle, M. C.; Ivanov, V. Y.; Berrocal, V.

    2014-12-01

    Forecasting rainfall in areas with sparse monitoring efforts is critical to making inferences about the health of ecosystems and built environments. Recent advances in scientific computing have allowed forecasting and climate models to increase their spatial and temporal resolution. Combined with observed point precipitation from monitoring stations, these models can be used to inform dynamic spatial statistical models for precipitation using methods from geostatistics and machine learning. To demonstrate the feasibility, process, and capabilities of these statistical models, we present a case study of two statistical models of precipitation for the Amazon River basin from 2003-2010 that can infer a spatial process at a point using areal data from numerical model output. We investigate the seasonality and accumulation of rainfall, and the occurrence of no-rainfall and large-rainfall events. These parameters are used since they provide valuable information on possible model biases when using climate models for forecasts of the future process of precipitation in the Amazon basin. This information can be vital for ecosystem, agriculture, and water-resource management. We use observed precipitation data from weather stations, three areal datasets derived from observed precipitation (CFSR, CMORPH-CRT, GPCC) and three climate model precipitation datasets from CMIP5 (MIROC4h, HadGEM2-CC, and GISS-E2H) to construct the models. The observational data in the model domain are sparse, with 195 stations in the approximately 7×10^6 square kilometers of the Amazon basin, and therefore the areal data are required to create a more robust model. The first model uses the method of Bayesian melding to combine and make inferences from the included data sets, and the second uses a regression model with spatially and temporally-varying coefficients. The models of precipitation are fitted using the areal products and a subset of the point data, while another subset of point data is held out for

  19. Inversion of magnetotelluric data in a sparse model domain

    NASA Astrophysics Data System (ADS)

    Nittinger, Christian G.; Becken, Michael

    2016-08-01

    The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least-squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multiresolution wavelet basis, but does not impose explicit structural penalties on the model as it is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the nonlinear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.
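
    The ℓ2-ℓ1 objective above, roughly minimize the ℓ2 data misfit ||d - F(Wc)||² plus λ||c||₁ over wavelet coefficients c, is commonly handled with iterative soft-thresholding; the sketch below shows that machinery on a toy linear forward operator, with an orthonormal DCT standing in for the multiresolution wavelet basis (the magnetotelluric forward problem is nonlinear and the authors' solver may differ).

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(4)
n, m = 128, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)       # toy linear forward operator

# piecewise-constant "conductivity" profile: sparse in a multiresolution-type basis
model = np.zeros(n); model[30:60] = 1.0; model[80:100] = -0.5
data = A @ model + 0.01 * rng.standard_normal(m)

synth = lambda c: idct(c, norm="ortho")            # coefficients -> model (stand-in for wavelet synthesis)
analyze = lambda x: dct(x, norm="ortho")           # adjoint / analysis transform

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient of 0.5*||A W c - d||^2
c = np.zeros(n)
for _ in range(500):                               # ISTA: gradient step on the misfit, then soft-threshold
    grad = analyze(A.T @ (A @ synth(c) - data))
    z = c - grad / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("non-zero coefficients:", np.count_nonzero(np.abs(c) > 1e-3), "of", n)
```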

  20. Inversion of magnetotelluric data in a sparse model domain

    NASA Astrophysics Data System (ADS)

    Nittinger, Christian G.; Becken, Michael

    2016-06-01

    The inversion of magnetotelluric data into subsurface electrical conductivity poses an ill-posed problem. Smoothing constraints are widely employed to estimate a regularized solution. Here, we present an alternative inversion scheme that estimates a sparse representation of the model in a wavelet basis. The objective of the inversion is to determine the few non-zero wavelet coefficients which are required to fit the data. This approach falls into the class of sparsity constrained inversion schemes and minimizes the combination of the data misfit in a least squares ℓ2 sense and of a model coefficient norm in an ℓ1 sense (ℓ2-ℓ1 minimization). The ℓ1 coefficient norm renders the solution sparse in a suitable representation such as the multi-resolution wavelet basis, but does not impose explicit structural penalties on the model as it is the case for ℓ2 regularization. The presented numerical algorithm solves the mixed ℓ2-ℓ1 norm minimization problem for the non-linear magnetotelluric inverse problem. We demonstrate the feasibility of our algorithm on synthetic 2-D MT data as well as on a real data example. We found that sparse models can be estimated by inversion and that the spatial distribution of non-vanishing coefficients indicates regions in the model which are resolved.

  1. Golden-Angle Radial Sparse Parallel MRI: Combination of Compressed Sensing, Parallel Imaging, and Golden-Angle Radial Sampling for Fast and Flexible Dynamic Volumetric MRI

    PubMed Central

    Feng, Li; Grimm, Robert; Block, Kai Tobias; Chandarana, Hersh; Kim, Sungheon; Xu, Jian; Axel, Leon; Sodickson, Daniel K.; Otazo, Ricardo

    2013-01-01

    Purpose To develop a fast and flexible free-breathing dynamic volumetric MRI technique, iterative Golden-angle RAdial Sparse Parallel MRI (iGRASP), that combines compressed sensing, parallel imaging, and golden-angle radial sampling. Methods Radial k-space data are acquired continuously using the golden-angle scheme and sorted into time series by grouping an arbitrary number of consecutive spokes into temporal frames. An iterative reconstruction procedure is then performed on the undersampled time series where joint multicoil sparsity is enforced by applying a total-variation constraint along the temporal dimension. Required coil-sensitivity profiles are obtained from the time-averaged data. Results iGRASP achieved higher acceleration capability than either parallel imaging or coil-by-coil compressed sensing alone. It enabled dynamic volumetric imaging with high spatial and temporal resolution for various clinical applications, including free-breathing dynamic contrast-enhanced imaging in the abdomen of both adult and pediatric patients, and in the breast and neck of adult patients. Conclusion The high performance and flexibility provided by iGRASP can improve clinical studies that require robustness to motion and simultaneous high spatial and temporal resolution. PMID:24142845
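
    The golden-angle sorting step is easy to illustrate (a sketch of the sampling bookkeeping only, with arbitrary numbers of spokes; it does not touch the iterative, total-variation-regularized reconstruction itself): consecutive radial spokes are acquired at multiples of roughly 111.25 degrees and grouped retrospectively into temporal frames of any chosen size.

```python
import numpy as np

GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0      # ~111.246 degrees

def golden_angle_frames(n_spokes, spokes_per_frame):
    """Group consecutively acquired golden-angle spokes into temporal frames."""
    angles = (np.arange(n_spokes) * GOLDEN_ANGLE_DEG) % 180.0
    n_frames = n_spokes // spokes_per_frame
    return angles[: n_frames * spokes_per_frame].reshape(n_frames, spokes_per_frame)

frames = golden_angle_frames(n_spokes=600, spokes_per_frame=21)
print(frames.shape)                      # (28, 21): 28 temporal frames of 21 spokes each
print(np.round(np.sort(frames[0]), 1))   # near-uniform angular coverage within a single frame
```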

  2. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with a conventional algorithm is very time and space (memory) consuming due to the extremely large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 datasets of real data are used to test the proposed method. Preliminary results have shown that the BSMC method can efficiently decrease the time and memory requirements of large-scale data processing.
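
    A minimal sketch of the preconditioned-conjugate-gradient piece using SciPy's sparse machinery (the block-based compression scheme itself is specific to the paper and is not reproduced; the toy normal matrix and the Jacobi preconditioner below are assumptions): the normal equations are kept sparse and solved iteratively instead of being factorized.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(5)
n = 2000
# toy sparse symmetric positive-definite "normal matrix" (a real BA normal matrix has block structure)
J = sp.random(3 * n, n, density=0.002, random_state=5, format="csr")
N = (J.T @ J + 1e-3 * sp.identity(n)).tocsr()
b = rng.standard_normal(n)

# Jacobi (diagonal) preconditioner as a simple stand-in for the paper's block-based scheme
d_inv = 1.0 / N.diagonal()
M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

x, info = cg(N, b, M=M, maxiter=2000)
print("converged:", info == 0, " residual norm:", np.linalg.norm(N @ x - b))
```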

  3. Wavefront reconstruction in phase-shifting interferometry via sparse coding of amplitude and absolute phase.

    PubMed

    Katkovnik, V; Bioucas-Dias, J

    2014-08-01

    Phase-shifting interferometry is a coherent optical method that combines high accuracy with high measurement speeds. This technique is therefore desirable in many applications such as the efficient industrial quality inspection process. However, despite its advantageous properties, the inference of the object amplitude and the phase, herein termed wavefront reconstruction, is not a trivial task owing to the Poissonian noise associated with the measurement process and to the 2π phase periodicity of the observation mechanism. In this paper, we formulate the wavefront reconstruction as an inverse problem, where the amplitude and the absolute phase are assumed to admit sparse linear representations in suitable sparsifying transforms (dictionaries). Sparse modeling is a form of regularization of inverse problems which, in the case of the absolute phase, is not available to the conventional wavefront reconstruction techniques, as only interferometric phase modulo-2π is considered therein. The developed sparse modeling of the absolute phase solves two different problems: accuracy of the interferometric (wrapped) phase reconstruction and simultaneous phase unwrapping. Based on this rationale, we introduce the sparse phase and amplitude reconstruction (SPAR) algorithm. SPAR takes into full consideration the Poissonian (photon counting) measurements and uses the data-adaptive block-matching 3D (BM3D) frames as a sparse representation for the amplitude and for the absolute phase. SPAR effectiveness is documented by comparing its performance with that of competitors in a series of experiments. PMID:25121537

  4. A combined model of sensory and cognitive representations underlying tonal expectations in music: from audio signals to behavior.

    PubMed

    Collins, Tom; Tillmann, Barbara; Barrett, Frederick S; Delbé, Charles; Janata, Petr

    2014-01-01

    Listeners' expectations for melodies and harmonies in tonal music are perhaps the most studied aspect of music cognition. Long debated has been whether faster response times (RTs) to more strongly primed events (in a music theoretic sense) are driven by sensory or cognitive mechanisms, such as repetition of sensory information or activation of cognitive schemata that reflect learned tonal knowledge, respectively. We analyzed over 300 stimuli from 7 priming experiments comprising a broad range of musical material, using a model that transforms raw audio signals through a series of plausible physiological and psychological representations spanning a sensory-cognitive continuum. We show that RTs are modeled, in part, by information in periodicity pitch distributions, chroma vectors, and activations of tonal space--a representation on a toroidal surface of the major/minor key relationships in Western tonal music. We show that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation. While tonal space variables explained more of the variation in RTs than did periodicity pitch variables, suggesting a greater contribution of cognitive influences to tonal expectation, a stepwise selection model contained variables from both representations and successfully explained the pattern of RTs across stimulus categories in 4 of the 7 experiments. The addition of closure--a cognitive representation of a specific syntactic relationship--succeeded in explaining results from all 7 experiments. We conclude that multiple representational stages along a sensory-cognitive continuum combine to shape tonal expectations in music. PMID:24490788

  5. Deformable segmentation via sparse shape representation.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2011-01-01

    Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak/misleading due to disease/artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak/misleading appearance cues. Owing to the less trustable appearance information, this method focuses on effective shape modeling with two contributions. First, a shape composition method is designed to incorporate shape prior on-the-fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates a more compact shape prior modeling and hence a more robust and efficient segmentation. Our deformable model is applied to two very diverse segmentation problems, liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies. PMID:21995060

  6. A sparse algorithm for the evaluation of the local energy in quantum Monte Carlo.

    PubMed

    Aspuru-Guzik, Alán; Salomón-Ferrer, Romelia; Austin, Brian; Lester, William A

    2005-05-01

    A new algorithm is presented for the sparse representation and evaluation of Slater determinants in the quantum Monte Carlo (QMC) method. The approach, combined with the use of localized orbitals in a Slater-type orbital basis set, significantly extends the size of molecule that can be treated with the QMC method. Application of the algorithm to systems containing up to 390 electrons confirms that the cost of evaluating the Slater determinant scales linearly with system size. PMID:15761862

  7. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic x10^3 speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization
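
    A small sketch of the forward-selection half of this idea, written as a plain greedy least-squares routine (it refits on the selected support at every step, OMP/ORMP-style, but does not use the paper's partitioned-inverse speed-ups or the backward pass):

```python
import numpy as np

def greedy_sparse_least_squares(A, y, k):
    """Forward greedy selection of k columns of A to fit y, refitting coefficients at each step."""
    support, residual, coef = [], y.copy(), None
    for _ in range(k):
        scores = np.abs(A.T @ residual)
        if support:
            scores[support] = -np.inf                     # do not reselect chosen columns
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef

rng = np.random.default_rng(6)
m, n, k = 50, 200, 5
A = rng.standard_normal((m, n))
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)

support, coef = greedy_sparse_least_squares(A, y, k)
print("true support:", sorted(np.flatnonzero(x).tolist()), "selected:", sorted(support))
```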

  8. Removing sparse noise from hyperspectral images with sparse and low-rank penalties

    NASA Astrophysics Data System (ADS)

    Tariyal, Snigdha; Aggarwal, Hemant Kumar; Majumdar, Angshul

    2016-03-01

    In diffraction-grating-based imaging systems, there are at times defective pixels on the focal plane array; this results in horizontal lines of corrupted pixels in some channels. Since only a few such pixels exist, the corruption/noise is sparse. Studies on sparse noise removal from hyperspectral images are scarce. To remove such sparse noise, a prior work exploited the interband spectral correlation along with intraband spatial redundancy to yield a sparse representation in transform domains. We improve upon the prior technique. The intraband spatial redundancy is modeled as a sparse set of transform coefficients and the interband spectral correlation is modeled as a rank-deficient matrix. The resulting optimization problem is solved using the split Bregman technique. Comparative experimental results show that our proposed approach is better than the previous one.

  9. Sparse subspace clustering: algorithm, theory, and applications.

    PubMed

    Elhamifar, Ehsan; Vidal, René

    2013-11-01

    Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering. PMID:24051734
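
    A compact sketch of the pipeline on synthetic data (it uses scikit-learn's Lasso for the ℓ1 self-expression step, so the explicit noise and outlier terms of the full program are omitted): each point is coded as a sparse combination of the other points, the coefficient magnitudes define an affinity, and spectral clustering segments the subspaces.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(7)

def subspace_points(n_points, ambient=20, dim=2):
    """Random points from a random dim-dimensional subspace of R^ambient."""
    basis = np.linalg.qr(rng.standard_normal((ambient, dim)))[0]
    return (basis @ rng.standard_normal((dim, n_points))).T

X = np.vstack([subspace_points(40), subspace_points(40)])     # 80 points from two 2-D subspaces
labels_true = np.repeat([0, 1], 40)

N = X.shape[0]
C = np.zeros((N, N))
for i in range(N):
    others = np.delete(np.arange(N), i)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    lasso.fit(X[others].T, X[i])                              # express point i by the other points
    C[i, others] = lasso.coef_

affinity = np.abs(C) + np.abs(C).T                            # symmetrized sparse affinity
pred = SpectralClustering(n_clusters=2, affinity="precomputed",
                          random_state=0).fit_predict(affinity)
agreement = max(np.mean(pred == labels_true), np.mean(pred != labels_true))
print("clustering agreement with the true subspaces:", agreement)
```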

  10. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided wave obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition from LDVs is generally a slow operation due to the fact that the wave propagation to record must be repeated for each point measurement and the initial conditions must be reached between each measurement. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as being the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed using a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave
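
    A 1-D toy sketch of the source-dictionary idea (the wave packets below stand in for the Lamb-wave Green's functions computed from dispersion relations, and scikit-learn's Lasso stands in for the ℓ1 solver; localization is only approximate because neighboring dictionary columns are highly correlated): a few random point measurements are explained by a sparse set of candidate sources, and the full field is rebuilt from the recovered sources.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n_grid, n_meas = 200, 25                         # dense reconstruction grid, sparse measurement points
x = np.linspace(0.0, 1.0, n_grid)

def wave_packet(center, k=60.0, width=0.05):
    """Toy oscillatory packet radiated by a source at `center`."""
    return np.cos(k * (x - center)) * np.exp(-((x - center) / width) ** 2)

Phi = np.column_stack([wave_packet(c) for c in x])   # column j: field of a candidate source at x[j]

s_true = np.zeros(n_grid); s_true[60] = 1.0; s_true[140] = -0.7    # two active sources
field = Phi @ s_true
meas_idx = np.sort(rng.choice(n_grid, n_meas, replace=False))      # few random measurement locations
y = field[meas_idx]

lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=20000)
lasso.fit(Phi[meas_idx], y)                       # l1 recovery of the sparse source vector
s_hat = lasso.coef_
field_reconstructed = Phi @ s_hat                 # wavefield rebuilt on the full grid

print("sources found near grid indices:", np.flatnonzero(np.abs(s_hat) > 0.1))
print("relative reconstruction error:", np.linalg.norm(field_reconstructed - field) / np.linalg.norm(field))
```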

  11. Infrared image recognition based on structure sparse and atomic sparse parallel

    NASA Astrophysics Data System (ADS)

    Wu, Yalu; Li, Ruilong; Xu, Yi; Wang, Liping

    2015-12-01

    Exploiting the redundancy of an overcomplete dictionary can capture the structural features of an image effectively and thus yield an effective representation of the image. However, the commonly used atomic sparse representation disregards the structure of the dictionary and produces unrelated non-zero terms during computation, while structured sparse representation, although it takes the structure of the dictionary into account, may leave the majority of the block coefficients non-zero, which can reduce recognition efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic sparse and structured sparse representation is proposed, and the recognition efficiency is improved by adaptive computation of the optimal weights. The atomic sparse representation and the structured sparse representation are computed in parallel, and the optimal weights are calculated adaptively as follows: a small part of the identification samples is used for training, and the recognition rate is calculated while the weights are increased by a fixed step size under the constraint between them; with the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points can be connected into a line in the 3-dimensional coordinate system, and the optimal weights are obtained by solving for the highest recognition rate. Simulation experiments show that the optimal weights obtained by this adaptive method yield a better recognition rate; since the weights are computed adaptively from a few samples, the method is suitable for parallel recognition and can effectively improve the recognition rate of infrared images.

  12. [Autonomy and dementia Part II: autonomy and representation: a possible combination?].

    PubMed

    Rigaux, Natalie

    2011-06-01

    This paper, based on a critical review of the medico-social literature, questions the representation of patients with dementia in relation to the autonomy perspectives presented in a previous article. In the canonical perspective of autonomy (defined as rational decision-making by a stand-alone self), the surrogate is the spokesperson for the wishes the subject held while competent, because he knows these wishes through advance directives or assumes them via substituted judgment. The patient's best interest is then given less weight because it is focused on the present, incompetent self. In the relational perspective, where autonomy is constructed through dialogue with others, the surrogate is the present interlocutor, making decisions with the patient and the care-givers in a way that varies with the disease process. He represents the subject with dementia as he was before the disease but also as he has become. There is therefore a continuum between autonomy and representation, and autonomy and well-being are both aims of the surrogate. The relational perspective allows continuity of care for patients with dementia even when they are considered incompetent. It offers a more balanced perspective on patient autonomy, since it is embedded in relationships with others, and opens a richer view of what a good life is, until the end of dementia. PMID:21690029

  13. Finding communities in sparse networks

    PubMed Central

    Singh, Abhinav; Humphries, Mark D.

    2015-01-01

    Spectral algorithms based on matrix representations of networks are often used to detect communities, but classic spectral methods based on the adjacency matrix and its variants fail in sparse networks. New spectral methods based on non-backtracking random walks have recently been introduced that successfully detect communities in many sparse networks. However, the spectrum of non-backtracking random walks ignores hanging trees in networks that can contain information about their community structure. We introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node, unlike the non-backtracking operators that forbid an immediate return. We show that the reluctant backtracking operators can detect communities in certain sparse networks where the non-backtracking operators cannot, while performing comparably on benchmark stochastic block model networks and real world networks. We also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function. Interestingly, for this family of non- and reluctant-backtracking operators the main determinant of performance on real-world networks is whether or not they are normalised to conserve probability at each node. PMID:25742951

  14. Visual recognition based on discriminative and collaborative representation

    NASA Astrophysics Data System (ADS)

    Xiang, Fengtao; Wang, Zhengzhi; Liu, Hongfu

    2014-11-01

    In this paper, a low-complexity yet very efficient image representation for visual recognition tasks is presented. Collaborative representation and a discriminative ingredient are combined in a unified framework. The coefficients of the collaborative representation of test samples are sparse and robust to occlusion or other disguises. It is known that in recognition or classification tasks, the discriminative model is also very important. The proposed model has two-fold advantages. It can represent the test sample well using a redundant representation with sparsity and is robust to disguises. On the other hand, the representation coefficients are generated with more discriminative information, which is very helpful for visual recognition. The point is that the ℓ2 norm can achieve performance comparable to the ℓ1 norm with a simple implementation. Experimental evaluations on several benchmarks indicate that the proposed method achieves impressive performance in terms of accuracy and efficiency compared with existing works.
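
    The ℓ2-based coding mentioned here has a closed form, which is part of why it is cheap; below is a minimal sketch of collaborative-representation-style classification on hypothetical data (a generic ridge-coded classifier, not the specific model of the paper): the test sample is coded over all training samples and assigned to the class whose samples yield the smallest reconstruction residual.

```python
import numpy as np

def crc_predict(X_train, y_train, x_test, lam=0.1):
    """Collaborative-representation classification with an l2 (ridge) code over all training samples."""
    D = X_train.T                                                   # columns are training samples
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x_test)   # closed-form code
    classes = np.unique(y_train)
    residuals = [np.linalg.norm(x_test - D[:, y_train == c] @ alpha[y_train == c]) for c in classes]
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(9)
d = 30
centers = 2.0 * rng.standard_normal((3, d))                         # three hypothetical classes
X_train = np.vstack([c + 0.3 * rng.standard_normal((20, d)) for c in centers])
y_train = np.repeat([0, 1, 2], 20)
x_test = centers[1] + 0.3 * rng.standard_normal(d)

print("predicted class:", crc_predict(X_train, y_train, x_test))    # expected: 1
```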

  15. Sparse Spectrotemporal Coding of Sounds

    NASA Astrophysics Data System (ADS)

    Klein, David J.; König, Peter; Körding, Konrad P.

    2003-12-01

    Recent studies of biological auditory processing have revealed that sophisticated spectrotemporal analyses are performed by central auditory systems of various animals. The analysis is typically well matched with the statistics of relevant natural sounds, suggesting that it produces an optimal representation of the animal's acoustic biotope. We address this topic using simulated neurons that learn an optimal representation of a speech corpus. As input, the neurons receive a spectrographic representation of sound produced by a peripheral auditory model. The output representation is deemed optimal when the responses of the neurons are maximally sparse. Following optimization, the simulated neurons are similar to real neurons in many respects. Most notably, a given neuron only analyzes the input over a localized region of time and frequency. In addition, multiple subregions either excite or inhibit the neuron, together producing selectivity to spectral and temporal modulation patterns. This suggests that the brain's solution is particularly well suited for coding natural sound; therefore, it may prove useful in the design of new computational methods for processing speech.

  16. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
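
    A discrete-time sketch of the locally competitive dynamics described here (simulated numerically rather than in analog hardware; the dictionary, threshold, step size, and time constant are assumed values): each node integrates its feed-forward drive, is inhibited by active neighbors in proportion to the overlap of their dictionary elements, and its output is a soft-thresholded copy of its internal state.

```python
import numpy as np

rng = np.random.default_rng(10)
m, n, k = 30, 90, 4
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)                 # overcomplete dictionary with unit-norm atoms

a_true = np.zeros(n); a_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ a_true                                   # input to encode

lam, dt, tau = 0.1, 0.05, 1.0                      # threshold, step size, time constant (assumed)
b = Phi.T @ y                                      # feed-forward drive to each node
G = Phi.T @ Phi - np.eye(n)                        # lateral inhibition strengths between nodes
u = np.zeros(n)                                    # internal (membrane-like) states
a = np.zeros(n)
for _ in range(2000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)     # thresholded outputs (active coefficients)
    u += (dt / tau) * (b - u - G @ a)                     # drive minus leak minus competition

print("recovered support:", np.flatnonzero(np.abs(a) > 1e-3), "true support:", np.flatnonzero(a_true))
```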

  17. Combined numerical and linguistic knowledge representation and its application to medical diagnosis

    NASA Astrophysics Data System (ADS)

    Meesad, Phayung; Yen, Gary G.

    2002-07-01

    In this study, we propose a novel hybrid intelligent system (HIS) which provides a unified integration of numerical and linguistic knowledge representations. The proposed HIS is a hierarchical integration of an incremental learning fuzzy neural network (ILFN) and a linguistic model, i.e., a fuzzy expert system, optimized via a genetic algorithm. The ILFN is a self-organizing network with the capability of fast, one-pass, online, and incremental learning. The linguistic model is constructed based on knowledge embedded in the trained ILFN or provided by the domain expert. The knowledge captured from the low-level ILFN can be mapped to the higher-level linguistic model and vice versa. The GA is applied to optimize the linguistic model to maintain high accuracy, comprehensibility, completeness, compactness, and consistency. After the system has been completely constructed, it can incrementally learn new information in both numerical and linguistic forms. To evaluate the system's performance, the well-known benchmark Wisconsin breast cancer data set was studied for an application to medical diagnosis. The simulation results have shown that the proposed HIS performs better than the individual standalone systems. The comparison results show that the extracted linguistic rules are competitive with or even superior to some well-known methods.

  18. Improving mass detection using combined feature representations from projection views and reconstructed volume of DBT and boosting based classification with feature selection

    NASA Astrophysics Data System (ADS)

    Kim, Dae Hoe; Kim, Seong Tae; Ro, Yong Man

    2015-11-01

    In digital breast tomosynthesis (DBT), image characteristics of projection views and reconstructed volume are different and both have the advantage of detecting breast masses, e.g. reconstructed volume mitigates a tissue overlap, while projection views have less reconstruction blur artifacts. In this paper, an improved mass detection is proposed by using combined feature representations from projection views and reconstructed volume in the DBT. To take advantage of complementary effects on different image characteristics of both data, combined feature representations are extracted from both projection views and reconstructed volume concurrently. An indirect region-of-interest segmentation in projection views, which projects volume-of-interest in reconstructed volume into the corresponding projection views, is proposed to extract combined feature representations. In addition, a boosting based classification with feature selection has been employed for selecting effective feature representations among a large number of combined feature representations, and for reducing false positives. Experiments have been conducted on a clinical data set that contains malignant masses. Experimental results demonstrate that the proposed mass detection can achieve high sensitivity with a small number of false positives. In addition, the experimental results demonstrate that the selected feature representations for classifying masses complementarily come from both projection views and reconstructed volume.

  19. Improving mass detection using combined feature representations from projection views and reconstructed volume of DBT and boosting based classification with feature selection.

    PubMed

    Kim, Dae Hoe; Kim, Seong Tae; Ro, Yong Man

    2015-11-21

    In digital breast tomosynthesis (DBT), image characteristics of projection views and reconstructed volume are different and both have the advantage of detecting breast masses, e.g. reconstructed volume mitigates a tissue overlap, while projection views have less reconstruction blur artifacts. In this paper, an improved mass detection is proposed by using combined feature representations from projection views and reconstructed volume in the DBT. To take advantage of complementary effects on different image characteristics of both data, combined feature representations are extracted from both projection views and reconstructed volume concurrently. An indirect region-of-interest segmentation in projection views, which projects volume-of-interest in reconstructed volume into the corresponding projection views, is proposed to extract combined feature representations. In addition, a boosting based classification with feature selection has been employed for selecting effective feature representations among a large number of combined feature representations, and for reducing false positives. Experiments have been conducted on a clinical data set that contains malignant masses. Experimental results demonstrate that the proposed mass detection can achieve high sensitivity with a small number of false positives. In addition, the experimental results demonstrate that the selected feature representations for classifying masses complementarily come from both projection views and reconstructed volume. PMID:26529080

  20. Re-Examining Evidence for the Use of Independent Relational Representations during Conceptual Combination

    ERIC Educational Resources Information Center

    Gagne, Christina L.; Spalding, Thomas L.; Ji, Hongbo

    2005-01-01

    In a recent study of conceptual combination, Estes (2003) presented evidence for the priming of relational information in the absence of shared constituents between the prime and target (e.g., "pancake spatula" was interpreted more quickly following "bacon tongs" than following "city riots"). He argued that these data support the view that…

  1. Combining Multiple External Representations and Refutational Text: An Intervention on Learning to Interpret Box Plots

    ERIC Educational Resources Information Center

    Lem, Stephanie; Kempen, Goya; Ceulemans, Eva; Onghena, Patrick; Verschaffel, Lieven; Van Dooren, Wim

    2015-01-01

    Box plots are frequently misinterpreted and educational attempts to correct these misinterpretations have not been successful. In this study, we used two instructional techniques that seemed powerful to change the misinterpretation of the area of the box in box plots, both separately and in combination, leading to three experimental conditions,…

  2. ENSO and annual cycle interaction: the combination mode representation in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Ren, Hong-Li; Zuo, Jinqing; Jin, Fei-Fei; Stuecker, Malte F.

    2015-08-01

    Recent research demonstrated the existence of a combination mode (C-mode) originating from the atmospheric nonlinear interaction between the El Niño-Southern Oscillation (ENSO) and the Pacific warm pool annual cycle. In this paper, we show that the majority of coupled climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5) are able to reproduce the observed spatial pattern of the C-mode in terms of surface wind anomalies reasonably well, and about half of the coupled models are able to reproduce spectral power at the combination tone periodicities of about 10 and/or 15 months. Compared to the CMIP5 historical simulations, the CMIP5 Atmospheric Model Intercomparison Project (AMIP) simulations can generally exhibit a more realistic simulation of the C-mode due to prescribed lower boundary forcing. Overall, the multi-model ensemble average of the CMIP5 models tends to capture the C-mode better than the individual models. Furthermore, the models with better performance in simulating the ENSO mode tend to also exhibit a more realistic C-mode with respect to its spatial pattern and amplitude, in both the CMIP5 historical and AMIP simulations. This study shows that the CMIP5 models are able to simulate the proposed combination mode mechanism to some degree, resulting from their reasonable performance in representing the ENSO mode. It is suggested that the representation of the main ENSO periods in current climate models needs to be further improved to better capture the C-mode.
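
    The quoted near-annual combination tones follow from simple frequency addition and subtraction; the arithmetic below assumes, for illustration, an ENSO frequency of 1/60 cycles per month (a 5-year period) interacting with the 12-month annual cycle.

```python
# Combination tones of the annual cycle with an assumed ~5-year ENSO period (frequencies in cycles/month).
f_annual = 1.0 / 12.0
f_enso = 1.0 / 60.0        # assumed ENSO frequency

period_sum = 1.0 / (f_annual + f_enso)     # 1 / (1/12 + 1/60) = 10 months
period_diff = 1.0 / (f_annual - f_enso)    # 1 / (1/12 - 1/60) = 15 months
print(round(period_sum, 2), round(period_diff, 2))   # 10.0 15.0
```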

  3. ENSO and annual cycle interaction: the combination mode representation in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Ren, Hong-Li; Zuo, Jinqing; Jin, Fei-Fei; Stuecker, Malte F.

    2016-06-01

    Recent research demonstrated the existence of a combination mode (C-mode) originating from the atmospheric nonlinear interaction between the El Niño-Southern Oscillation (ENSO) and the Pacific warm pool annual cycle. In this paper, we show that the majority of coupled climate models in the Coupled Model Intercomparison Project Phase 5 (CMIP5) are able to reproduce the observed spatial pattern of the C-mode in terms of surface wind anomalies reasonably well, and about half of the coupled models are able to reproduce spectral power at the combination tone periodicities of about 10 and/or 15 months. Compared to the CMIP5 historical simulations, the CMIP5 Atmospheric Model Intercomparison Project (AMIP) simulations generally exhibit a more realistic simulation of the C-mode due to the prescribed lower boundary forcing. Overall, the multi-model ensemble average of the CMIP5 models tends to capture the C-mode better than the individual models. Furthermore, the models with better performance in simulating the ENSO mode tend to also exhibit a more realistic C-mode with respect to its spatial pattern and amplitude, in both the CMIP5 historical and AMIP simulations. This study shows that the CMIP5 models are able to simulate the proposed combination mode mechanism to some degree, resulting from their reasonable performance in representing the ENSO mode. It is suggested that the main ENSO periods in current climate models need to be further improved to better represent the C-mode.

  4. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason is that variations in pose, facial expression, and illumination can be rather severe, and the number of available facial images is smaller than the dimension of the facial image, so a linear combination of all the training samples is not able to fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all the training samples that represents it. It then exploits the deviation and all the training samples to re-solve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness: because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. The experimental results on a variety of face databases demonstrate that the proposed framework improves collaborative representation classification, SRC, and the nearest neighbor classifier.
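
    The sketch below illustrates one plausible reading of this framework: solve a regularized representation, estimate the deviation between the test sample and its best linear combination of training samples, re-solve the coefficients with the deviation appended as an extra dictionary atom, and classify by per-class residual. Ridge (l2) regression stands in for the sparse step and the data are synthetic, so this is an illustration rather than the authors' exact algorithm.

        import numpy as np

        def rbc_with_deviation(X, labels, y, lam=0.01):
            """X: (d, n) training samples as columns, labels: (n,), y: (d,) test sample."""
            d, n = X.shape
            # Step 1: initial representation (ridge / collaborative representation).
            a = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
            # Step 2: deviation between the test sample and its optimal linear combination.
            e = y - X @ a
            # Step 3: re-solve the coefficients with the deviation appended as an extra atom.
            Xe = np.hstack([X, e[:, None]])
            b = np.linalg.solve(Xe.T @ Xe + lam * np.eye(n + 1), Xe.T @ y)
            coef, bias = b[:n], b[n]
            # Step 4: classify by class-wise residual of the renewed coefficients.
            residuals = {}
            for c in np.unique(labels):
                mask = labels == c
                residuals[c] = np.linalg.norm(y - bias * e - X[:, mask] @ coef[mask])
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 20))                # 20 training faces, 50-dim features
        labels = np.repeat(np.arange(4), 5)          # 4 subjects, 5 images each
        y = X[:, 7] + 0.1 * rng.normal(size=50)      # noisy copy of a class-1 training sample
        print(rbc_with_deviation(X, labels, y))      # ideally prints 1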

  5. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in

  6. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    SciTech Connect

    Pinski, Peter; Riplinger, Christoph; Neese, Frank E-mail: frank.neese@cec.mpg.de; Valeev, Edward F.

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
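
    The sparse-map idea can be illustrated with a toy data structure: a map from each index in one set to the sparse list of related indices in another set, together with the elementary operations mentioned above (inversion, chaining, intersection). The sketch below is only a conceptual illustration in Python; the index sets are hypothetical and nothing here reproduces the authors' implementation.

        from collections import defaultdict

        def invert(smap):
            """Invert a sparse map {i: [j, ...]} into {j: [i, ...]}."""
            inv = defaultdict(list)
            for i, js in smap.items():
                for j in js:
                    inv[j].append(i)
            return dict(inv)

        def chain(a_to_b, b_to_c):
            """Chain two sparse maps: i -> union of all c reachable via some b."""
            return {i: sorted({c for b in bs for c in b_to_c.get(b, [])})
                    for i, bs in a_to_b.items()}

        def intersect(m1, m2):
            """Keep only targets present in both maps for each source index."""
            return {i: sorted(set(m1.get(i, [])) & set(m2.get(i, []))) for i in m1}

        # Toy example: MO index -> local atom domain, atom -> auxiliary basis shells.
        mo_to_atom = {0: [0, 1], 1: [1, 2]}
        atom_to_aux = {0: [10, 11], 1: [12], 2: [13, 14]}
        print(chain(mo_to_atom, atom_to_aux))   # {0: [10, 11, 12], 1: [12, 13, 14]}
        print(invert(mo_to_atom))               # {0: [0], 1: [0, 1], 2: [1]}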

  7. Haptic fMRI: combining functional neuroimaging with haptics for studying the brain's motor control representation.

    PubMed

    Menon, Samir; Brantner, Gerald; Aholt, Chris; Kay, Kendrick; Khatib, Oussama

    2013-01-01

    A challenging problem in motor control neuroimaging studies is the inability to perform complex human motor tasks given the Magnetic Resonance Imaging (MRI) scanner's disruptive magnetic fields and confined workspace. In this paper, we propose a novel experimental platform that combines Functional MRI (fMRI) neuroimaging, haptic virtual simulation environments, and an fMRI-compatible haptic device for real-time haptic interaction across the scanner workspace (above torso, ∼0.65×0.40×0.20 m³). We implement this Haptic fMRI platform with a novel haptic device, the Haptic fMRI Interface (HFI), and demonstrate its suitability for motor neuroimaging studies. HFI has three degrees-of-freedom (DOF), uses electromagnetic motors to enable high-fidelity haptic rendering (>350 Hz), integrates radio frequency (RF) shields to prevent electromagnetic interference with fMRI (temporal SNR >100), and is kinematically designed to minimize currents induced by the MRI scanner's magnetic field during motor displacement (<2 cm). HFI possesses uniform inertial and force transmission properties across the workspace, and has low friction (0.05-0.30 N). HFI's RF noise levels, in addition, are within a 3 Tesla fMRI scanner's baseline noise variation (∼0.85±0.1%). Finally, HFI is haptically transparent and does not interfere with human motor tasks (tested for 0.4 m reaches). By allowing fMRI experiments involving complex three-dimensional manipulation with haptic interaction, Haptic fMRI enables, for the first time, non-invasive neuroscience experiments involving interactive motor tasks, object manipulation, tactile perception, and visuo-motor integration. PMID:24110643

  8. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    SciTech Connect

    Li, Dengwang; Liu, Li; Chen, Jinhu; Li, Hongsheng

    2014-06-01

    Purpose: The aim of this study was to automatically extract liver structures from daily cone beam CT (CBCT) images. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were regarded as the training dataset for probabilistic atlas and shape prior model construction. Firstly, a probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, the artifacts and noise were removed from the daily CBCT image by an edge-preserving filter using total variation with the L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy, and then the initial liver region was converted to a surface mesh which was registered with the shape model, where the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, firstly the manually segmented contours were converted into meshes, and then an arbitrary patient dataset was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with the other patient data for iterative construction, removing the bias caused by the arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparing with the manual ones. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images. Conclusion: The experiment demonstrated

  9. A sparse Bayesian learning based scheme for multi-movement recognition using sEMG.

    PubMed

    Ding, Shuai; Wang, Liang

    2016-03-01

    This paper proposed a feature extraction scheme based on sparse representation considering the non-stationary property of surface electromyography (sEMG). Sparse Bayesian learning was introduced to extract the feature with optimal class separability to improve recognition accuracy of multi-movement patterns. The extracted feature, sparse representation coefficients (SRC), represented time-varying characteristics of sEMG effectively because of the compressibility (or weak sparsity) of the signal in some transformed domains. We investigated the effect of the proposed feature by comparing it with fourteen other individual features in offline recognition. The results demonstrated that the proposed feature revealed important dynamic information in the sEMG signals. The multi-feature sets formed by the SRC and another single feature yielded superior recognition accuracy compared with the single features. The best average recognition accuracy of 94.33% was obtained by using an SVM classifier with the multi-feature set combining the feature SRC, Willison amplitude (WAMP), wavelength (WL) and the coefficients of the fourth order autoregressive model (ARC4) via a multiple kernel learning framework. The proposed feature extraction scheme (known as SRC + WAMP + WL + ARC4) is a promising method for multi-movement recognition with high accuracy. PMID:26577712

  10. On the decoding of intracranial data using sparse orthonormalized partial least squares

    NASA Astrophysics Data System (ADS)

    van Gerven, Marcel A. J.; Chao, Zenas C.; Heskes, Tom

    2012-04-01

    It has recently been shown that robust decoding of motor output from electrocorticogram signals in monkeys over prolonged periods of time has become feasible (Chao et al 2010 Front. Neuroeng. 3 1-10 ). In order to achieve these results, multivariate partial least-squares (PLS) regression was used. PLS uses a set of latent variables, referred to as components, to model the relationship between the input and the output data and is known to handle high-dimensional and possibly strongly correlated inputs and outputs well. We developed a new decoding method called sparse orthonormalized partial least squares (SOPLS) which was tested on a subset of the data used in Chao et al (2010) (freely obtainable from neurotycho.org (Nagasaka et al 2011 PLoS ONE 6 e22561)). We show that SOPLS reaches the same decoding performance as PLS using just two sparse components which can each be interpreted as encoding particular combinations of motor parameters. Furthermore, the sparse solution afforded by the SOPLS model allowed us to show the functional involvement of beta and gamma band responses in premotor and motor cortex for predicting the first component. Based on the literature, we conjecture that this first component is involved in the encoding of movement direction. Hence, the sparse and compact representation afforded by the SOPLS model facilitates interpretation of which spectral, spatial and temporal components are involved in successful decoding. These advantages make the proposed decoding method an important new tool in neuroprosthetics.

  11. Automatic anatomy recognition of sparse objects

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Udupa, Jayaram K.; Odhner, Dewey; Wang, Huiqian; Tong, Yubing; Torigian, Drew A.

    2015-03-01

    A general body-wide automatic anatomy recognition (AAR) methodology was proposed in our previous work based on hierarchical fuzzy models of multitudes of objects which was not tied to any specific organ system, body region, or image modality. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) if the models are based on the object's exact geometric representations. The challenges stem mainly from the variation in sparse objects in their shape, topology, geographic layout, and relationship to other objects. That led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the underlying improved methodology which includes 5 steps: (a) Collecting image data from a specific population group G and body region Β and delineating in these images the objects in Β to be modeled; (b) Building a super form, S-form, for each object O in Β; (c) Refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) Recognizing objects in Β using the S*-form; (e) Defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations based on 50 3D computed tomography (CT) image sets in the thorax on four sparse objects indicate that substantially improved performance (FPVF~2%, FNVF~10%, and success where the previous approach failed) can be achieved using the new approach.

  12. Benchmarking of HPCC: A novel 3D molecular representation combining shape and pharmacophoric descriptors for efficient molecular similarity assessments.

    PubMed

    Karaboga, Arnaud S; Petronin, Florent; Marchetti, Gino; Souchet, Michel; Maigret, Bernard

    2013-04-01

    Since 3D molecular shape is an important determinant of biological activity, designing accurate 3D molecular representations is still of high interest. Several chemoinformatic approaches have been developed to try to describe accurate molecular shapes. Here, we present a novel 3D molecular description, namely the harmonic pharma chemistry coefficient (HPCC), combining a ligand-centric pharmacophoric description projected onto a spherical-harmonic-based shape of the ligand. The performance of HPCC was evaluated by comparison to the standard ROCS software in a ligand-based virtual screening (VS) approach using the publicly available directory of useful decoys (DUD) data set comprising over 100,000 compounds distributed across 40 protein targets. Our results were analyzed using commonly reported statistics such as the area under the curve (AUC) and normalized sum of logarithms of ranks (NSLR) metrics. Overall, our HPCC 3D method is globally as efficient as the state-of-the-art ROCS software in terms of enrichment and slightly better for more than half of the DUD targets. Since it is widely acknowledged that VS results depend strongly on the nature of the protein families, we believe that the present HPCC solution is of interest among current ligand-based VS methods. PMID:23467019

  13. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  14. A unified approach to sparse signal processing

    NASA Astrophysics Data System (ADS)

    Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Jonathon

    2012-12-01

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally

  15. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
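
    A minimal sketch of the underlying optimization is shown below: the force is represented in a dictionary (here the Dirac dictionary) and recovered by minimizing a least-squares response residual plus an l1 penalty on the coefficients. ISTA is used as a simpler stand-in for the SpaRSA solver named above, and the impulse response and impact locations are synthetic assumptions.

        import numpy as np
        from scipy.linalg import toeplitz

        def ista(A, y, lam=0.1, n_iter=500):
            """Minimize 0.5*||A a - y||_2^2 + lam*||a||_1 by iterative soft thresholding."""
            L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
            a = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = A.T @ (A @ a - y)                     # gradient of the quadratic term
                z = a - g / L
                a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return a

        rng = np.random.default_rng(2)
        n = 200
        k = np.arange(n)
        h = np.exp(-0.05 * k) * np.sin(0.3 * k)           # toy impulse response
        H = toeplitz(h, np.zeros(n))                      # lower-triangular convolution matrix
        D = np.eye(n)                                     # Dirac dictionary: force sparse in time
        f_true = np.zeros(n)
        f_true[[40, 120]] = [5.0, -3.0]                   # two impacts
        y = H @ f_true + 0.01 * rng.normal(size=n)        # noisy measured response
        a_hat = ista(H @ D, y, lam=0.05)
        print(sorted(np.argsort(np.abs(a_hat))[-2:].tolist()))   # ideally near [40, 120]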

  16. Structured Multifrontal Sparse Solver

    Energy Science and Technology Software Center (ESTSC)

    2014-05-01

    StruMF is an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner corresponds to a multifrontal variant of sparse LU factorization in which some dense blocks of the factors are approximated with low-rank matrices. It is algebraic in that it only requires the linear system itself and the approximation threshold that determines the accuracy of individual low-rank approximations. Favourable rank properties are obtained using a block partitioning which is a refinement of the partitioning induced by nested dissection ordering.

  17. Structured Sparse Method for Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhu, Feiyun; Wang, Ying; Xiang, Shiming; Fan, Bin; Pan, Chunhong

    2014-02-01

    Hyperspectral Unmixing (HU) has received increasing attention in the past decades due to its ability to unveil information latent in hyperspectral data. Unfortunately, most existing methods fail to take advantage of the spatial information in the data. To overcome this limitation, we propose a Structured Sparse regularized Nonnegative Matrix Factorization (SS-NMF) method based on the following two aspects. First, we incorporate a graph Laplacian to encode the manifold structures embedded in the hyperspectral data space. In this way, highly similar neighboring pixels can be grouped together. Second, the lasso penalty is employed in SS-NMF because pixels in the same manifold structure are sparsely mixed by a common set of relevant bases. These two factors act as a new structured sparse constraint. With this constraint, our method can learn a compact space, where highly similar pixels are grouped to share correlated sparse representations. Experiments on real hyperspectral data sets with different noise levels demonstrate that our method outperforms the state-of-the-art methods significantly.
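
    The sketch below illustrates the kind of objective described above using standard graph-regularized multiplicative NMF updates plus an l1 term on the abundances, roughly minimizing ||X - WH||_F^2 + lam*Tr(H L H^T) + mu*||H||_1 with L = D - A the Laplacian of a pixel neighbor graph. It is a generic stand-in on synthetic data, not the authors' SS-NMF algorithm.

        import numpy as np

        def ss_nmf(X, A, r, lam=0.1, mu=0.01, n_iter=200, eps=1e-9):
            """X: (bands, pixels), A: (pixels, pixels) adjacency, r: number of endmembers."""
            rng = np.random.default_rng(0)
            W = rng.random((X.shape[0], r))               # endmember spectra
            H = rng.random((r, X.shape[1]))               # abundances
            D = np.diag(A.sum(axis=1))                    # degree matrix (Laplacian L = D - A)
            for _ in range(n_iter):
                W *= (X @ H.T) / (W @ H @ H.T + eps)
                H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + mu + eps)
            return W, H

        # Toy data: 3 endmembers mixed over 100 "pixels" linked by a chain neighbor graph.
        rng = np.random.default_rng(3)
        E = rng.random((50, 3))                           # 50 bands, 3 endmembers
        S = rng.random((3, 100))                          # nonnegative abundances
        X = E @ S
        A = np.diag(np.ones(99), 1) + np.diag(np.ones(99), -1)   # chain adjacency
        W, H = ss_nmf(X, A, r=3)
        print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))     # relative reconstruction error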

  18. Decimal fraction representations are not distinct from natural number representations – evidence from a combined eye-tracking and computational modeling approach

    PubMed Central

    Huber, Stefan; Klein, Elise; Willmes, Klaus; Nuerk, Hans-Christoph; Moeller, Korbinian

    2014-01-01

    Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers because two number processing effects (i.e., semantic interference and compatibility effects) differed in their size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. Therefore, we provided an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel). To evaluate this claim, we manipulated the tenth and hundredth digits in a magnitude comparison task with participants’ eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ. PMID:24744717

  19. Structured data-sparse approximation to high order tensors arising from the deterministic Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Khoromskij, Boris N.

    2007-09-01

    We develop efficient data-sparse representations of a class of high order tensors via a block many-fold Kronecker product decomposition. Such a decomposition is based on low separation-rank approximations of the corresponding multivariate generating function. We combine the Sinc interpolation and a quadrature-based approximation with hierarchically organised block tensor-product formats. Different matrix and tensor operations in the generalised Kronecker tensor-product format, including the Hadamard-type product, can be implemented at low cost. An application to the collision integral from the deterministic Boltzmann equation leads to an asymptotic cost of O(n^4 log^β n)–O(n^5 log^β n) in the one-dimensional problem size n (depending on the model kernel function), which noticeably improves on the O(n^6 log^β n) complexity of the full matrix representation.

  20. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D. E-mail: marinucc@axp.mat.uniroma2.it E-mail: h.peiris@ucl.ac.uk E-mail: cammarot@axp.mat.uniroma2.it

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  1. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of those models need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features. PMID:26340790

  2. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  3. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  4. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  5. Accurate combined-hyperbolic-inverse-power-representation of ab initio potential energy surface for the hydroperoxyl radical and dynamics study of O + OH reaction.

    PubMed

    Varandas, A J C

    2013-04-01

    The Combined-Hyperbolic-Inverse-Power-Representation method, which treats evenly both short- and long-range interactions, is used to fit an extensive set of ab initio points for HO2 previously utilized [Xu et al., J. Chem. Phys. 122, 244305 (2005)] to develop a spline interpolant. The novel form is shown to perform accurately when compared with others, while quasiclassical trajectory calculations of the O + OH reaction clearly pinpoint the role of long-range forces at low temperatures. PMID:23574218

  6. Optical sparse aperture imaging.

    PubMed

    Miller, Nicholas J; Dierking, Matthew P; Duncan, Bradley D

    2007-08-10

    The resolution of a conventional diffraction-limited imaging system is proportional to its pupil diameter. A primary goal of sparse aperture imaging is to enhance resolution while minimizing the total light collection area; the latter being desirable, in part, because of the cost of large, monolithic apertures. Performance metrics are defined and used to evaluate several sparse aperture arrays constructed from multiple, identical, circular subapertures. Subaperture piston and/or tilt effects on image quality are also considered. We selected arrays with compact nonredundant autocorrelations first described by Golay. We vary both the number of subapertures and their relative spacings to arrive at an optimized array. We report the results of an experiment in which we synthesized an image from multiple subaperture pupil fields by masking a large lens with a Golay array. For this experiment we imaged a slant edge feature of an ISO12233 resolution target in order to measure the modulation transfer function. We note the contrast reduction inherent in images formed through sparse aperture arrays and demonstrate the use of a Wiener-Helstrom filter to restore contrast in our experimental images. Finally, we describe a method to synthesize images from multiple subaperture focal plane intensity images using a phase retrieval algorithm to obtain estimates of subaperture pupil fields. Experimental results from synthesizing an image of a point object from multiple subaperture images are presented, and weaknesses of the phase retrieval method for this application are discussed. PMID:17694146

  7. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    PubMed Central

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative features of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrices of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle corrupted training data and situations where not all subjects have enough samples for training. Experimental results show that our method achieves the
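
    The low-rank plus sparse-error decomposition step can be sketched with a generic robust PCA (principal component pursuit) routine: each class matrix, with images as columns, is split into a low-rank class-specific part and a sparse intra-class-variation part. The inexact-ALM style loop below and the toy data are illustrative assumptions, not the authors' LRSE+SC code.

        import numpy as np

        def svt(M, tau):
            """Singular value thresholding."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def shrink(M, tau):
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def rpca(D, lam=None, n_iter=100):
            """Split D into a low-rank L plus a sparse S (inexact-ALM style iteration)."""
            m, n = D.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
            mu = 1.25 / np.linalg.norm(D, 2)
            for _ in range(n_iter):
                L = svt(D - S + Y / mu, 1.0 / mu)
                S = shrink(D - L + Y / mu, lam / mu)
                Y = Y + mu * (D - L - S)
                mu = min(mu * 1.5, 1e7)
                if np.linalg.norm(D - L - S) <= 1e-7 * np.linalg.norm(D):
                    break
            return L, S

        # Toy "face class": a rank-2 component corrupted by sparse, large occlusions.
        rng = np.random.default_rng(4)
        base = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))
        occl = (rng.random((100, 10)) < 0.05) * rng.normal(scale=5.0, size=(100, 10))
        L, S = rpca(base + occl)
        print(np.round(np.linalg.svd(L, compute_uv=False)[:4], 2))   # ideally only 2 large values
        print(float(np.mean(np.abs(S) > 1e-3)))                      # ideally close to 0.05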

  8. Sparse Image Format

    Energy Science and Technology Software Center (ESTSC)

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.

  9. Sparse Image Format

    SciTech Connect

    Eads, Damian Ryan

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.
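
    The tiling idea behind SIF can be illustrated with a small in-memory sketch: uniform tiles are stored as a single pixel value and non-uniform tiles keep their full raster. This is a toy Python illustration of the concept only; the actual SIF library is a C implementation with an on-disk format that is not reproduced here.

        import numpy as np

        def sif_encode(img, tile=8):
            h, w = img.shape
            tiles = {}
            for r in range(0, h, tile):
                for c in range(0, w, tile):
                    block = img[r:r + tile, c:c + tile]
                    # Store a single value for uniform tiles, the full raster otherwise.
                    if np.all(block == block[0, 0]):
                        tiles[(r, c)] = block[0, 0]
                    else:
                        tiles[(r, c)] = block.copy()
            return (h, w, tile, img.dtype), tiles

        def sif_decode(header, tiles):
            h, w, tile, dtype = header
            img = np.empty((h, w), dtype=dtype)
            for (r, c), v in tiles.items():
                img[r:r + tile, c:c + tile] = v
            return img

        img = np.zeros((64, 64), dtype=np.uint8)
        img[5:12, 40:55] = 200                            # a small non-uniform region
        header, tiles = sif_encode(img)
        full = sum(isinstance(v, np.ndarray) for v in tiles.values())
        print(full, "of", len(tiles), "tiles stored as full rasters")
        assert np.array_equal(sif_decode(header, tiles), img)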

  10. TASMANIAN Sparse Grids Module

    SciTech Connect

    Stoyanov, Miroslav; Munster, Drayton

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in a low to moderate number of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.

  11. TASMANIAN Sparse Grids Module

    Energy Science and Technology Software Center (ESTSC)

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in a low to moderate number of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.

  12. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650

  13. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering

    PubMed Central

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650
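
    SDLC solves dictionary learning, sparse coding of the time courses, and k-means clustering as a single joint optimization; the two-stage stand-in below (sparse coding with a learned temporal dictionary, then k-means on the resulting codes) only illustrates the ingredients on synthetic voxel time series. The signal model and all parameter choices are assumptions.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(5)
        n_voxels, n_timepoints, n_regions = 300, 120, 3
        t = np.arange(n_timepoints)
        # Three "functional regions", each driven by a different slow oscillation.
        sources = np.stack([np.sin(2 * np.pi * f * t / n_timepoints) for f in (3, 5, 8)])
        membership = rng.integers(0, n_regions, size=n_voxels)
        X = sources[membership] + 0.3 * rng.normal(size=(n_voxels, n_timepoints))

        # Learn an over-complete temporal dictionary and sparse codes for each voxel.
        dl = DictionaryLearning(n_components=12, transform_algorithm="omp",
                                transform_n_nonzero_coefs=3, max_iter=20, random_state=0)
        codes = dl.fit_transform(X)

        # Cluster voxels by their sparse representations.
        labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(codes)
        print(adjusted_rand_score(membership, labels))    # ideally close to 1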

  14. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.

  15. Sparse Coding on Symmetric Positive Definite Manifolds Using Bregman Divergences.

    PubMed

    Harandi, Mehrtash T; Hartley, Richard; Lovell, Brian; Sanderson, Conrad

    2016-06-01

    This paper introduces sparse coding and dictionary learning for symmetric positive definite (SPD) matrices, which are often used in machine learning, computer vision, and related areas. Unlike traditional sparse coding schemes that work in vector spaces, in this paper, we discuss how SPD matrices can be described by sparse combinations of dictionary atoms, where the atoms are also SPD matrices. We propose to seek sparse coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing sparse coding, but also to an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification, and texture categorization. PMID:25643414

  16. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.; Hamlin, Timothy D.; Light, Tess E.; Suszcynsky, David M.

    2013-05-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database, comprising five years of data recorded from its two RF payloads. While some classification work has been done previously on the FORTE RF database, application of modern pattern recognition techniques may advance lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification scenarios and future development.
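
    The feature-extraction step described above can be sketched with a plain matching pursuit over a dictionary, whose sparse coefficients then serve as classification features. The dictionary below is random rather than learned and the transient is synthetic, so this is only an illustration of the mechanism, not the FORTE processing chain.

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms=5):
            """dictionary: (n_samples, n_total_atoms) with unit-norm columns."""
            residual = signal.copy()
            coeffs = np.zeros(dictionary.shape[1])
            for _ in range(n_atoms):
                corr = dictionary.T @ residual
                k = np.argmax(np.abs(corr))               # best-matching atom
                coeffs[k] += corr[k]
                residual = residual - corr[k] * dictionary[:, k]
            return coeffs, residual

        rng = np.random.default_rng(6)
        n = 256
        D = rng.normal(size=(n, 64))
        D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
        x = 2.0 * D[:, 10] - 1.5 * D[:, 42] + 0.05 * rng.normal(size=n)   # synthetic transient
        features, res = matching_pursuit(x, D, n_atoms=5)
        print(sorted(np.argsort(np.abs(features))[-2:].tolist()), round(np.linalg.norm(res), 3))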

  17. Resistant multiple sparse canonical correlation.

    PubMed

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca. PMID:26963062

  18. Adaptive sparse grid expansions of the vibrational Hamiltonian

    NASA Astrophysics Data System (ADS)

    Strobusch, D.; Scheurer, Ch.

    2014-02-01

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  19. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  20. Accurate combined-hyperbolic-inverse-power-representation of ab initio potential energy surface for the hydroperoxyl radical and dynamics study of O+OH reaction

    NASA Astrophysics Data System (ADS)

    Varandas, A. J. C.

    2013-04-01

    The Combined-Hyperbolic-Inverse-Power-Representation method, which treats evenly both short- and long-range interactions, is used to fit an extensive set of ab initio points for HO2 previously utilized [Xu et al., J. Chem. Phys. 122, 244305 (2005), 10.1063/1.1944290] to develop a spline interpolant. The novel form is shown to perform accurately when compared with others, while quasiclassical trajectory calculations of the O + OH reaction clearly pinpoint the role of long-range forces at low temperatures.

  1. Free-energy analysis of lysozyme-triNAG binding modes with all-atom molecular dynamics simulation combined with the solution theory in the energy representation

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Burri, Raghunadha Reddy; Ishikawa, Takeshi; Ishikura, Takakazu; Sakuraba, Shun; Matubayasi, Nobuyuki; Kuwata, Kazuo; Kitao, Akio

    2013-02-01

    We propose a method for calculating the binding free energy of protein-ligand complexes using all-atom molecular dynamics simulation combined with the solution theory in the energy representation. Four distinct modes for the binding of tri-N-acetyl-D-glucosamine (triNAG) to hen egg-white lysozyme were investigated, one from the crystal structure and three generated by docking predictions. The proposed method was demonstrated to distinguish the most plausible binding mode (crystal model) as the lowest binding energy mode.

  2. Compressed Sampling of Spectrally Sparse Signals Using Sparse Circulant Matrices

    NASA Astrophysics Data System (ADS)

    Xu, Guangjie; Wang, Huali; Sun, Lei; Zeng, Weijun; Wang, Qingguo

    2014-11-01

    Circulant measurement matrices, constructed by partial cyclic shifts of a single generating sequence, are easier to implement in hardware than the widely used random measurement matrices; however, the reduced randomness makes them more sensitive to signal noise. Selecting a deterministic sequence with an optimal periodic autocorrelation property (PACP) as the generating sequence would enhance the noise robustness of the circulant measurement matrix, but this kind of deterministic circulant matrix only exists for fixed periodic lengths. In fact, the selection of the generating sequence does not affect the compressive performance of the circulant measurement matrix but rather the subspace energy in spectrally sparse signals. Sparse circulant matrices, whose generating sequence is a sparse sequence, can keep the energy balance of subspaces and have noise robustness similar to deterministic circulant matrices. In addition, sparse circulant matrices have no restriction on length and are more suitable for the compressed sampling of spectrally sparse signals at arbitrary dimensionality.
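
    The construction can be sketched as follows: a partial circulant measurement matrix is built from a sparse generating sequence and used to compressively sample a spectrally sparse signal. Recovery is done here by least squares on the known spectral support, purely to check that the measurements preserve the information; the sequence length, sparsity levels, and support are arbitrary assumptions.

        import numpy as np
        from scipy.linalg import circulant

        rng = np.random.default_rng(7)
        n, m = 256, 64                                    # signal length, number of measurements
        g = np.zeros(n)
        idx = rng.choice(n, size=16, replace=False)
        g[idx] = rng.choice([-1.0, 1.0], size=16)         # sparse generating sequence
        Phi = circulant(g)[rng.choice(n, size=m, replace=False), :]   # partial circulant matrix

        F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # DFT dictionary (spectral basis)
        support = np.array([7, 31, 101])                  # true spectral support (assumed known here)
        coeffs = rng.normal(size=support.size)
        x = np.real(F[:, support] @ coeffs)               # spectrally sparse signal

        y = Phi @ x                                       # compressed measurements
        A = np.real(Phi @ F[:, support])                  # sensing matrix restricted to the support
        rec, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(np.round(np.abs(rec - coeffs), 6))          # near zero: information is preserved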

  3. Sparse decomposition learning based dynamic MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Zhu, Peifei; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Dynamic MRI is widely used for many clinical exams, but slow data acquisition remains a serious problem. The application of Compressed Sensing (CS) has demonstrated great potential to increase imaging speed. However, the performance of CS largely depends on the sparsity of the image sequence in the transform domain, where there is still much to be improved. In this work, the sparsity is exploited by the proposed Sparse Decomposition Learning (SDL) algorithm, which is a combination of low-rank plus sparsity and Blind Compressed Sensing (BCS). With this decomposition, only the sparsity component is modeled as a sparse linear combination of temporal basis functions. This enables the coefficients to be sparser and to retain more details of the dynamic components compared with learning the whole images. A reconstruction is performed on the undersampled data where joint multicoil data consistency is enforced by combining Parallel Imaging (PI). The experimental results show that the proposed method decreases the Mean Square Error (MSE) by about 15-20% compared to other existing methods.

  4. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that were previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  5. Percolation on Sparse Networks

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.; Zdeborová, Lenka

    2014-11-01

    We study percolation on networks, which is used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and demonstrate how the resulting equations can be used to calculate, among other things, the size of the percolating cluster and the average cluster size. The calculations are exact for sparse networks when the number of short loops in the network is small, but even on networks with many short loops we find them to be highly accurate when compared with direct numerical simulations. By considering the fixed points of the message passing process, we also show that the percolation threshold on a network with few loops is given by the inverse of the leading eigenvalue of the so-called nonbacktracking matrix.
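
    A compact sketch of the bond-percolation message-passing iteration described above, run on a sparse random graph; the graph model, occupation probability, and convergence tolerance are illustrative choices, and networkx is assumed to be available.

        import networkx as nx
        import numpy as np

        def giant_cluster_size(G, p, tol=1e-9, max_iter=500):
            """Fraction of nodes in the percolating cluster under bond percolation
            with occupation probability p, via message passing on the sparse graph."""
            # One message per directed edge: u[(i, j)] = Prob(i is NOT connected
            # to the percolating cluster through its neighbour j).
            u = {(i, j): 0.5 for i in G for j in G[i]}
            for _ in range(max_iter):
                delta = 0.0
                for (i, j) in u:
                    new = 1.0 - p + p * np.prod([u[(j, k)] for k in G[j] if k != i])
                    delta = max(delta, abs(new - u[(i, j)]))
                    u[(i, j)] = new
                if delta < tol:
                    break
            return float(np.mean([1.0 - np.prod([u[(i, j)] for j in G[i]]) for i in G]))

        G = nx.erdos_renyi_graph(2000, 3.0 / 2000, seed=0)   # sparse graph, mean degree ~3
        print(giant_cluster_size(G, p=0.6))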

  6. A comparison of methods for representing sparsely sampled random quantities.

    SciTech Connect

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that is, it should over-estimate the desired percentile range of the actual PDF as little as possible. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

  7. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.

  8. Sparse and optimal acquisition design for diffusion MRI and beyond

    PubMed Central

    Koay, Cheng Guan; Özarslan, Evren; Johnson, Kevin M.; Meyerand, M. Elizabeth

    2012-01-01

    Purpose: Diffusion magnetic resonance imaging (MRI) in combination with functional MRI promises a whole new vista for scientists to investigate noninvasively the structural and functional connectivity of the human brain—the human connectome, which had heretofore been out of reach. As with other imaging modalities, diffusion MRI data are inherently noisy and its acquisition time-consuming. Further, a faithful representation of the human connectome that can serve as a predictive model requires a robust and accurate data-analytic pipeline. The focus of this paper is on one of the key segments of this pipeline—in particular, the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. Methods: The authors propose a novel optimality criterion for sparse multiple-shell acquisition and quasimultiple-shell designs in diffusion MRI and a novel and effective semistochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. The goal of the optimality criteria is threefold: first, to maximize uniformity of the diffusion measurements in each shell, which is equivalent to maximal incoherence in angular measurements; second, to maximize coverage of the diffusion measurements around each radial line to achieve maximal incoherence in radial measurements for multiple-shell acquisition; and finally, to ensure maximum uniformity of diffusion measurement directions in the limiting case when all the shells are coincidental as in the case of a single-shell acquisition. The approach taken in evaluating the stability of various acquisition designs is based on the condition number and the A-optimal measure of the design matrix. Results: Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general—e.g., on the order of 10^232 for a set of 144 diffusion gradient directions, the proposed search

  9. Multimodal Wavelet Embedding Representation for data Combination (MaWERiC): Integrating Magnetic Resonance Imaging and Spectroscopy for Prostate Cancer Detection

    PubMed Central

    Tiwari, Pallavi; Kurhanewicz, John; Viswanath, Satish; Sridhar, Akshay; Madabhushi, Anant

    2011-01-01

    Rationale and Objectives To develop a computerized data integration framework (MaWERiC) for quantitatively combining structural and metabolic information from different Magnetic Resonance (MR) imaging modalities. Materials and Methods In this paper, we present a novel computerized support system that we call Multimodal Wavelet Embedding Representation for data Combination (MaWERiC) which (1) employs wavelet theory and dimensionality reduction for providing a common, uniform representation of the different imaging (T2-w) and non-imaging (spectroscopy) MRI channels, and (2) leverages a random forest classifier for automated prostate cancer detection on a per voxel basis from combined 1.5 Tesla in vivo MRI and MRS. Results A total of 36 1.5 T endorectal in vivo T2-w MRI, MRS patient studies were evaluated on a per-voxel basis via MaWERiC, using a three-fold cross validation scheme across 25 iterations. Ground truth for evaluation of the results was obtained via ex-vivo whole-mount histology sections, which served as the gold standard for expert radiologist annotations of prostate cancer on a per-voxel basis. The results suggest that the MaWERiC-based MRS-T2-w meta-classifier (mean AUC, μ = 0.89 ± 0.02) significantly outperformed (i) a T2-w MRI (employing wavelet texture features) classifier (μ = 0.55 ± 0.02), (ii) an MRS (employing metabolite ratios) classifier (μ = 0.77 ± 0.03), (iii) a decision-fusion classifier, obtained by combining individual T2-w MRI and MRS classifier outputs (μ = 0.85 ± 0.03), and (iv) a data combination scheme involving combination of metabolic MRS and MR signal intensity features (μ = 0.66 ± 0.02). Conclusion A novel data integration framework, MaWERiC, for combining imaging and non-imaging MRI channels was presented. Application to prostate cancer detection via combination of T2-w MRI and MRS data demonstrated significantly higher AUC and accuracy values compared to the individual T2-w MRI, MRS modalities and other data integration strategies.

  10. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
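
    The estimator used in the paper is not reproduced here, but the general idea of exploiting sparsity when estimating a precision matrix from a limited ensemble can be illustrated with an off-the-shelf l1-penalized (graphical lasso) estimate; the toy tridiagonal precision matrix, penalty, and ensemble size below are assumptions for the example.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)

        # Ground-truth sparse precision matrix: tridiagonal nearest-neighbour coupling.
        p = 30
        Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
        Sigma = np.linalg.inv(Theta)

        # A modest ensemble of "simulations" drawn from the corresponding Gaussian.
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=200)

        # l1-penalised maximum-likelihood estimate of the precision matrix.
        sparse_prec = GraphicalLasso(alpha=0.05).fit(X).precision_
        sample_prec = np.linalg.inv(np.cov(X, rowvar=False))
        print(np.linalg.norm(sparse_prec - Theta), np.linalg.norm(sample_prec - Theta))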

  11. Sparse Regulatory Networks

    PubMed Central

    James, Gareth M.; Sabatti, Chiara; Zhou, Nengfeng; Zhu, Ji

    2011-01-01

    In many organisms the expression levels of each gene are controlled by the activation levels of known “Transcription Factors” (TF). A problem of considerable interest is that of estimating the “Transcription Regulation Networks” (TRN) relating the TFs and genes. While the expression levels of genes can be observed, the activation levels of the corresponding TFs are usually unknown, greatly increasing the difficulty of the problem. Based on previous experimental work, it is often the case that partial information about the TRN is available. For example, certain TFs may be known to regulate a given gene or in other cases a connection may be predicted with a certain probability. In general, the biology of the problem indicates there will be very few connections between TFs and genes. Several methods have been proposed for estimating TRNs. However, they all suffer from problems such as unrealistic assumptions about prior knowledge of the network structure or computational limitations. We propose a new approach that can directly utilize prior information about the network structure in conjunction with observed gene expression data to estimate the TRN. Our approach uses L1 penalties on the network to ensure a sparse structure. This has the advantage of being computationally efficient as well as making many fewer assumptions about the network structure. We use our methodology to construct the TRN for E. coli and show that the estimate is biologically sensible and compares favorably with previous estimates. PMID:21625366
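
    A minimal sketch of the L1-penalty idea: one Lasso regression per gene yields a sparse estimate of TF-gene connections. For simplicity the TF activation levels are treated as observed here, although in the paper they are latent, and the sizes and penalty are illustrative.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_samples, n_tfs, n_genes = 80, 15, 50

        # Synthetic data; TF activities are taken as known for this illustration.
        tf_activity = rng.standard_normal((n_samples, n_tfs))
        true_net = np.zeros((n_tfs, n_genes))
        for g in range(n_genes):                  # each gene regulated by ~2 TFs
            true_net[rng.choice(n_tfs, 2, replace=False), g] = rng.standard_normal(2)
        expression = tf_activity @ true_net + 0.1 * rng.standard_normal((n_samples, n_genes))

        # One L1-penalised regression per gene gives a sparse network estimate.
        est_net = np.column_stack([
            Lasso(alpha=0.05).fit(tf_activity, expression[:, g]).coef_
            for g in range(n_genes)
        ])
        print((np.abs(est_net) > 1e-3).sum(), "estimated edges vs", (true_net != 0).sum(), "true edges")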

  12. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-05-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  13. Spatiotemporal System Identification With Continuous Spatial Maps and Sparse Estimation.

    PubMed

    Aram, Parham; Kadirkamanathan, Visakan; Anderson, Sean R

    2015-11-01

    We present a framework for the identification of spatiotemporal linear dynamical systems. We use a state-space model representation that has the following attributes: 1) the number of spatial observation locations is decoupled from the model order; 2) the model allows for spatial heterogeneity; 3) the model representation is continuous over space; and 4) the model parameters can be identified in a simple and sparse estimation procedure. The model identification procedure we propose has four steps: 1) decomposition of the continuous spatial field using a finite set of basis functions where spatial frequency analysis is used to determine basis function width and spacing, such that the main spatial frequency contents of the underlying field can be captured; 2) initialization of states in closed form; 3) initialization of state-transition and input matrix model parameters using sparse regression-the least absolute shrinkage and selection operator method; and 4) joint state and parameter estimation using an iterative Kalman-filter/sparse-regression algorithm. To investigate the performance of the proposed algorithm we use data generated by the Kuramoto model of spatiotemporal cortical dynamics. The identification algorithm performs successfully, predicting the spatiotemporal field with high accuracy, whilst the sparse regression leads to a compact model. PMID:25647667

  14. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  15. Completeness for sparse potential scattering

    SciTech Connect

    Shen, Zhongwei

    2014-01-15

    The present paper is devoted to the scattering theory of a class of continuum Schrödinger operators with deterministic sparse potentials. We first establish the limiting absorption principle for both modified free resolvents and modified perturbed resolvents. This actually is a weak form of the classical limiting absorption principle. We then prove the existence and completeness of local wave operators, which, in particular, imply the existence of wave operators. Under additional assumptions on the sparse potential, we prove the completeness of wave operators. In the context of continuum Schrödinger operators with sparse potentials, this paper gives the first proof of the completeness of wave operators.

  16. Manipulating Representations.

    PubMed

    Recchia-Luciani, Angelo N M

    2012-04-01

    The present paper proposes a definition for the complex polysemic concepts of consciousness and awareness (in humans as well as in other species), and puts forward the idea of a progressive ontological development of consciousness from a state of 'childhood' awareness, in order to explain that humans are not only able to manipulate objects, but also their mental representations. The paper builds on the idea of qualia intended as entities posing regular invariant requests to neural processes, through the permanence of different properties. The concept of semantic differential introduces the properties of metaphorical qualia as an exclusively human ability. Furthermore, this paper proposes a classification of qualia, according to the models-with different levels of abstraction-they are implied in, in a taxonomic perspective. This, in turn, becomes a source of categorization of divergent representations, sign systems, and forms of intentionality, relying always on biological criteria. New emerging image-of-the-world-devices are proposed, whose qualia are likely to be only accessible to humans: emotional qualia, where emotion accounts for the invariant and dominant property; and the qualic self where continuity, combined with the oneness of the self, accounts for the invariant and dominant property. The concept of congruence between different domains in a metaphor introduces the possibility of a general evaluation of truth and falsity of all kinds of metaphorical constructs, while the work of Matte Blanco enables us to classify conscious versus unconscious metaphors, both in individuals and in social organizations. PMID:22347988

  17. Scene Classification Based on the Semantic-Feature Fusion Fully Sparse Topic Model for High Spatial Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Qiqi; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Topic modeling has become an increasingly mature method for bridging the semantic gap between low-level features and high-level semantic information. However, with more and more high spatial resolution (HSR) images to deal with, conventional probabilistic topic models (PTM) usually represent the images with a dense semantic representation, which consumes more time and requires more storage space. In addition, due to the complex spectral and spatial information, a combination of multiple complementary features has proved to be an effective strategy for improving the performance of HSR image scene classification. It should be noted, however, that how the distinct features are fused to fully describe the challenging HSR images is a critical factor for scene classification. In this paper, a semantic-feature fusion fully sparse topic model (SFF-FSTM) is proposed for HSR imagery scene classification. In SFF-FSTM, three heterogeneous features - the mean and standard deviation based spectral feature, the wavelet based texture feature, and the dense scale-invariant feature transform (SIFT) based structural feature - are effectively fused at the latent semantic level. The combination of the multiple semantic-feature fusion strategy and the sparsity-based FSTM is able to provide adequate feature representations, and can achieve comparable performance with limited training samples. Experimental results on the UC Merced dataset and the Google dataset of SIRI-WHU demonstrate that the proposed method can improve the performance of scene classification compared with other scene classification methods for HSR imagery.

  18. Integer sparse distributed memory: analysis and results.

    PubMed

    Snaider, Javier; Franklin, Stan; Strain, Steve; George, E Olusegun

    2013-10-01

    Sparse distributed memory is an auto-associative memory system that stores high dimensional Boolean vectors. Here we present an extension of the original SDM, the Integer SDM that uses modular arithmetic integer vectors rather than binary vectors. This extension preserves many of the desirable properties of the original SDM: auto-associativity, content addressability, distributed storage, and robustness over noisy inputs. In addition, it improves the representation capabilities of the memory and is more robust over normalization. It can also be extended to support forgetting and reliable sequence storage. We performed several simulations that test the noise robustness property and capacity of the memory. Theoretical analyses of the memory's fidelity and capacity are also presented. PMID:23747569

  19. Sparse Representation and Multiscale Methods - Application to Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Stefanescu, R. E. R.; Patra, A. K.; Bursik, M. I.

    2014-12-01

    In general, a Digital Elevation Model (DEM) is produced either by digitizing existing maps, with elevation values interpolated from the contours, or by collecting elevation information from stereo imagery on digital photogrammetric workstations. Both methods produce a DEM to the required specification, but each method contains a variety of possible production scenarios, and each results in DEM cells with totally different character. Common artifacts found in DEMs are missing values at various locations, which can influence the output of any application that uses the DEM. In this work we introduce a numerically stable multiscale scheme to evaluate the quantities of interest (elevation, slope, etc.) at a DEM's missing-value locations. This method is very efficient when dealing with large, high-resolution DEMs that cover a large area, comprising O(10^6-10^10) data points. Our scheme relies on graph-based algorithms and low-rank approximations of the entire adjacency matrix of the DEM's graph. When dealing with data sets as large as DEMs, the Laplacian or kernel matrix resulting from the interaction of the data points is enormous. One needs to identify a subspace that captures most of the action of the kernel matrix. By applying a randomized projection to the graph affinity matrix, a well-conditioned basis is identified for its numerical range. This basis is later used for out-of-sample extension at missing-value locations. In many cases, this method beats its classical competitors in terms of accuracy, speed, and robustness.
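
    A toy sketch of the low-rank ingredient: a randomized projection identifies a well-conditioned basis for the numerical range of a graph-affinity (kernel) matrix built from scattered sample locations. The Gaussian kernel, bandwidth, and target rank are assumptions, and the out-of-sample interpolation step of the actual scheme is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy "DEM" sample locations (x, y) scattered over the unit square.
        n = 2000
        pts = rng.random((n, 2))

        # Gaussian graph-affinity (kernel) matrix between sample locations.
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / 0.02)

        # Randomized range finder: a well-conditioned basis Q for the numerical
        # range of K, obtained without a full factorization of K.
        k = 50                                                   # assumed target rank
        Q, _ = np.linalg.qr(K @ rng.standard_normal((n, k)))
        K_approx = Q @ (Q.T @ K)                                 # rank-k approximation

        print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))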

  1. Heart rate analysis by sparse representation for acute pain detection.

    PubMed

    Tejman-Yarden, Shai; Levi, Ofer; Beizerov, Alex; Parmet, Yisrael; Nguyen, Tu; Saunders, Michael; Rudich, Zvia; Perry, James C; Baker, Dewleen G; Moeller-Bertram, Tobias

    2016-04-01

    Objective pain assessment methods pose an advantage over the currently used subjective pain rating tools. Advanced signal processing methodologies, including the wavelet transform (WT) and the orthogonal matching pursuit algorithm (OMP), were developed in the past two decades. The aim of this study was to apply and compare these time-specific methods to heart rate samples of healthy subjects for acute pain detection. Fifteen adult volunteers participated in a study conducted in the pain clinic at a single center. Each subject's heart rate was sampled for 5-min baseline, followed by a cold pressor test (CPT). Analysis was done by the WT and the OMP algorithm with a Fourier/Wavelet dictionary separately. Data from 11 subjects were analyzed. Compared to baseline, the WT analysis showed a significant coefficients' density increase during the pain incline period (p < 0.01) and the entire CPT (p < 0.01), with significantly higher coefficient amplitudes. The OMP analysis showed a significant wavelet coefficients' density increase during pain incline and decline periods (p < 0.01, p < 0.05) and the entire CPT (p < 0.001), with suggestive higher amplitudes. Comparison of both methods showed that during the baseline there was a significant reduction in wavelet coefficient density using the OMP algorithm (p < 0.001). Analysis by the two-way ANOVA with repeated measures showed a significant proportional increase in wavelet coefficients during the incline period and the entire CPT using the OMP algorithm (p < 0.01). Both methods provided accurate and non-delayed detection of pain events. Statistical analysis proved the OMP to be by far more specific, allowing the Fourier coefficients to represent the signal's basic harmonics and the wavelet coefficients to focus on the time-specific painful event. This is an initial study using OMP for pain detection; further studies need to prove the efficiency of this system in different settings. PMID:26264057
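
    A schematic of the OMP-with-combined-dictionary idea on a synthetic heart-rate trace: cosine (Fourier-like) atoms carry the slow rhythm and crude time-localized bump atoms stand in for the wavelet part. The dictionary construction, trace, and sparsity level are all illustrative assumptions rather than the study's actual processing.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        n = 256
        t = np.arange(n)

        # Combined dictionary: slow cosine atoms (Fourier part) plus crude
        # time-localized Gaussian bump atoms (wavelet-like part).
        fourier = np.column_stack([np.cos(2 * np.pi * k * t / n) for k in range(1, 17)])
        bumps = np.column_stack([np.exp(-0.5 * ((t - c) / 4.0) ** 2) for c in range(8, n, 16)])
        D = np.column_stack([fourier, bumps])
        D /= np.linalg.norm(D, axis=0)

        # Synthetic heart-rate trace: smooth baseline plus an abrupt "pain" response.
        hr = 70.0 + 2.0 * np.cos(2 * np.pi * 3 * t / n)
        hr[128:176] += 8.0

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=6).fit(D, hr - hr.mean())
        print("selected atoms:", np.flatnonzero(omp.coef_), "(indices >= 16 are time-localized)")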

  2. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in many aspects. On the one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by the existing methods are not clear enough. We address these issues as follows. First, we propose a method to extract the midfrequency features for dictionary learning. This method brings the benefit of a reduction of the memory and time complexity without sacrificing the performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct the sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and the reconstruction quality.

  3. Towards robust topology of sparsely sampled data.

    PubMed

    Correa, Carlos D; Lindstrom, Peter

    2011-12-01

    Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods--a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high-dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis. PMID:22034302

  4. Why Representations?

    ERIC Educational Resources Information Center

    Schultz, James E.; Waters, Michael S.

    2000-01-01

    Discusses representations in the context of solving a system of linear equations. Views representations (concrete, tables, graphs, algebraic, matrices) from perspectives of understanding, technology, generalization, exact versus approximate solution, and learning style. (KHR)

  5. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level one and two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
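
    The report's implementation is OpenMP in compiled code; as a language-consistent stand-in, here is a Python/SciPy sketch of one of the four algorithms (the power method) driven by the sparse matrix-vector product, the level-2 kernel that the report threads. Matrix size and density are arbitrary.

        import numpy as np
        import scipy.sparse as sp

        def power_method(A, n_iter=200, tol=1e-10):
            """Dominant eigenvalue of a sparse matrix via repeated sparse mat-vecs."""
            x = np.random.default_rng(0).standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            lam = 0.0
            for _ in range(n_iter):
                y = A @ x                          # CSR sparse matrix-vector product
                lam_new = x @ y
                x = y / np.linalg.norm(y)
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return lam

        A = sp.random(5000, 5000, density=1e-3, format="csr", random_state=0)
        A = (A + A.T).tocsr()                      # symmetrize so the iteration is well behaved
        print("estimated dominant eigenvalue:", power_method(A))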

  6. Segmenting hippocampus from infant brains by sparse patch matching with deep-learned features.

    PubMed

    Guo, Yanrong; Wu, Guorong; Commander, Leah A; Szary, Stephanie; Jewells, Valerie; Lin, Weili; Shent, Dinggang

    2014-01-01

    Accurate segmentation of the hippocampus from infant MR brain images is a critical step for investigating early brain development. Unfortunately, the previous tools developed for adult hippocampus segmentation are not suitable for infant brain images acquired from the first year of life, which often have poor tissue contrast and variable structural patterns of early hippocampal development. From our point of view, the main problem is lack of discriminative and robust feature representations for distinguishing the hippocampus from the surrounding brain structures. Thus, instead of directly using the predefined features as popularly used in the conventional methods, we propose to learn the latent feature representations of infant MR brain images by unsupervised deep learning. Since deep learning paradigms can learn low-level features and then successfully build up more comprehensive high-level features in a layer-by-layer manner, such hierarchical feature representations can be more competitive for distinguishing the hippocampus from entire brain images. To this end, we apply Stacked Auto Encoder (SAE) to learn the deep feature representations from both T1- and T2-weighed MR images combining their complementary information, which is important for characterizing different development stages of infant brains after birth. Then, we present a sparse patch matching method for transferring hippocampus labels from multiple atlases to the new infant brain image, by using deep-learned feature representations to measure the interpatch similarity. Experimental results on 2-week-old to 9-month-old infant brain images show the effectiveness of the proposed method, especially compared to the state-of-the-art counterpart methods. PMID:25485393

  7. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus the sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of such ontology technologies. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.

  8. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components: texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are perfectly efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and the competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and represents an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.

  9. Inverse sparse tracker with a locally weighted distance metric.

    PubMed

    Wang, Dong; Lu, Huchuan; Xiao, Ziyang; Yang, Ming-Hsuan

    2015-09-01

    Sparse representation has recently been extensively studied for visual tracking and generally facilitates more accurate tracking results than classic methods. In this paper, we propose a sparsity-based tracking algorithm that features two components: 1) an inverse sparse representation formulation and 2) a locally weighted distance metric. In the inverse sparse representation formulation, the target template is reconstructed with particles, which enables the tracker to compute the weights of all particles by solving only one l1 optimization problem and thereby provides a quite efficient model. This is in direct contrast to most previous sparse trackers that entail solving one optimization problem for each particle. However, we notice that this formulation with the normal Euclidean distance metric is sensitive to partial noise like occlusion and illumination changes. To this end, we design a locally weighted distance metric to replace the Euclidean one. Similar ideas of using local features appear in other works, but they are supported only by popular assumptions (e.g., that local models handle partial noise better than holistic models) without any solid theoretical analysis. In this paper, we attempt to explain this explicitly from a mathematical view. On that basis, we further propose a method to assign local weights by exploiting the temporal and spatial continuity. In the proposed method, appearance changes caused by partial occlusion and shape deformation are carefully considered, thereby facilitating accurate similarity measurement and model update. The experimental validation is conducted from two aspects: 1) self validation on key components and 2) comparison with other state-of-the-art algorithms. Results over 15 challenging sequences show that the proposed tracking algorithm performs favorably against the existing sparsity-based trackers and the other state-of-the-art methods. PMID:25935033
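
    The inverse formulation reduces to a single l1-regularized reconstruction of the template from the particle set; the minimal sketch below uses scikit-learn's Lasso as the l1 solver, with random vectors standing in for particle image patches and without the locally weighted distance metric.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        d, n_particles = 1024, 200                 # patch dimension, number of particles

        # Candidate patches (particles); a few of them resemble the target template.
        particles = rng.standard_normal((d, n_particles))
        template = particles[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.standard_normal(d)

        # Inverse sparse representation: one l1 problem reconstructs the template
        # from all particles, and the coefficients serve as particle weights.
        weights = Lasso(alpha=0.01, positive=True).fit(particles, template).coef_
        print("highest-weight particles:", np.argsort(weights)[::-1][:5])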

  10. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers, which are called pattern computers. Pattern computers may close the gap between capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. The overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  11. Simultaneously Sparse and Low-Rank Abundance Matrix Estimation for Hyperspectral Image Unmixing

    NASA Astrophysics Data System (ADS)

    Giampouras, Paris V.; Themelis, Konstantinos E.; Rontogiannis, Athanasios A.; Koutroumbas, Konstantinos D.

    2016-08-01

    In a plethora of applications dealing with inverse problems, e.g. in image processing, social networks, compressive sensing, biological data processing etc., the signal of interest is known to be structured in several ways at the same time. This premise has recently guided the research to the innovative and meaningful idea of imposing multiple constraints on the parameters involved in the problem under study. For instance, when dealing with problems whose parameters form sparse and low-rank matrices, the adoption of suitably combined constraints imposing sparsity and low-rankness is expected to yield substantially enhanced estimation results. In this paper, we address the spectral unmixing problem in hyperspectral images. Specifically, two novel unmixing algorithms are introduced, in an attempt to exploit both spatial correlation and sparse representation of pixels lying in homogeneous regions of hyperspectral images. To this end, a novel convex mixed penalty term is first defined consisting of the sum of the weighted ℓ1 norm and the weighted nuclear norm of the abundance matrix corresponding to a small area of the image determined by a sliding square window. This penalty term is then used to regularize a conventional quadratic cost function and impose simultaneously sparsity and low-rankness on the abundance matrix. The resulting regularized cost function is minimized by a) an incremental proximal sparse and low-rank unmixing algorithm and b) an algorithm based on the alternating direction method of multipliers (ADMM). The effectiveness of the proposed algorithms is illustrated in experiments conducted both on simulated and real data.

  12. Mental Representations of Weekdays

    PubMed Central

    Ellis, David A.; Wiseman, Richard; Jenkins, Rob

    2015-01-01

    Keeping social appointments involves keeping track of what day it is. In practice, mismatches between apparent day and actual day are common. For example, a person might think the current day is Wednesday when in fact it is Thursday. Here we show that such mismatches are highly systematic, and can be traced to specific properties of their mental representations. In Study 1, mismatches between apparent day and actual day occurred more frequently on midweek days (Tuesday, Wednesday, and Thursday) than on other days, and were mainly due to intrusions from immediately neighboring days. In Study 2, reaction times to report the current day were fastest on Monday and Friday, and slowest midweek. In Study 3, participants generated fewer semantic associations for “Tuesday”, “Wednesday” and “Thursday” than for other weekday names. Similarly, Google searches found fewer occurrences of midweek days in webpages and books. Analysis of affective norms revealed that participants’ associations were strongly negative for Monday, strongly positive for Friday, and graded over the intervening days. Midweek days are confusable because their mental representations are sparse and similar. Mondays and Fridays are less confusable because their mental representations are rich and distinctive, forming two extremes along a continuum of change. PMID:26288194

  13. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Turek, Javier S.; Elad, Michael; Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix have each a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA obtains in our experiments clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622

  14. Sparse encoding of automatic visual association in hippocampal networks.

    PubMed

    Hulme, Oliver J; Skov, Martin; Chadwick, Martin J; Siebner, Hartwig R; Ramsøy, Thomas Z

    2014-11-15

    Intelligent action entails exploiting predictions about associations between elements of one's environment. The hippocampus and mediotemporal cortex are endowed with the network topology, physiology, and neurochemistry to automatically and sparsely code sensori-cognitive associations that can be reconstructed from single or partial inputs. Whilst acquiring fMRI data and performing an attentional task, participants were incidentally presented with a sequence of cartoon images. By assigning subjects a post-scan free-association task on the same images, we assayed the density of associations triggered by these stimuli. Using multivariate Bayesian decoding, we show that human hippocampal and temporal neocortical structures host sparse associative representations that are automatically triggered by visual input. Furthermore, as predicted theoretically, there was a significant increase in sparsity in the Cornu Ammonis subfields, relative to the entorhinal cortex. Remarkably, the sparsity of CA encoding correlated significantly with associative memory performance over subjects; elsewhere within the temporal lobe, entorhinal, parahippocampal, perirhinal and fusiform cortices showed the highest model evidence for the sparse encoding of associative density. In the absence of reportability or attentional confounds, this charts a distribution of visual associative representations within hippocampal populations and their temporal lobe afferent fields, and demonstrates the viability of retrospective associative sampling techniques for assessing the form of reflexive associative encoding. PMID:25038440

  15. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm “Iterative Soft Thresholding” (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
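
    The baseline algorithm named above, Iterative Soft Thresholding, fits in a few lines; this sketch solves a generic sparse-approximation problem min_x 0.5*||y - Dx||^2 + lam*||x||_1 on synthetic data and does not include the DG-inspired lateral-inhibition modifications proposed in the paper.

        import numpy as np

        def ist(D, y, lam=0.05, n_iter=500):
            """Iterative Soft Thresholding for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
            L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(D.shape[1])
            for _ in range(n_iter):
                z = x - D.T @ (D @ x - y) / L      # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return x

        rng = np.random.default_rng(0)
        D = rng.standard_normal((100, 400))
        D /= np.linalg.norm(D, axis=0)             # dictionary with unit-norm atoms
        x_true = np.zeros(400)
        x_true[rng.choice(400, 10, replace=False)] = rng.standard_normal(10)
        y = D @ x_true + 0.01 * rng.standard_normal(100)

        x_hat = ist(D, y)
        print("nonzeros recovered:", int((np.abs(x_hat) > 1e-3).sum()), "of", 10)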

  16. A sparse neural code for some speech sounds but not for others.

    PubMed

    Scharinger, Mathias; Bendixen, Alexandra; Trujillo-Barreto, Nelson J; Obleser, Jonas

    2012-01-01

    The precise neural mechanisms underlying speech sound representations are still a matter of debate. Proponents of 'sparse representations' assume that on the level of speech sounds, only contrastive or otherwise not predictable information is stored in long-term memory. Here, in a passive oddball paradigm, we challenge the neural foundations of such a 'sparse' representation; we use words that differ only in their penultimate consonant ("coronal" [t] vs. "dorsal" [k] place of articulation) and for example distinguish between the German nouns Latz ([lats]; bib) and Lachs ([laks]; salmon). Changes from standard [t] to deviant [k] and vice versa elicited a discernible Mismatch Negativity (MMN) response. Crucially, however, the MMN for the deviant [lats] was stronger than the MMN for the deviant [laks]. Source localization showed this difference to be due to enhanced brain activity in right superior temporal cortex. These findings reflect a difference in phonological 'sparsity': Coronal [t] segments, but not dorsal [k] segments, are based on more sparse representations and elicit less specific neural predictions; sensory deviations from this prediction are more readily 'tolerated' and accordingly trigger weaker MMNs. The results foster the neurocomputational reality of 'representationally sparse' models of speech perception that are compatible with more general predictive mechanisms in auditory perception. PMID:22815876

  17. Global and Local Sparse Subspace Optimization for Motion Segmentation

    NASA Astrophysics Data System (ADS)

    Yang, M. Ying; Feng, S.; Ackermann, H.; Rosenhahn, B.

    2015-08-01

    In this paper, we propose a new framework for segmenting feature-based moving objects under the affine subspace model. Since the feature trajectories in practice are high-dimensional and contain a lot of noise, we first apply sparse PCA to represent the original trajectories with a low-dimensional global subspace, which consists of the orthogonal sparse principal vectors. Subsequently, the local subspace separation is achieved by automatically searching for the sparse representation of the nearest neighbors of each projected data point. In order to refine the local subspace estimation result, we propose an error estimation to encourage the projected data that span the same local subspace to be clustered together. In the end, the segmentation of different motions is achieved through spectral clustering on an affinity matrix, which is constructed with both the error estimation and sparse neighbors optimization. We test our method extensively and compare it with state-of-the-art methods on the Hopkins 155 dataset. The results show that our method is comparable with the other motion segmentation methods, and in many cases exceeds them in terms of precision and computation time.

  18. Varying Coefficient Models for Sparse Noise-contaminated Longitudinal Data

    PubMed Central

    2014-01-01

    Summary In this paper we propose a varying coefficient model for highly sparse longitudinal data that allows for error-prone time-dependent variables and time-invariant covariates. We develop a new estimation procedure, based on covariance representation techniques, that enables effective borrowing of information across all subjects in sparse and irregular longitudinal data observed with measurement error, a challenge for which there is currently no adequate solution. More specifically, sparsity is addressed via a functional analysis approach that considers the observed longitudinal data as noise contaminated realizations of a random process that produces smooth trajectories. This approach allows for estimation based on pooled data, borrowing strength from all subjects, in targeting the mean functions and auto- and cross-covariances to overcome sparse noisy designs. The resulting estimators are shown to be uniformly consistent. Consistent prediction for the response trajectories is also obtained via conditional expectation under Gaussian assumptions. The asymptotic distribution of the predicted response trajectories is derived, allowing for construction of asymptotic pointwise confidence bands. Efficacy of the proposed method is investigated in simulation studies and compared to the commonly used local polynomial smoothing method. The proposed method is illustrated with a sparse longitudinal data set, examining the age-varying relationship between calcium absorption and dietary calcium. Prediction of individual calcium absorption curves as a function of age is also examined. PMID:25589822

  19. A scalable 2-D parallel sparse solver

    SciTech Connect

    Kothari, S.C.; Mitra, S.

    1995-12-01

    Scalability beyond a small number of processors, typically 32 or less, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256 processor nCUBE2s machine using Boeing/Harwell benchmark matrices.

  20. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Hamlin, T.; Light, T. E.; Loveland, R. C.; Smith, D. A.; Suszcynsky, D. M.

    2012-12-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory (LANL) to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Arguably the richest satellite lightning database ever recorded is that from the Fast On-orbit Recording of Transient Events (FORTE) satellite, which returned at least five years of data from its two RF payloads after launch in 1997. While some classification work has been done previously on the LANL FORTE RF database, application of modern pattern recognition techniques may further lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Extracting classification features from RF signals typically relies on knowledge of the application domain in order to find feature vectors unique to a signal class and robust against background noise. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification performance
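
    A scaled-down sketch of the pipeline: learn a dictionary directly from synthetic chirp-like transients and extract sparse features by orthogonal matching pursuit over the learned atoms, using scikit-learn. The waveform model, dictionary size, and sparsity level are illustrative assumptions, and no classifier is included.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(0)
        n, length = 300, 128
        t = np.linspace(0.0, 1.0, length)

        # Synthetic "RF transients": quadratic chirps with random rates and phases.
        signals = np.array([
            np.sin(2 * np.pi * (5 + 40 * rng.random()) * t * t + 2 * np.pi * rng.random())
            for _ in range(n)
        ]) + 0.05 * rng.standard_normal((n, length))

        # Learn a dictionary of transient waveforms directly from the data, then
        # encode each signal with a few atoms via orthogonal matching pursuit.
        dico = DictionaryLearning(n_components=64, max_iter=30,
                                  transform_algorithm="omp",
                                  transform_n_nonzero_coefs=5, random_state=0)
        codes = dico.fit_transform(signals)        # sparse classification features
        print(codes.shape, float(np.count_nonzero(codes, axis=1).mean()))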

  1. Iterative Sparse Approximation of the Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Telschow, R.

    2012-04-01

    In recent applications in the approximation of gravitational potential fields, several new challenges arise. We are concerned with a huge quantity of data (e.g., in the case of the Earth) or strongly irregularly distributed data points (e.g., in the case of the Juno mission to Jupiter), both of which bring the established approximation methods to their limits. Our novel method, a matching pursuit, instead iteratively chooses a best basis out of a large redundant family of trial functions to reconstruct the signal. It is independent of the data points, which makes it possible to take into account a much larger amount of data and, furthermore, to handle irregularly distributed data, since the algorithm is able to combine arbitrary spherical basis functions, i.e., global as well as local trial functions. This additionally results in a solution that is sparse in the sense that it features more basis functions where the signal has a higher local detail density. In summary, we get a method that reconstructs large quantities of data with a preferably low number of basis functions, combining global as well as several localizing functions into a sparse basis, and a solution which is locally adapted to the data density and also to the detail density of the signal.

  2. The Use of Lesson Study Combined with Content Representation in the Planning of Physics Lessons During Field Practice to Develop Pedagogical Content Knowledge

    NASA Astrophysics Data System (ADS)

    Juhler, Martin Vogt

    2016-08-01

    Recent research, both internationally and in Norway, has clearly expressed concerns about missing connections between subject-matter knowledge, pedagogical competence and real-life practice in schools. This study addresses this problem within the domain of field practice in teacher education, studying pre-service teachers' planning of a Physics lesson. Two means of intervention were introduced. The first was lesson study, which is a method for planning, carrying out and reflecting on a research lesson in detail with a learner and content-centered focus. This was used in combination with a second means, content representations, which is a systematic tool that connects overall teaching aims with pedagogical prompts. Changes in teaching were assessed through the construct of pedagogical content knowledge (PCK). A deductive coding analysis was carried out for this purpose. Transcripts of pre-service teachers' planning of a Physics lesson were coded into four main PCK categories, which were thereafter divided into 16 PCK sub-categories. The results showed that the intervention affected the pre-service teachers' potential to start developing PCK. First, they focused much more on categories concerning the learners. Second, they focused far more uniformly in all of the four main categories comprising PCK. Consequently, these differences could affect their potential to start developing PCK.

  3. The Use of Lesson Study Combined with Content Representation in the Planning of Physics Lessons During Field Practice to Develop Pedagogical Content Knowledge

    NASA Astrophysics Data System (ADS)

    Juhler, Martin Vogt

    2016-05-01

    Recent research, both internationally and in Norway, has clearly expressed concerns about missing connections between subject-matter knowledge, pedagogical competence and real-life practice in schools. This study addresses this problem within the domain of field practice in teacher education, studying pre-service teachers' planning of a Physics lesson. Two means of intervention were introduced. The first was lesson study, which is a method for planning, carrying out and reflecting on a research lesson in detail with a learner and content-centered focus. This was used in combination with a second means, content representations, which is a systematic tool that connects overall teaching aims with pedagogical prompts. Changes in teaching were assessed through the construct of pedagogical content knowledge (PCK). A deductive coding analysis was carried out for this purpose. Transcripts of pre-service teachers' planning of a Physics lesson were coded into four main PCK categories, which were thereafter divided into 16 PCK sub-categories. The results showed that the intervention affected the pre-service teachers' potential to start developing PCK. First, they focused much more on categories concerning the learners. Second, they focused far more uniformly in all of the four main categories comprising PCK. Consequently, these differences could affect their potential to start developing PCK.

  4. Free-energy analysis of water affinity in polymer studied by atomistic molecular simulation combined with the theory of solutions in the energy representation

    NASA Astrophysics Data System (ADS)

    Kawakami, Tomonori; Shigemoto, Isamu; Matubayasi, Nobuyuki

    2012-12-01

    The affinity of a small molecule to a polymer is an essential property for designing polymer materials with tuned permeability. In the present work, we develop a computational approach to the free energy ΔG of binding a small solute molecule into a polymer using atomistic molecular dynamics (MD) simulation combined with the method of energy representation. The binding free energy ΔG is obtained by viewing a single polymer as a collection of fragments and employing an approximate functional constructed from distribution functions of the solute-fragment interaction energy obtained from MD simulation. The binding of water is then examined against 9 typical polymers. The relationship between the fragment size and the calculated ΔG is addressed, and a useful fragment size is identified that balances the performance of the free-energy functional against the sampling efficiency. It is found with the appropriate fragment size that the ΔG convergence at a statistical error of ˜0.2 kcal/mol is reached at ˜4 ns of replica-exchange MD of the water-polymer system and that the mean absolute deviation of the computational ΔG from the experimental value is 0.5 kcal/mol. The connection between the polymer structure and the thermodynamic ΔG is discussed further.

  5. EPR Oximetry in Three Spatial Dimensions using Sparse Spin Distribution

    PubMed Central

    Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan

    2008-01-01

    A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxy naphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer. PMID:18538600

  6. Amesos2 Templated Direct Sparse Solver Package

    Energy Science and Technology Software Center (ESTSC)

    2011-05-24

    Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

  7. Sparse Biclustering of Transposable Data

    PubMed Central

    Tan, Kean Ming

    2013-01-01

    We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for bi-clustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online. PMID:25364221
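
    The sketch below illustrates only the ℓ1-penalized mean-update idea described above, assuming row- and column-cluster labels are already given. The soft-thresholding scale is illustrative and does not reproduce the exact penalty normalization of the published algorithm; the data and labels are toy values.

      # Soft-thresholded bicluster means: small means are shrunk exactly to zero.
      import numpy as np

      def soft_threshold(z, lam):
          return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

      def bicluster_means(X, row_labels, col_labels, lam):
          K, R = row_labels.max() + 1, col_labels.max() + 1
          mu = np.zeros((K, R))
          for k in range(K):
              for r in range(R):
                  block = X[np.ix_(row_labels == k, col_labels == r)]
                  if block.size:
                      # The l1 penalty on the mean yields a soft-thresholded average.
                      mu[k, r] = soft_threshold(block.mean(), lam)
          return mu

      rng = np.random.default_rng(1)
      X = rng.standard_normal((40, 30))
      rows = rng.integers(0, 3, size=40)   # toy row-cluster labels
      cols = rng.integers(0, 2, size=30)   # toy column-cluster labels
      print(bicluster_means(X, rows, cols, lam=0.1))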

  8. Sparse Biclustering of Transposable Data.

    PubMed

    Tan, Kean Ming; Witten, Daniela M

    2014-01-01

    We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for bi-clustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online. PMID:25364221

  9. Topological sparse learning of dynamic form patterns.

    PubMed

    Guthier, T; Willert, V; Eggert, J

    2015-01-01

    Motion is a crucial source of information for a variety of tasks in social interactions. The process by which humans recognize complex articulated movements such as gestures or face expressions remains largely unclear. There is an ongoing discussion about whether and how explicit low-level motion information, such as optical flow, is involved in the recognition process. Motivated by this discussion, we introduce a computational model that classifies the spatial configuration of gradient and optical flow patterns. The patterns are learned with an unsupervised learning algorithm based on translation-invariant nonnegative sparse coding, called VNMF, that extracts prototypical optical flow patterns shaped, for example, as moving heads or limb parts. A key element of the proposed system is a lateral inhibition term that suppresses activations of competing patterns in the learning process, leading to a low number of dominant and topologically sparse activations. We analyze the classification performance of the gradient and optical flow patterns on three real-world human action recognition data sets and one face expression recognition data set. The results indicate that the recognition of human actions can be achieved by gradient patterns alone, but adding optical flow patterns increases the classification performance. The combined patterns outperform other biologically inspired models and are competitive with current computer vision approaches. PMID:25248088

  10. PSPIKE: A Parallel Hybrid Sparse Linear System Solver

    NASA Astrophysics Data System (ADS)

    Manguoglu, Murat; Sameh, Ahmed H.; Schenk, Olaf

    The availability of large-scale computing platforms comprised of tens of thousands of multicore processors motivates the need for the next generation of highly scalable sparse linear system solvers. These solvers must optimize parallel performance, processor (serial) performance, as well as memory requirements, while being robust across broad classes of applications and systems. In this paper, we present a new parallel solver that combines the desirable characteristics of direct methods (robustness) and effective iterative solvers (low computational cost), while alleviating their drawbacks (memory requirements, lack of robustness). Our proposed hybrid solver is based on the general sparse solver PARDISO, and the “Spike” family of hybrid solvers. The resulting algorithm, called PSPIKE, is as robust as direct solvers, more reliable than classical preconditioned Krylov subspace methods, and much more scalable than direct sparse solvers. We support our performance and parallel scalability claims using detailed experimental studies and comparison with direct solvers, as well as classical preconditioned Krylov methods.
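
    The sketch below is not the PSPIKE algorithm itself; it only illustrates, in SciPy, the general idea of combining a direct factorization with a Krylov iteration by using sparse LU factorizations of diagonal blocks as a block-Jacobi preconditioner for GMRES. The matrix, block size, and right-hand side are toy choices.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n, nb = 400, 100                       # problem size and block size (toy values)
      A = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # Factor each diagonal block once with a direct solver (robust, reusable).
      blocks = [spla.splu(A[i:i + nb, i:i + nb].tocsc()) for i in range(0, n, nb)]

      def apply_prec(r):
          """Apply the block-Jacobi preconditioner: solve each diagonal block exactly."""
          z = np.empty_like(r)
          for j, lu in enumerate(blocks):
              z[j * nb:(j + 1) * nb] = lu.solve(r[j * nb:(j + 1) * nb])
          return z

      M = spla.LinearOperator((n, n), matvec=apply_prec)
      x, info = spla.gmres(A, b, M=M)        # Krylov iteration accelerated by the direct factors
      print("converged" if info == 0 else f"gmres info={info}")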

  11. A note on rank reduction in sparse multivariate regression

    PubMed Central

    Chen, Kun; Chan, Kung-Sik

    2016-01-01

    A reduced-rank regression with sparse singular value decomposition (RSSVD) approach was proposed by Chen et al. for conducting variable selection in a reduced-rank model. To jointly model the multivariate response, the method efficiently constructs a prespecified number of latent variables as some sparse linear combinations of the predictors. Here, we generalize the method to also perform rank reduction, and enable its usage in reduced-rank vector autoregressive (VAR) modeling to perform automatic rank determination and order selection. We show that in the context of stationary time-series data, the generalized approach correctly identifies both the model rank and the sparse dependence structure between the multivariate response and the predictors, with probability one asymptotically. We demonstrate the efficacy of the proposed method by simulations and by analyzing a macroeconomic multivariate time series using a reduced-rank VAR model. PMID:26997938

  12. Sparse-based multispectral image encryption via ptychography

    NASA Astrophysics Data System (ADS)

    Rawat, Nitin; Shi, Yishi; Kim, Byoungho; Lee, Byung-Geun

    2015-12-01

    Recently, we proposed a model of securing a ptychography-based monochromatic image encryption system via the classical Photon-counting imaging (PCI) technique. In this study, we examine a single-channel multispectral sparse-based photon-counting ptychography imaging (SMPI)-based cryptosystem. A ptychography-based cryptosystem creates a complex object wave field, which can be reconstructed by a series of diffraction intensity patterns through an aperture movement. The PCI sensor records only a few complex Bayer patterned samples that have been utilized in the decryption process. Sparse sensing and nonlinear properties of the classical PCI system, together with the scanning probes, enlarge the key space, and such a combination therefore enhances the system's security. We demonstrate that the sparse samples have adequate information for image decryption, as well as information authentication by means of optical correlation.

  13. A Probabilistic Analysis of Sparse Coded Feature Pooling and Its Application for Image Retrieval

    PubMed Central

    Zhang, Yunchao; Chen, Jing; Huang, Xiujie; Wang, Yongtian

    2015-01-01

    Feature coding and pooling, as a key component of image retrieval, have been widely studied over the past several years. Recently, sparse coding with max-pooling has been regarded as the state of the art for image classification. However, there is no comprehensive study concerning the application of sparse coding to image retrieval. In this paper, we first analyze the effects of different sampling strategies for image retrieval; we then discuss feature pooling strategies and their effect on image retrieval performance, with a probabilistic explanation in the context of the sparse coding framework, and propose a modified sum pooling procedure which can improve the retrieval accuracy significantly. Further, we apply the sparse coding method to aggregate multiple types of features for large-scale image retrieval. Extensive experiments on commonly used evaluation datasets demonstrate that our final compact image representation improves the retrieval accuracy significantly. PMID:26132080
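
    As a rough illustration of the pooling step discussed above, the sketch below compares plain max pooling and sum pooling of sparse codes into an image-level descriptor. The codes are random stand-ins for sparse codes of local descriptors, and the paper's modified sum pooling procedure is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      codes = rng.standard_normal((500, 1024))          # one code per local patch (toy data)
      codes[rng.random(codes.shape) < 0.98] = 0.0       # make the codes sparse

      max_pooled = np.abs(codes).max(axis=0)            # max pooling: strongest response per atom
      sum_pooled = np.abs(codes).sum(axis=0)            # sum pooling: accumulated responses
      # L2-normalize, as is common before retrieval with cosine similarity.
      max_pooled /= np.linalg.norm(max_pooled) + 1e-12
      sum_pooled /= np.linalg.norm(sum_pooled) + 1e-12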

  14. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  15. Structure of a Single Whisker Representation in Layer 2 of Mouse Somatosensory Cortex

    PubMed Central

    Clancy, Kelly B.; Schnepel, Philipp; Rao, Antara T.

    2015-01-01

    Layer (L)2 is a major output of primary sensory cortex that exhibits very sparse spiking, but the structure of sensory representation in L2 is not well understood. We combined two-photon calcium imaging with deflection of many whiskers to map whisker receptive fields, characterize sparse coding, and quantitatively define the point representation in L2 of mouse somatosensory cortex. Neurons within a column-sized imaging field showed surprisingly heterogeneous, salt-and-pepper tuning to many different whiskers. Single whisker deflection elicited low-probability spikes in highly distributed, shifting neural ensembles spanning multiple cortical columns. Whisker-evoked response probability correlated strongly with spontaneous firing rate, but weakly with tuning properties, indicating a spectrum of inherent responsiveness across pyramidal cells. L2 neurons projecting to motor and secondary somatosensory cortex differed in whisker tuning and responsiveness, and carried different amounts of information about columnar whisker deflection. From these data, we derive a quantitative, fine-scale picture of the distributed point representation in L2. PMID:25740523

  16. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
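
    An analogous behavior can be illustrated in SciPy (this is a hypothetical Python example, not MATLAB itself): storage and arithmetic cost scale with the number of nonzeros, and most operations accept sparse and dense operands interchangeably. The sizes and density below are arbitrary.

      import numpy as np
      import scipy.sparse as sp

      A = sp.random(10_000, 10_000, density=1e-4, format="csr", random_state=0)
      # Memory use is proportional to the number of nonzeros, not to n*n.
      print(A.nnz, A.data.nbytes + A.indices.nbytes + A.indptr.nbytes)

      x = np.ones(10_000)
      y = A @ x              # sparse matrix-vector product costs O(nnz)
      B = A + A              # the result is computed and stored in sparse form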

  17. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
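
    The sketch below is not the SOT design procedure; it only illustrates the n-term nonlinear approximation idea the paper builds on, using a fixed orthonormal transform (an orthonormal DCT): keep the n largest-magnitude coefficients and measure the reconstruction error. The signal and the value of n are toy choices.

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(3)
      x = np.cumsum(rng.standard_normal(256))        # toy signal (a random walk)

      c = dct(x, norm="ortho")                       # orthonormal transform coefficients
      n = 16
      keep = np.argsort(np.abs(c))[-n:]              # indices of the n largest coefficients
      c_sparse = np.zeros_like(c)
      c_sparse[keep] = c[keep]
      x_hat = idct(c_sparse, norm="ortho")           # n-term approximation

      err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
      print(f"relative n-term approximation error: {err:.3f}")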

  18. Optimal parallel solution of sparse triangular systems

    NASA Technical Reports Server (NTRS)

    Alvarado, Fernando L.; Schreiber, Robert

    1990-01-01

    A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared to parallel forward or backsolve. Applications are to iterative solvers with triangular preconditioners, to structural analysis, or to power systems applications, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse of L with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in the formation of these inverse factors. A method from an earlier reference is shown to solve this problem. This method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The parallelism attainable is illustrated by means of elimination trees and clique trees.

  19. Clothed particle representation in quantum field theory: Mass renormalization

    NASA Astrophysics Data System (ADS)

    Korda, V. Yu.; Shebeko, A. V.

    2004-10-01

    We consider the neutral pion and nucleon fields interacting via the pseudoscalar (PS) Yukawa-type coupling. The method of unitary clothing transformations is used to handle the so-called clothed particle representation, where the total field Hamiltonian and the three boost operators in the instant form of relativistic dynamics take on the same sparse structure in the Hilbert space of hadronic states. In this approach the mass counterterms are cancelled (at least, partly) by commutators of the generators of clothing transformations and the field interaction operator. This allows the pion and nucleon mass shifts to be expressed through the corresponding three-dimensional integrals whose integrands depend on certain covariant combinations of the relevant three-momenta. The property provides the momentum independence of mass renormalization. The present results prove to be equivalent to the results obtained by Feynman techniques.

  20. Intrinsic membrane properties and inhibitory synaptic input of kenyon cells as mechanisms for sparse coding?

    PubMed

    Demmer, Heike; Kloppenburg, Peter

    2009-09-01

    The insect mushroom bodies (MBs) are multimodal signal processing centers and are essential for olfactory learning. Electrophysiological recordings from the MBs' principal component neurons, the Kenyon cells (KCs), showed a sparse representation of olfactory signals. It has been proposed that the intrinsic and synaptic properties of the KC circuitry combine to reduce the firing of action potentials and to generate relatively brief windows for synaptic integration in the KCs, thus causing them to operate as coincidence detectors. To better understand the ionic mechanisms that mediate the KC intrinsic firing properties, we used whole cell patch-clamp recordings from KCs in the adult, intact brain of Periplaneta americana to analyze voltage- and/or Ca2+-dependent inward (ICa, INa) and outward currents [IA, IK(V), IK,ST, IO(Ca)]. In general the currents had properties similar to those of currents in other insect neurons. Certain functional parameters of ICa and IO(Ca), however, had unusually high values, allowing them to assist sparse coding. ICa had a low-activation threshold and a very high current density compared with those of ICa in other insect neurons. Together these parameters make ICa suitable for boosting and sharpening the excitatory postsynaptic potentials as reported in previous studies. IO(Ca) also had a large current density and a very depolarized activation threshold. In combination, the large ICa and IO(Ca) are likely to mediate the strong spike frequency adaptation. These intrinsic properties of the KCs are likely to be supported by their tonic, inhibitory synaptic input, which was revealed by specific GABA antagonists and which contributes significantly to the hyperpolarized membrane potential at rest. PMID:19553491

  1. A strategy of car detection via sparse dictionary

    NASA Astrophysics Data System (ADS)

    Jin, Guo-Qing; Dong, Ying-Hui

    2011-06-01

    In recent years there has been a growing interest in the study of sparse representation for object detection. These approaches depend heavily on local salient image patches, thus weakening the global contribution of other, less informative signals to object identification. Our generic approach not only employs the informative representation obtained by a linear transform, but also keeps all the spatial dependence implied among the objects. As an example, car images can be represented using parts from a vocabulary, along with the spatial relations observed among them. Our approach is guided by quantitative measurement at every stage of developing the car detector. The theory underlying the optimal solution is the maximal mutual information carried by the system. Our goal is to preserve the maximal mutual information transmitted from stage to stage so that only the least uncertainty about the class identification remains, based on the observation of the classifier's output.

  2. Sparse parallel transmission on randomly perturbed spiral k-space trajectory

    PubMed Central

    Pang, Yong; Jiang, Xiaohua

    2014-01-01

    The combination of parallel transmission and sparse pulses can shorten the excitation by using both the coil sensitivities and a sparse k-space, showing improved fast-excitation capability over the use of parallel transmission alone. However, designing an optimal k-space trajectory for sparse parallel transmission is a challenging task. In this work, a randomly perturbed sparse k-space trajectory is designed by modifying the path of a spiral trajectory along the sparse k-space data, and the sparse parallel transmission RF pulses are subsequently designed based on this optimal trajectory. This method combines parallel transmission with a sparse spiral k-space trajectory, potentially further reducing the RF transmission time. Bloch simulation of 90° excitation using a four-channel coil array is performed to demonstrate its feasibility. The excitation performance of the sparse parallel transmission technique at reduction factors of 1, 2, and 4 is evaluated. For comparison, parallel excitation using a regular spiral trajectory is performed. The passband errors of the excitation profiles of each transmission are calculated for quantitative assessment of the proposed excitation method. PMID:24834422

  3. Representation is representation of similarities.

    PubMed

    Edelman, S

    1998-08-01

    Advanced perceptual systems are faced with the problem of securing a principled (ideally, veridical) relationship between the world and its internal representation. I propose a unified approach to visual representation, addressing the need for superordinate and basic-level categorization and for the identification of specific instances of familiar categories. According to the proposed theory, a shape is represented internally by the responses of a small number of tuned modules, each broadly selective for some reference shape, whose similarity to the stimulus it measures. This amounts to embedding the stimulus in a low-dimensional proximal shape space spanned by the outputs of the active modules. This shape space supports representations of distal shape similarities that are veridical as Shepard's (1968) second-order isomorphisms (i.e., correspondence between distal and proximal similarities among shapes, rather than between distal shapes and their proximal representations). Representation in terms of similarities to reference shapes supports processing (e.g., discrimination) of shapes that are radically different from the reference ones, without the need for the computationally problematic decomposition into parts required by other theories. Furthermore, a general expression for similarity between two stimuli, based on comparisons to reference shapes, can be used to derive models of perceived similarity ranging from continuous, symmetric, and hierarchical ones, as in multidimensional scaling (Shepard 1980), to discrete and nonhierarchical ones, as in the general contrast models (Shepard & Arabie 1979; Tversky 1977). PMID:10097019

  4. Representing Representation

    ERIC Educational Resources Information Center

    Kuntz, Aaron M.

    2010-01-01

    What can be known and how to render what we know are perpetual quandaries met by qualitative research, complicated further by the understanding that the everyday discourses influencing our representations are often tacit, unspoken or heard so often that they seem to warrant little reflection. In this article, I offer analytic memos as a means for…

  5. Drosophila Gene Expression Pattern Annotation Using Sparse Features and Term-Term Interactions

    PubMed Central

    Ji, Shuiwang; Yuan, Lei; Li, Ying-Xin; Zhou, Zhi-Hua; Kumar, Sudhir; Ye, Jieping

    2010-01-01

    The Drosophila gene expression pattern images document the spatial and temporal dynamics of gene expression and they are valuable tools for explicating the gene functions, interaction, and networks during Drosophila embryogenesis. To provide text-based pattern searching, the images in the Berkeley Drosophila Genome Project (BDGP) study are annotated with ontology terms manually by human curators. We present a systematic approach for automating this task, because the number of images needing text descriptions is now rapidly increasing. We consider both improved feature representation and novel learning formulation to boost the annotation performance. For feature representation, we adapt the bag-of-words scheme commonly used in visual recognition problems so that the image group information in the BDGP study is retained. Moreover, images from multiple views can be integrated naturally in this representation. To reduce the quantization error caused by the bag-of-words representation, we propose an improved feature representation scheme based on the sparse learning technique. In the design of learning formulation, we propose a local regularization framework that can incorporate the correlations among terms explicitly. We further show that the resulting optimization problem admits an analytical solution. Experimental results show that the representation based on sparse learning outperforms the bag-of-words representation significantly. Results also show that incorporation of the term-term correlations improves the annotation performance consistently. PMID:21614142

  6. Sparse approximation of currents for statistics on curves and surfaces.

    PubMed

    Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas

    2008-01-01

    Computing, processing, and visualizing statistics on shapes such as curves or surfaces is a real challenge, with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids both feature-based approaches and point-correspondence methods. This framework has proved powerful for registering brain surfaces or measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, because the complexity increases as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures. PMID:18982629

  7. Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery

    PubMed Central

    Lörincz, András; Palotai, Zsolt; Szirtes, Gábor

    2012-01-01

    Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding this overcompleteness poses a computational challenge for shaping the signal processing channels as well as for using the large and sparse representations in an efficient way. We argue that higher level overcompleteness becomes computationally tractable by imposing sparsity on synaptic activity and we also show that such structural sparsity can be facilitated by statistics based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations, thus they can be significantly compressed. Atypical parts, on the other hand, represent local features and are the subjects of actual sparse coding. When applied on natural images, our decomposition based sparse coding model can efficiently form overcomplete codes and both center-surround and oriented filters are obtained similar to those observed in the retina and the primary visual cortex, respectively. Therefore we hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision. PMID:22396629

  8. Cellular-resolution population imaging reveals robust sparse coding in the Drosophila Mushroom Body

    PubMed Central

    Honegger, Kyle S.; Campbell, Robert A. A.; Turner, Glenn C.

    2011-01-01

    Sensory stimuli are represented in the brain by the activity of populations of neurons. In most biological systems, studying population coding is challenging since only a tiny proportion of cells can be recorded simultaneously. Here we used 2-photon imaging to record neural activity in the relatively simple Drosophila mushroom body (MB), an area involved in olfactory learning and memory. Using the highly sensitive calcium indicator, GCaMP3, we simultaneously monitored the activity of >100 MB neurons in vivo (about 5% of the total population). The MB is thought to encode odors in sparse patterns of activity, but the code has yet to be explored either on a population level or with a wide variety of stimuli. We therefore imaged responses to odors chosen to evaluate the robustness of sparse representations. Different odors activated distinct patterns of MB neurons, however we found no evidence for spatial organization of neurons by either response probability or odor tuning within the cell body layer. The degree of sparseness was consistent across a wide range of stimuli, from monomolecular odors to artificial blends and even complex natural smells. Sparseness was mainly invariant across concentrations, largely because of the influence of recent odor experience. Finally, in contrast to sensory processing in other systems, no response features distinguished natural stimuli from monomolecular odors. Our results indicate that the fundamental feature of odor processing in the MB is to create sparse stimulus representations in a format that facilitates arbitrary associations between odor and punishment or reward. PMID:21849538

  9. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and dispersion of the impulsive feature frequency band, which poses a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in the feature extraction of fault information, the theory also suffers inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which goes beyond simple sparsity by introducing more intrinsic structures of the feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem is formulated that can be solved by the block coordinate descent (BCD) method. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to state-of-the-art methods in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.
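
    The sketch below illustrates only the grouping-plus-local-PCA idea behind such nonlocal models, not the full NLSM/BCD algorithm: similar signal fragments are collected by nearest-neighbour search and a local PCA basis is learned for the group. The fragment length, group size, and synthetic signal are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      signal = np.sin(np.linspace(0, 60, 4096)) + 0.3 * rng.standard_normal(4096)

      frag_len, step, k = 64, 16, 12
      frags = np.stack([signal[i:i + frag_len]
                        for i in range(0, len(signal) - frag_len, step)])

      ref = 0                                              # index of a reference fragment
      d2 = ((frags - frags[ref]) ** 2).sum(axis=1)         # squared distances to the reference
      group = frags[np.argsort(d2)[:k]]                    # k most similar fragments (nonlocal group)

      centered = group - group.mean(axis=0)
      _, _, Vt = np.linalg.svd(centered, full_matrices=False)
      local_basis = Vt[:4]                                 # PCA dictionary learned for this group
      codes = centered @ local_basis.T                     # compact codes of the group in that basis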

  10. Generation of Rayleigh-wave dispersion images from multichannel seismic data using sparse signal reconstruction

    NASA Astrophysics Data System (ADS)

    Mun, Songchol; Bao, Yuequan; Li, Hui

    2015-11-01

    The accurate estimation of dispersion curves has been a key issue for ensuring high quality in geophysical surface wave exploration. Many studies have been carried out on the generation of a high-resolution dispersion image from array measurements. In this study, the sparse signal representation and reconstruction techniques are employed to obtain the high resolution Rayleigh-wave dispersion image from seismic wave data. First, a sparse representation of the seismic wave data is introduced, in which the signal is assumed to be sparse in terms of wave speed. Then, the sparse signal is reconstructed by optimization using l1-norm regularization, which gives the signal amplitude spectrum as a function of wave speed. A dispersion image in the f-v domain is generated by arranging the sparse spectra for all frequency slices in the frequency range. Finally, to show the efficiency of the proposed approach, the Surfbar-2 field test data, acquired by B. Luke and colleagues at the University of Nevada Las Vegas, are analysed. By comparing the real-field dispersion image with the results from other methods, the high mode-resolving ability of the proposed approach is demonstrated, particularly for a case with strongly coherent modes.
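
    As a minimal illustration of the l1-regularized recovery of a sparse amplitude spectrum for one frequency slice, the sketch below uses a simple iterative soft-thresholding (ISTA) solver. The dictionary here is a random stand-in rather than a physical steering matrix over wave speeds, and the regularization weight, sizes, and data are toy choices, not the Surfbar-2 configuration.

      import numpy as np

      def ista(A, y, lam, n_iter=500):
          """Iterative soft-thresholding for min 0.5*||y - A x||^2 + lam * ||x||_1."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = x + A.T @ (y - A @ x) / L      # gradient step
              x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
          return x

      rng = np.random.default_rng(5)
      A = rng.standard_normal((24, 200))         # sensors x candidate wave speeds (stand-in)
      A /= np.linalg.norm(A, axis=0)
      x_true = np.zeros(200)
      x_true[[40, 95]] = [1.0, 0.6]              # two dominant modes
      y = A @ x_true + 0.01 * rng.standard_normal(24)

      spectrum = ista(A, y, lam=0.02)            # sparse amplitude spectrum over wave speed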

  11. The Real-Valued Sparse Direction of Arrival (DOA) Estimation Based on the Khatri-Rao Product.

    PubMed

    Chen, Tao; Wu, Huanxin; Zhao, Zhongkai

    2016-01-01

    Estimating the direction of arrival (DOA) of a sparse signal from the array covariance matrix requires complex-valued operations, which lead to a heavy calculation burden, and the multiple measurement vectors (MMV) model is difficult to solve. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called the L₁-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed to a real-valued matrix via a unitary transformation so that a real-valued sparse model is achieved. The real-valued sparse model is vectorized for transforming to a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the KR product's property. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm. PMID:27187409

  12. The Real-Valued Sparse Direction of Arrival (DOA) Estimation Based on the Khatri-Rao Product

    PubMed Central

    Chen, Tao; Wu, Huanxin; Zhao, Zhongkai

    2016-01-01

    Estimating the direction of arrival (DOA) of a sparse signal from the array covariance matrix requires complex-valued operations, which lead to a heavy calculation burden, and the multiple measurement vectors (MMV) model is difficult to solve. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called the L1-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed to a real-valued matrix via a unitary transformation so that a real-valued sparse model is achieved. The real-valued sparse model is vectorized for transforming to a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the KR product’s property. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm. PMID:27187409

  13. Sparse Coding for Alpha Matting

    NASA Astrophysics Data System (ADS)

    Johnson, Jubin; Varnousfaderani, Ehsan Shahrian; Cholakkal, Hisham; Rajan, Deepu

    2016-07-01

    Existing color sampling based alpha matting methods use the compositing equation to estimate alpha at a pixel from pairs of foreground (F) and background (B) samples. The quality of the matte depends on the selected (F,B) pairs. In this paper, the matting problem is reinterpreted as a sparse coding of pixel features, wherein the sum of the codes gives the estimate of the alpha matte from a set of unpaired F and B samples. A non-parametric probabilistic segmentation provides a certainty measure on the pixel belonging to foreground or background, based on which a dictionary is formed for use in sparse coding. By removing the restriction to conform to (F,B) pairs, this method allows for better alpha estimation from multiple F and B samples. The same framework is extended to videos, where the requirement of temporal coherence is handled effectively. Here, the dictionary is formed by samples from multiple frames. A multi-frame graph model, as opposed to a single image as for image matting, is proposed that can be solved efficiently in closed form. Quantitative and qualitative evaluations on a benchmark dataset are provided to show that the proposed method outperforms current state-of-the-art in image and video matting.

  14. Sparseness of vowel category structure: Evidence from English dialect comparison

    PubMed Central

    Scharinger, Mathias; Idsardi, William J.

    2014-01-01

    Current models of speech perception tend to emphasize either fine-grained acoustic properties or coarse-grained abstract characteristics of speech sounds. We argue for a particular kind of 'sparse' vowel representation and provide new evidence that these representations account for the successful access of the corresponding categories. In an auditory semantic priming experiment, American English listeners made lexical decisions on targets (e.g. load) preceded by semantically related primes (e.g. pack). Changes of the prime vowel that crossed a vowel-category boundary (e.g. peck) were not treated as a tolerable variation, as assessed by a lack of priming, although the phonetic categories of the two different vowels considerably overlap in American English. Compared to the outcome of the same experiment with New Zealand English listeners, where such prime variations were tolerated, our experiment supports the view that phonological representations are important in guiding the mapping process from the acoustic signal to an abstract mental representation. Our findings are discussed with regard to current models of speech perception and recent findings from brain imaging research. PMID:24653528

  15. Complete Gabor transformation for signal representation.

    PubMed

    Yao, J

    1993-01-01

    Properties of the Gabor transformation used for image representation are discussed. The properties can be expressed in matrix notation, and the complete Gabor coefficients can be found by multiplying the inverse of the Gabor (1946) matrix and the signal vector. The Gabor matrix can be decomposed into the product of a sparse constant complex matrix and another sparse matrix that depends only on the window function. A fast algorithm is suggested to compute the inverse of the window function matrix, enabling discrete signals to be transformed into generalized nonorthogonal Gabor representations efficiently. A comparison is made between this method and the analytical method. The relation between the window function matrix and the biorthogonal functions is demonstrated. A numerical computation method for the biorthogonal functions is proposed. PMID:18296205

  16. Parallel sparse and dense information coding streams in the electrosensory midbrain.

    PubMed

    Sproule, Michael K J; Metzen, Michael G; Chacron, Maurice J

    2015-10-21

    Efficient processing of incoming sensory information is critical for an organism's survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  17. Dimensionality reduction of hyperspectral images based on sparse discriminant manifold embedding

    NASA Astrophysics Data System (ADS)

    Huang, Hong; Luo, Fulin; Liu, Jiamin; Yang, Yaqiong

    2015-08-01

    Sparse manifold clustering and embedding (SMCE) adaptively selects neighbor points from the same manifold and approximately spans a low-dimensional affine subspace, but it does not explicitly give a projection matrix and encounters the out-of-sample problem. To overcome this drawback, we propose a new dimensionality reduction method, called sparse manifold embedding (SME), based on graph embedding and sparse representation for hyperspectral image (HSI). It utilizes the sparse coefficients of affine subspace to construct a similarity graph and preserves this sparse similarity in embedding space. Furthermore, we try to make full use of the prior label information to design a novel supervised learning method termed sparse discriminant manifold embedding (SDME). SDME not only inherits the merits of the sparsity property of affine subspace but also boosts the compactness of intra-manifold, which achieves discriminating features and further improves the classification performance of HSI. Experiments on two real hyperspectral data sets (Indian Pines and PaviaU) show the benefits of the proposed SME and SDME methods.

  18. Precise Feature Based Time Scales and Frequency Decorrelation Lead to a Sparse Auditory Code

    PubMed Central

    Chen, Chen; Read, Heather L.; Escabí, Monty A.

    2012-01-01

    Sparse redundancy reducing codes have been proposed as efficient strategies for representing sensory stimuli. A prevailing hypothesis suggests that sensory representations shift from dense redundant codes in the periphery to selective sparse codes in cortex. We propose an alternative framework where sparseness and redundancy depend on sensory integration time scales and demonstrate that the central nucleus of the inferior colliculus (ICC) of cats encodes sound features by precise sparse spike trains. Direct comparisons with auditory cortical neurons demonstrate that ICC responses were sparse and uncorrelated as long as the spike train time scales were matched to the sensory integration time scales relevant to ICC neurons. Intriguingly, correlated spiking in the ICC was substantially lower than predicted by linear or nonlinear models and strictly observed for neurons with best frequencies within a “critical band,” the hallmark of perceptual frequency resolution in mammals. This is consistent with a sparse asynchronous code throughout much of the ICC and a complementary correlation code within a critical band that may allow grouping of perceptually relevant cues. PMID:22723685

  19. Parallel sparse and dense information coding streams in the electrosensory midbrain

    PubMed Central

    Sproule, Michael K.J.; Metzen, Michael G.; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory information is critical for an organism’s survival. It has been widely observed across systems and species that the representation of sensory information changes across successive brain areas. Indeed, peripheral sensory neurons tend to respond densely to a broad range of sensory stimuli while more central neurons tend to instead respond sparsely to a narrow range of stimuli. Such a transition might be advantageous as sparse neural codes are thought to be metabolically efficient and optimize coding efficiency. Here we investigated whether the neural code transitions from dense to sparse within the midbrain Torus semicircularis (TS) of weakly electric fish. Confirming previous results, we found both dense and sparse coding neurons. However, subsequent histological classification revealed that most dense neurons projected to higher brain areas. Our results thus provide strong evidence against the hypothesis that the neural code transitions from dense to sparse in the electrosensory system. Rather, they support the alternative hypothesis that higher brain areas receive parallel streams of dense and sparse coded information from the electrosensory midbrain. We discuss the implications and possible advantages of such a coding strategy and argue that it is a general feature of sensory processing. PMID:26375927

  20. Sparse and stable Markowitz portfolios

    PubMed Central

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-01-01

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
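
    A heavily simplified sketch of the l1-penalized regression idea follows, using scikit-learn's Lasso as an off-the-shelf solver. The budget and target-return constraints of the full formulation are dropped (weights are only crudely rescaled afterwards), the returns are synthetic, and the penalty weight is chosen relative to the data purely for illustration.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(6)
      T, N = 250, 40                                   # trading days x assets (toy values)
      R = 0.001 + 0.02 * rng.standard_normal((T, N))   # synthetic daily asset returns
      y = 0.001 * np.ones(T)                           # desired constant portfolio return

      # Smallest penalty that zeroes all weights; scale down to obtain a sparse portfolio.
      alpha_max = np.abs(R.T @ y).max() / T
      lasso = Lasso(alpha=0.2 * alpha_max, fit_intercept=False, max_iter=100_000)
      lasso.fit(R, y)

      w = lasso.coef_
      if abs(w.sum()) > 1e-12:
          w = w / w.sum()                              # crude rescaling toward a full budget
      print(f"active positions: {(np.abs(w) > 1e-10).sum()} of {N}")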

  1. A Data Type for Efficient Representation of Other Data Types

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    A self-organizing, monomorphic data type denoted a sequence has been conceived to address certain concerns that arise in programming parallel computers. A sequence in the present sense can be regarded abstractly as a vector, set, bag, queue, or other construct. Heretofore, in programming a parallel computer, it has been necessary for the programmer to state explicitly, at the outset, what parts of the program and the underlying data structures must be represented in parallel form. Not only is this requirement not optimal from the perspective of implementation; it entails an additional requirement that the programmer have intimate understanding of the underlying parallel structure. The present sequence data type overcomes both the implementation and parallel structure obstacles. In so doing, the sequence data type provides unified means by which the programmer can represent a data structure for natural and automatic decomposition to a parallel computing architecture. Sequences exhibit the behavioral and structural characteristics of vectors, but the underlying representations are automatically synthesized from combinations of programmers' advice and execution-use metrics. Sequences can vary bidirectionally between sparseness and density, making them excellent choices for many kinds of algorithms. The novelty and benefit of this behavior lie in the fact that it can relieve programmers of the details of implementations. The creation of a sequence enables decoupling of a conceptual representation from an implementation. The underlying representation of a sequence is a hybrid of representations composed of vectors, linked lists, connected blocks, and hash tables. The internal structure of a sequence can automatically change from time to time on the basis of how it is being used. Those portions of a sequence where elements have not been added or removed can be as efficient as vectors. As elements are inserted and removed in a given portion, then different methods are

  2. Approximate Orthogonal Sparse Embedding for Dimensionality Reduction.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Yang, Jian; Zhang, David

    2016-04-01

    Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing the sparsity or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between the ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than the existing subspace learning algorithm, particularly in the cases of small sample sizes. PMID:25955995

  3. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
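
    The sketch below illustrates the column-wise sparse approximate inverse idea in its simplest form: for each column j, minimize ||A m_j - e_j|| over a fixed sparsity pattern (here the pattern of the corresponding column of A), which reduces to a small dense least-squares problem per column. It solves those small problems directly rather than by the sparse-mode iteration described above, and the test matrix is a toy choice.

      import numpy as np
      import scipy.sparse as sp

      def spai_columns(A):
          A = sp.csc_matrix(A)
          n = A.shape[0]
          cols = []
          for j in range(n):
              J = A[:, j].indices                       # allowed nonzeros: pattern of column j of A
              I = np.unique(A[:, J].indices)            # rows touched by those columns
              Asub = A[I, :][:, J].toarray()            # small dense subproblem
              e = (I == j).astype(float)                # restriction of the unit vector e_j
              mj, *_ = np.linalg.lstsq(Asub, e, rcond=None)
              col = np.zeros(n)
              col[J] = mj
              cols.append(col)
          return sp.csc_matrix(np.column_stack(cols))

      A = sp.diags([-1, 3, -1], [-1, 0, 1], shape=(50, 50), format="csc")
      M = spai_columns(A)
      print(np.linalg.norm((A @ M - sp.eye(50)).toarray()))   # how close A*M is to the identity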

  4. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
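
    For a quick modern illustration of the same task, the sketch below computes a few dominant singular triplets of a large sparse matrix with SciPy's Lanczos-type svds routine. The matrix is random, standing in for a real term-document or Jacobian matrix.

      import scipy.sparse as sp
      from scipy.sparse.linalg import svds

      A = sp.random(20_000, 5_000, density=5e-4, format="csr", random_state=0)
      U, s, Vt = svds(A, k=6)          # six largest singular values and their vectors
      print(sorted(s, reverse=True))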

  5. Towards robust and effective shape modeling: sparse shape composition.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2012-01-01

    Organ shape plays an important role in various clinical practices, e.g., diagnosis, surgical planning and treatment evaluation. It is usually derived from low level appearance cues in medical images. However, due to diseases and imaging artifacts, low level appearance cues might be weak or misleading. In this situation, shape priors become critical to infer and refine the shape derived by image appearances. Effective modeling of shape priors is challenging because: (1) shape variation is complex and cannot always be modeled by a parametric probability distribution; (2) a shape instance derived from image appearance cues (input shape) may have gross errors; and (3) local details of the input shape are difficult to preserve if they are not statistically significant in the training data. In this paper we propose a novel Sparse Shape Composition model (SSC) to deal with these three challenges in a unified framework. In our method, a sparse set of shapes in the shape repository is selected and composed together to infer/refine an input shape. The a priori information is thus implicitly incorporated on-the-fly. Our model leverages two sparsity observations of the input shape instance: (1) the input shape can be approximately represented by a sparse linear combination of shapes in the shape repository; (2) parts of the input shape may contain gross errors but such errors are sparse. Our model is formulated as a sparse learning problem. Using L1 norm relaxation, it can be solved by an efficient expectation-maximization (EM) type of framework. Our method is extensively validated on two medical applications, 2D lung localization in X-ray images and 3D liver segmentation in low-dose CT scans. Compared to state-of-the-art methods, our model exhibits better performance in both studies. PMID:21963296

  6. Representation of discrete Steklov-Poincare operator arising in domain decomposition methods in wavelet basis

    SciTech Connect

    Jemcov, A.; Matovic, M.D.

    1996-12-31

    This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator by subdomain node groups and then block-eliminate the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.

  7. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  8. Sparse-view ultrasound diffraction tomography using compressed sensing with nonuniform FFT.

    PubMed

    Hua, Shaoyan; Ding, Mingyue; Yuchi, Ming

    2014-01-01

    Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method for sparse-view UDT based on the compressed sensing framework. Due to the piecewise-uniform characteristics of anatomical structures, total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is solved iteratively by conjugate gradient with a nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly reconstructed with only 16 views. Compared to interpolation and the multiband method, the proposed method provides higher resolution and fewer artifacts with the same number of views. The robustness to noise and the computational complexity are also discussed. PMID:24868241

  9. Improved Image Registration by Sparse Patch-Based Deformation Estimation

    PubMed Central

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2014-01-01

    Despite intensive efforts over decades, deformable image registration remains a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation towards the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) For each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images where each dictionary atom consists of the image intensity patch as well as their respective local deformations; (3) A small set of training image patches in the coupled dictionary are selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients. (4) We
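    A hedged sketch of steps (3)-(4) as described above: sparse-code an intensity patch over the appearance half of a coupled dictionary (here with orthogonal matching pursuit) and reuse the same coefficients on the paired deformation atoms. All names, dimensions, and the choice of OMP are illustrative assumptions, not the authors' implementation.

    ```python
    # Predict an initial deformation at one key point from a coupled
    # appearance-deformation dictionary via sparse coding.
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def predict_deformation(patch, D_appearance, D_deformation, n_nonzero=5):
        # patch: (p,) intensity patch at a key point
        # D_appearance: (p, m) training intensity patches (dictionary atoms)
        # D_deformation: (d, m) the local deformations paired with those atoms
        coef = orthogonal_mp(D_appearance, patch, n_nonzero_coefs=n_nonzero)
        return D_deformation @ coef   # propagated initial deformation, shape (d,)
    ```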

  10. Sparse Bayesian infinite factor models

    PubMed Central

    Bhattacharya, A.; Dunson, D. B.

    2011-01-01

    We focus on sparse modelling of high-dimensional covariance matrices using Bayesian latent factor models. We propose a multiplicative gamma process shrinkage prior on the factor loadings which allows introduction of infinitely many factors, with the loadings increasingly shrunk towards zero as the column index increases. We use our prior on a parameter-expanded loading matrix to avoid the order dependence typical in factor analysis models and develop an efficient Gibbs sampler that scales well as data dimensionality increases. The gain in efficiency is achieved by the joint conjugacy property of the proposed prior, which allows block updating of the loadings matrix. We propose an adaptive Gibbs sampler for automatically truncating the infinite loading matrix through selection of the number of important factors. Theoretical results are provided on the support of the prior and truncation approximation bounds. A fast algorithm is proposed to produce approximate Bayes estimates. Latent factor regression methods are developed for prediction and variable selection in applications with high-dimensional correlated predictors. Operating characteristics are assessed through simulation studies, and the approach is applied to predict survival times from gene expression data. PMID:23049129
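    For concreteness, a small sketch of drawing a loadings matrix from a multiplicative gamma process shrinkage prior of the kind described above: local precisions are gamma distributed and column precisions are cumulative products of gamma increments, so loadings are increasingly shrunk toward zero as the column index grows. The hyperparameter values are illustrative choices, not the paper's recommendations.

    ```python
    # Draw a p x H loadings matrix from a multiplicative gamma process
    # shrinkage prior (illustrative hyperparameters a1, a2, nu).
    import numpy as np

    def draw_mgp_loadings(p, H, a1=2.0, a2=3.0, nu=3.0, rng=None):
        rng = rng or np.random.default_rng()
        delta = np.concatenate([rng.gamma(a1, 1.0, size=1),
                                rng.gamma(a2, 1.0, size=H - 1)])
        tau = np.cumprod(delta)                        # column precisions grow with index
        phi = rng.gamma(nu / 2, 2 / nu, size=(p, H))   # local (element-wise) precisions
        # loadings shrink toward zero as the column index increases
        return rng.normal(0.0, 1.0 / np.sqrt(phi * tau), size=(p, H))
    ```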

  11. Evaluation of protein-protein docking model structures using all-atom molecular dynamics simulations combined with the solution theory in the energy representation

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Guo, Hao; Sakuraba, Shun; Matubayasi, Nobuyuki; Kitao, Akio

    2012-12-01

    We propose a method to evaluate binding free energy differences among distinct protein-protein complex model structures through all-atom molecular dynamics simulations in explicit water using the solution theory in the energy representation. Complex model structures are generated from a pair of monomeric structures using the rigid-body docking program ZDOCK. After structure refinement by side chain optimization and all-atom molecular dynamics simulations in explicit water, complex models are evaluated based on the sum of their conformational and solvation free energies, the latter calculated from the energy distribution functions obtained from relatively short molecular dynamics simulations of the complex in water and of pure water based on the solution theory in the energy representation. We examined protein-protein complex model structures of two protein-protein complex systems, bovine trypsin/CMTI-1 squash inhibitor (PDB ID: 1PPE) and RNase SA/barstar (PDB ID: 1AY7), for which both complex and monomer structures were determined experimentally. For each system, we calculated the energies for the crystal complex structure and twelve generated model structures including the model most similar to the crystal structure and very different from it. In both systems, the sum of the conformational and solvation free energies tended to be lower for the structure similar to the crystal. We concluded that our energy calculation method is useful for selecting low energy complex models similar to the crystal structure from among a set of generated models.

  12. Nonlinear model reduction for dynamical systems using sparse sensor locations from learned libraries

    NASA Astrophysics Data System (ADS)

    Sargsyan, Syuzanna; Brunton, Steven L.; Kutz, J. Nathan

    2015-09-01

    We demonstrate the synthesis of sparse sampling and dimensionality reduction to characterize and model nonlinear dynamical systems over a range of bifurcation parameters. First, we construct modal libraries using the classical proper orthogonal decomposition in order to expose the dominant low-rank coherent structures. Here, libraries of the nonlinear terms are also constructed in order to take advantage of the discrete empirical interpolation method and projection that allows for the approximation of nonlinear terms from a sparse number of grid points. The selected grid points are shown to be effective sensing and measurement locations for characterizing the underlying dynamics, stability, and bifurcations of nonlinear dynamical systems. The use of empirical interpolation points and sparse representation facilitates a family of local reduced-order models for each physical regime, rather than a higher-order global model, which has the benefit of physical interpretability of energy transfer between coherent structures. The method advocated also allows for orders-of-magnitude improvement in computational speed and memory requirements. To illustrate the method, the discrete interpolation points and nonlinear modal libraries are used for sparse representation in order to classify and reconstruct the dynamic bifurcation regimes in the complex Ginzburg-Landau equation. It is also shown that point measurements of the nonlinearity are more effective than linear measurements when sensor noise is present.
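    The sketch below illustrates the two standard building blocks referred to above: a POD basis obtained from snapshot data via the SVD, and the textbook DEIM greedy selection of interpolation (sensor) points from a nonlinear-term basis. It is a generic implementation, not the authors' code.

    ```python
    # POD basis from snapshots and DEIM point selection on a nonlinear-term basis.
    import numpy as np

    def pod_basis(snapshots, r):
        # snapshots: (n_states, n_snapshots); keep the r dominant left singular vectors
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r]

    def deim_points(U):
        # U: (n, m) basis of the nonlinear term; returns m interpolation indices
        n, m = U.shape
        p = [int(np.argmax(np.abs(U[:, 0])))]
        for l in range(1, m):
            c = np.linalg.solve(U[np.ix_(p, np.arange(l))], U[p, l])
            r = U[:, l] - U[:, :l] @ c
            p.append(int(np.argmax(np.abs(r))))
        return np.array(p)   # sparse sensor / interpolation locations
    ```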

  13. Visual Tracking via Coarse and Fine Structural Local Sparse Appearance Models.

    PubMed

    Jia, Xu; Lu, Huchuan; Yang, Ming-Hsuan

    2016-10-01

    Sparse representation has been successfully applied to visual tracking by finding the best candidate with a minimal reconstruction error using target templates. However, most sparse representation-based tracking methods only consider holistic rather than local appearance to discriminate between target and background regions, and hence may not perform well when target objects are heavily occluded. In this paper, we develop a simple yet robust tracking algorithm based on a coarse and fine structural local sparse appearance model. The proposed method exploits both partial and structural information of a target object based on sparse coding using the dictionary composed of patches from multiple target templates. The likelihood obtained by averaging and pooling operations exploits consistent appearance of object parts, thereby helping not only locate targets accurately but also handle partial occlusion. To update templates more accurately without introducing occluding regions, we introduce an occlusion detection scheme to account for pixels belonging to the target objects. The proposed method is evaluated on a large benchmark data set with three evaluation metrics. Experimental results demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. PMID:27448350

  14. Data-driven and calibration-free Lamb wave source localization with sparse sensor arrays.

    PubMed

    Harley, Joel B; Moura, José M F

    2015-08-01

    Most Lamb wave localization techniques require that we know the wave's velocity characteristics; yet, in many practical scenarios, velocity estimates can be challenging to acquire, are unavailable, or are unreliable because of the complexity of Lamb waves. As a result, there is a significant need for new methods that can reduce a system's reliance on a priori velocity information. This paper addresses this challenge through two novel source localization methods designed for sparse sensor arrays in isotropic media. Both methods exploit the fundamental sparse structure of a Lamb wave's frequency-wavenumber representation. The first method uses sparse recovery techniques to extract velocities from calibration data. The second method uses kurtosis and the support earth mover's distance to measure the sparseness of a Lamb wave's approximate frequency-wavenumber representation. These measures are then used to locate acoustic sources with no prior calibration data. We experimentally study each method with a collection of acoustic emission data measured from a 1.22 m by 1.22 m isotropic aluminum plate. We show that both methods can achieve less than 1 cm localization error and have less systematic error than traditional time-of-arrival localization methods. PMID:26276960

  15. Nonlinear model reduction for dynamical systems using sparse sensor locations from learned libraries.

    PubMed

    Sargsyan, Syuzanna; Brunton, Steven L; Kutz, J Nathan

    2015-09-01

    We demonstrate the synthesis of sparse sampling and dimensionality reduction to characterize and model nonlinear dynamical systems over a range of bifurcation parameters. First, we construct modal libraries using the classical proper orthogonal decomposition in order to expose the dominant low-rank coherent structures. Here, libraries of the nonlinear terms are also constructed in order to take advantage of the discrete empirical interpolation method and projection that allows for the approximation of nonlinear terms from a sparse number of grid points. The selected grid points are shown to be effective sensing and measurement locations for characterizing the underlying dynamics, stability, and bifurcations of nonlinear dynamical systems. The use of empirical interpolation points and sparse representation facilitates a family of local reduced-order models for each physical regime, rather than a higher-order global model, which has the benefit of physical interpretability of energy transfer between coherent structures. The method advocated also allows for orders-of-magnitude improvement in computational speed and memory requirements. To illustrate the method, the discrete interpolation points and nonlinear modal libraries are used for sparse representation in order to classify and reconstruct the dynamic bifurcation regimes in the complex Ginzburg-Landau equation. It is also shown that point measurements of the nonlinearity are more effective than linear measurements when sensor noise is present. PMID:26465583

  16. A sparse Bayesian framework for conditioning uncertain geologic models to nonlinear flow measurements

    NASA Astrophysics Data System (ADS)

    Li, Lianlin; Jafarpour, Behnam

    2010-09-01

    We present a Bayesian framework for reconstructing hydraulic properties of rock formations from nonlinear dynamic flow data by imposing sparsity on the distribution of the parameters in a sparse transform basis through a Laplace prior distribution. Sparse representation of the subsurface flow properties in a compression transform basis (where a compact representation is often possible) lends itself to a natural regularization approach, i.e. sparsity regularization, which has recently been exploited in solving ill-posed subsurface flow inverse problems. The Bayesian estimation approach presented here allows for a probabilistic treatment of the sparse reconstruction problem and has its roots in machine learning and the recently introduced relevance vector machine algorithm for linear inverse problems. We formulate the Bayesian sparse reconstruction algorithm and apply it to nonlinear subsurface inverse problems where solution sparsity in a discrete cosine transform is assumed. The probabilistic description of solution sparsity, as opposed to deterministic regularization, allows for quantification of the estimation uncertainty and avoids the need for specifying a regularization parameter. Several numerical experiments from a multiphase subsurface flow application are presented to illustrate the performance of the proposed method and compare it with the regular Bayesian estimation approach that does not impose solution sparsity. While the examples are derived from subsurface flow modeling, the proposed framework can be applied to nonlinear inverse problems in other imaging applications, including geophysical and medical imaging and electromagnetic inverse problems.
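    As a small illustration of the premise that subsurface property fields are compressible in a discrete cosine transform, the snippet below keeps only the largest few percent of DCT coefficients of a 2-D field and reconstructs it; it demonstrates the sparse representation only, not the Bayesian inversion itself.

    ```python
    # Keep only the largest DCT coefficients of a 2-D field and reconstruct.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_compress(field, keep=0.05):
        c = dctn(field, norm='ortho')
        thresh = np.quantile(np.abs(c), 1 - keep)          # keep the top `keep` fraction
        c_sparse = np.where(np.abs(c) >= thresh, c, 0.0)
        return idctn(c_sparse, norm='ortho'), np.count_nonzero(c_sparse)
    ```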

  17. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia,Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome scalability bottleneck of direct methods, in both time and memory. These include parallelizing symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers.

  18. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  19. Imaging correlography with sparse collecting apertures

    NASA Astrophysics Data System (ADS)

    Idell, Paul S.; Fienup, J. R.

    1987-01-01

    This paper investigates the possibility of implementing an imaging correlography system with sparse arrays of intensity detectors. The theory underlying the image formation process for imaging correlography is reviewed, emphasizing the spatial filtering effects that sparse collecting apertures have on the reconstructed imagery. Image recovery with sparse arrays of intensity detectors through the use of computer experiments in which laser speckle measurements are digitally simulated is then demonstrated. It is shown that the quality of imagery reconstructed using this technique is visibly enhanced when appropriate filtering techniques are applied. A performance tradeoff between collecting array redundancy and the number of speckle pattern measurements is briefly discussed.

  20. Separation of seismic blended data by sparse inversion over dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhou, Yanhui; Chen, Wenchao; Gao, Jinghuai

    2014-07-01

    Recent developments in blended acquisition call for new procedures to process blended seismic measurements. Presently, deblending and reconstructing unblended data, followed by conventional processing, is the most practical processing workflow. In this paper we study seismic deblending by advanced sparse inversion with a learned dictionary. To make our method more effective, hybrid acquisition and time-dithering sequential shooting are introduced so that clean single-shot records can be used to train the dictionary to favor a sparser representation of the data to be recovered. Deblending and dictionary learning with l1-norm based sparsity are combined to construct the corresponding problem with respect to unknown recovery, dictionary, and coefficient sets. A two-step optimization approach is introduced. In the dictionary learning step, clean single-shot data are selected as training data to learn the dictionary. For deblending, we fix the dictionary and employ an alternating scheme to update the recovery and the coefficients separately. Synthetic and real field data were used to verify the performance of our method. The results can serve as a significant reference in designing high-efficiency, low-cost blended acquisition.
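    A rough sketch of the two-step idea on patch data: learn a dictionary from clean single-shot patches, then sparse-code blended patches over that fixed dictionary. Patch extraction, blending specifics, and the scikit-learn solvers used here stand in for the paper's l1-based alternating scheme.

    ```python
    # Step 1: learn a dictionary from clean single-shot patches.
    # Step 2: sparse-code blended patches over the fixed dictionary.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

    def learn_dictionary(clean_patches, n_atoms=64, alpha=1.0):
        # clean_patches: (n_patches, patch_len) extracted from unblended records
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                         transform_algorithm='omp')
        dl.fit(clean_patches)
        return dl.components_                      # (n_atoms, patch_len)

    def deblend_patches(blended_patches, dictionary, n_nonzero=8):
        coder = SparseCoder(dictionary=dictionary,
                            transform_algorithm='omp',
                            transform_n_nonzero_coefs=n_nonzero)
        codes = coder.transform(blended_patches)
        return codes @ dictionary                  # reconstructed (deblended) patches
    ```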

  1. Interpretable exemplar-based shape classification using constrained sparse linear models

    NASA Astrophysics Data System (ADS)

    Sigurdsson, Gunnar A.; Yang, Zhen; Tran, Trac D.; Prince, Jerry L.

    2015-03-01

    Many types of diseases manifest themselves as observable changes in the shape of the affected organs. Using shape classification, we can look for signs of disease and discover relationships between diseases. We formulate the problem of shape classification in a holistic framework that utilizes a lossless scalar field representation and a non-parametric classification based on sparse recovery. This framework generalizes over certain classes of unseen shapes while using the full information of the shape, bypassing feature extraction. The output of the method is the class whose combination of exemplars most closely approximates the shape, and furthermore, the algorithm returns the most similar exemplars along with their similarity to the shape, which makes the result simple to interpret. Our results show that the method offers accurate classification between three cerebellar diseases and controls in a database of cerebellar ataxia patients. For reproducible comparison, promising results are presented on publicly available 2D datasets, including the ETH-80 dataset where the method achieves 88.4% classification accuracy.
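    A minimal sparse-representation-classification sketch in the spirit of the exemplar-based framework above: fit the test shape against all training exemplars with an l1 penalty and assign the class whose exemplars give the smallest reconstruction residual, returning the residuals for interpretability. The dictionary layout and regularization weight are assumptions, not the paper's exact constrained formulation.

    ```python
    # Classify a shape vector x by class-wise reconstruction residuals of a sparse fit.
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(x, D, labels, lam=0.01):
        # D: (d, n) columns are training exemplars; labels: (n,) class of each column
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(D, x)
        coef = model.coef_
        residuals = {}
        for c in np.unique(labels):
            mask = labels == c
            residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
        best = min(residuals, key=residuals.get)
        return best, residuals        # predicted class and per-class residuals
    ```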

  2. Sparse ice: Geophysical, biological and Indigenous knowledge perspectives on a habitat for ice-associated fauna

    NASA Astrophysics Data System (ADS)

    Lee, O. A.; Eicken, H.; Weyapuk, W., Jr.; Adams, B.; Mohoney, A. R.

    2015-12-01

    The significance of highly dispersed, remnant Arctic sea ice as a platform for marine mammals and indigenous hunters in spring and summer may have increased disproportionately with changes in the ice cover. As dispersed remnant ice becomes more common in the future it will be increasingly important to understand its ecological role for upper trophic levels such as marine mammals and its role for supporting primary productivity of ice-associated algae. Potential sparse ice habitat at sea ice concentrations below 15% is difficult to detect using remote sensing data alone. A combination of high resolution satellite imagery (including Synthetic Aperture Radar), data from the Barrow sea ice radar, and local observations from indigenous sea ice experts was used to detect sparse sea ice in the Alaska Arctic. Traditional knowledge on sea ice use by marine mammals was used to delimit the scales where sparse ice could still be used as habitat for seals and walrus. Potential sparse ice habitat was quantified with respect to overall spatial extent, size of ice floes, and density of floes. Sparse ice persistence offshore did not prevent the occurrence of large coastal walrus haul outs, but the lack of sparse ice and early sea ice retreat coincided with local observations of ringed seal pup mortality. Observations from indigenous hunters will continue to be an important source of information for validating remote sensing detections of sparse ice, and improving understanding of marine mammal adaptations to sea ice change.

  3. Social biases determine spatiotemporal sparseness of ciliate mating heuristics.

    PubMed

    Clark, Kevin B

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present

  4. Time-frequency signature sparse reconstruction using chirp dictionary

    NASA Astrophysics Data System (ADS)

    Nguyen, Yen T. H.; Amin, Moeness G.; Ghogho, Mounir; McLernon, Des

    2015-05-01

    This paper considers local sparse reconstruction of time-frequency signatures of windowed non-stationary radar returns. These signals can be considered instantaneously narrowband, so the local time-frequency behavior can be recovered accurately from incomplete observations. The typically employed sinusoidal dictionary induces competing requirements on the window length: it places conflicting demands on the number of measurements needed for exact recovery and on sparsity. In this paper, we use a chirp dictionary for each window position to determine the signal's instantaneous frequency laws. This approach can considerably mitigate the problems of the sinusoidal dictionary and enables the use of longer windows for accurate time-frequency representations. It also reduces the picket-fence effect by introducing a new parameter, the chirp rate α. Simulation examples are provided, demonstrating the superior performance of the local chirp dictionary over its sinusoidal counterpart.
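    A small sketch of constructing the kind of windowed chirp dictionary discussed above, with atoms parameterized by a start frequency and a chirp rate α; the grids, window length, and normalization are illustrative choices.

    ```python
    # Build a dictionary of unit-norm complex chirp atoms for one window position.
    import numpy as np

    def chirp_dictionary(win_len, freqs, alphas, fs=1.0):
        t = np.arange(win_len) / fs
        atoms = []
        for f in freqs:
            for a in alphas:
                atom = np.exp(2j * np.pi * (f * t + 0.5 * a * t ** 2))
                atoms.append(atom / np.linalg.norm(atom))
        return np.stack(atoms, axis=1)    # (win_len, n_freqs * n_alphas)

    D = chirp_dictionary(64,
                         freqs=np.linspace(0, 0.5, 32),
                         alphas=np.linspace(-2e-3, 2e-3, 9))
    ```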

  5. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data including data from global spaceborne active and passive sensors, regional ground based weather surveillance radars and local rain-gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains, (e.g., in wavelet space) which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain which allows exploiting the efficient linear estimation methodologies, while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capturing non-Gaussian singular structure of rainfall, reduced uncertainty of estimation and efficiency of computation are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.

  6. Sparse graph-based transduction for image classification

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Yang, Dan; Zhou, Jia; Huangfu, Lunwen; Zhang, Xiaohong

    2015-03-01

    Motivated by the remarkable successes of graph-based transduction (GT) and sparse representation (SR), we present a classifier named sparse graph-based classifier (SGC) for image classification. In SGC, SR is leveraged to measure the correlation (similarity) of every two samples and a graph is constructed for encoding these correlations. Then the Laplacian eigenmapping is adopted for deriving the graph Laplacian of the graph. Finally, SGC can be obtained by plugging the graph Laplacian into the conventional GT framework. In the image classification procedure, SGC utilizes the correlations which are encoded in the learned graph Laplacian, to infer the labels of unlabeled images. SGC inherits the merits of both GT and SR. Compared to SR, SGC improves the robustness and the discriminating power of GT. Compared to GT, SGC sufficiently exploits the whole data. Therefore, it alleviates the undercomplete dictionary issue suffered by SR. Four popular image databases are employed for evaluation. The results demonstrate that SGC can achieve a promising performance in comparison with the state-of-the-art classifiers, particularly in the small training sample size case and the noisy sample case.
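    An illustrative sketch of the graph-construction step described above: each sample is sparse-coded over the remaining samples, coefficient magnitudes become symmetric edge weights, and the unnormalized graph Laplacian is formed. The Lasso solver and parameters are placeholders, and the subsequent graph-transduction (label inference) step is omitted.

    ```python
    # Build a sparse-representation graph and its unnormalized Laplacian.
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_graph_laplacian(X, lam=0.05):
        # X: (n_samples, n_features)
        n = X.shape[0]
        W = np.zeros((n, n))
        for i in range(n):
            others = np.delete(np.arange(n), i)
            model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
            model.fit(X[others].T, X[i])           # code sample i over the other samples
            W[i, others] = np.abs(model.coef_)
        W = (W + W.T) / 2                          # symmetrize edge weights
        return np.diag(W.sum(axis=1)) - W          # graph Laplacian L = D - W
    ```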

  7. Framelet-Based Sparse Unmixing of Hyperspectral Images.

    PubMed

    Zhang, Guixu; Xu, Yingying; Fang, Faming

    2016-04-01

    Spectral unmixing aims at estimating the proportions (abundances) of pure spectrums (endmembers) in each mixed pixel of hyperspectral data. Recently, a semi-supervised approach, which takes the spectral library as prior knowledge, has been attracting much attention in unmixing. In this paper, we propose a new semi-supervised unmixing model, termed framelet-based sparse unmixing (FSU), which promotes the abundance sparsity in framelet domain and discriminates the approximation and detail components of hyperspectral data after framelet decomposition. Due to the advantages of the framelet representations, e.g., images have good sparse approximations in framelet domain, and most of the additive noises are included in the detail coefficients, the FSU model has a better antinoise capability, and accordingly leads to more desirable unmixing performance. The existence and uniqueness of the minimizer of the FSU model are then discussed, and the split Bregman algorithm and its convergence property are presented to obtain the minimal solution. Experimental results on both simulated data and real data demonstrate that the FSU model generally performs better than the compared methods. PMID:26849863

  8. Classification of Histology Sections via Multispectral Convolutional Sparse Coding*

    PubMed Central

    Zhou, Yin; Barner, Kenneth; Spellman, Paul

    2014-01-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749

  9. Visual recognition and inference using dynamic overcomplete sparse learning.

    PubMed

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter. PMID:17650062

  10. Sparse Geologic Dictionaries for Flexible and Low-Rank Subsurface Flow Model Calibration: Field Applications

    NASA Astrophysics Data System (ADS)

    Khaninezhad, M. R. M.; Jafarpour, B.

    2014-12-01

    Inference of spatially distributed reservoir and aquifer properties from scattered and spatially limited data poses a poorly constrained nonlinear inverse problem that can have many solutions. In particular, the uncertainty in the geologic continuity model can remarkably degrade the quality of fluid displacement predictions, hence, the efficiency of resource development plans. For model calibration, instead of estimating aquifer properties for each grid cell in the model, the sparse representation of the aquifer properties is estimated from nonlinear production data. The resulting calibration problem can be solved using recent developments in sparse signal processing, widely known as compressed sensing. This novel formulation leads to a sparse data inversion technique that effectively searches for relevant geologic patterns that can explain the available spatiotemporal data. We recently introduced a new model calibration framework by using sparse geologic dictionaries that are constructed from uncertain prior geologic models. Here, we first demonstrate the effectiveness of the proposed sparse geologic dictionaries for flexible and robust model calibration under prior geologic uncertainty. We illustrate the effectiveness of the proposed approach in using limited nonlinear production data to identify a consistent geologic scenario from a number of candidate scenarios, which is usually a challenging problem in geostatistical reservoir characterization. We then evaluate the feasibility of adopting this framework for field application. In particular, we present subsurface field model calibration applications in which sparse geologic dictionaries are learned from uncertain prior information on large-scale reservoir property descriptions. We consider two large-scale field case studies, the Brugges and the Norne field examples. We discuss the construction of geologic dictionaries for large-scale problems and present reduced-order methods to speed up the computational

  11. Sparse principal component analysis in cancer research

    PubMed Central

    Hsu, Ying-Lin; Huang, Po-Yu; Chen, Dung-Tsa

    2015-01-01

    A critical and challenging task in analyzing high-dimensional data in cancer research is how to reduce the dimension of the data and how to extract relevant features. Sparse principal component analysis (PCA) is a powerful statistical tool that can help reduce the data dimension and select important variables simultaneously. In this paper, we review several approaches for sparse PCA, including variance maximization (VM), reconstruction error minimization (REM), singular value decomposition (SVD), and probabilistic modeling (PM) approaches. A simulation study is conducted to compare PCA and the sparse PCAs. An example using a published gene signature in a lung cancer dataset is used to illustrate the potential application of sparse PCAs in cancer research. PMID:26719835
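    A brief sketch comparing ordinary PCA with a sparse PCA on the same matrix, in the spirit of the simulation study mentioned above; the data are random and the scikit-learn SparsePCA solver is only one of the several formulations the review covers.

    ```python
    # Compare the number of nonzero loadings from PCA and sparse PCA.
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 50))

    pca = PCA(n_components=5).fit(X)
    spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)

    # Sparse loadings have many exact zeros, which eases interpretation of
    # which variables (e.g., genes) drive each component.
    print(np.count_nonzero(pca.components_), np.count_nonzero(spca.components_))
    ```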

  12. A new sparse design method on phased array-based acoustic emission sensor for partial discharge detection

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Cheng, Shuyi; Lü, Fangcheng; Li, Yanqing

    2014-03-01

    The acoustic detection performance of a partial discharge (PD) ultrasonic sensor array can be improved by increasing the number of array elements. However, this also increases the complexity and cost of the PD detection system. Therefore, a sparse sensor with an optimized design can be chosen to ensure good acoustic performance. In this paper, first, a quantitative method is proposed for evaluating the acoustic performance of a square PD ultrasonic array sensor. Second, a sparse design method is presented that combines the evaluation method with the chaotic monkey algorithm. Third, an optimal sparse structure of a 3 × 3 square PD ultrasonic array sensor is derived. It is found that, under different sparseness and sparse structures, the main beam width of the directivity function shows little variation, while the sidelobe amplitude varies considerably. For a given sparseness, the acoustic performance under the optimal sparse structure is close to that of the full array. Finally, simulations based on the above method show that, for a given sparseness, the sensor with the optimal sparse structure exhibits superior positioning accuracy compared to a stochastic one. The sensor array structure may then be chosen according to the actual requirements of an engineering application.
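    A hedged sketch of the quantity behind the evaluation above: the far-field array factor (beampattern) of a set of active elements on a square grid, from which main-beam width and sidelobe amplitude can be read off. The element spacing, wavelength, and angular grids are illustrative.

    ```python
    # Far-field array factor of a sparse planar layout of isotropic elements.
    import numpy as np

    def array_factor(active, d, wavelength, thetas, phis):
        # active: (m, 2) integer grid positions of the active elements
        k = 2 * np.pi / wavelength
        xy = np.asarray(active) * d
        af = np.zeros((len(thetas), len(phis)), dtype=complex)
        for i, th in enumerate(thetas):
            for j, ph in enumerate(phis):
                u = np.sin(th) * np.array([np.cos(ph), np.sin(ph)])
                af[i, j] = np.exp(1j * k * xy @ u).sum()
        return np.abs(af) / len(xy)    # normalized beampattern magnitude
    ```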

  13. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  14. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

    Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far away from the peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d} producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which has not previously been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in linear time in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However, we show that our results can be generalized to arbitrarily sized dense blocks, and to many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.
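    For intuition about the substructure being sought, the sketch below greedily collects fully dense, nonoverlapping 2x2 blocks from a sparse matrix by a simple scan; it is not the paper's 2/3-approximation algorithm and carries no optimality guarantee.

    ```python
    # Greedy scan for fully dense, nonoverlapping 2x2 blocks in a sparse matrix.
    import numpy as np
    import scipy.sparse as sp

    def greedy_2x2_blocks(A):
        B = sp.csr_matrix(A, dtype=bool).toarray()   # nonzero pattern as a boolean grid
        used = np.zeros_like(B, dtype=bool)
        blocks = []
        rows, cols = B.shape
        for i in range(rows - 1):
            for j in range(cols - 1):
                if B[i:i+2, j:j+2].all() and not used[i:i+2, j:j+2].any():
                    blocks.append((i, j))            # top-left corner of a selected block
                    used[i:i+2, j:j+2] = True
        return blocks
    ```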

  15. Efficient image representations and features

    NASA Astrophysics Data System (ADS)

    Dorr, Michael; Vig, Eleonora; Barth, Erhardt

    2013-03-01

    Interdisciplinary research in human vision and electronic imaging has greatly contributed to the current state of the art in imaging technologies. Image compression and image quality are prominent examples and the progress made in these areas relies on a better understanding of what natural images are and how they are perceived by the human visual system. A key research question has been: given the (statistical) properties of natural images, what are the most efficient and perceptually relevant image representations, what are the most prominent and descriptive features of images and videos? We give an overview of how these topics have evolved over the 25 years of HVEI conferences and how they have influenced the current state of the art. There are a number of striking parallels between human vision and electronic imaging. The retina does lateral inhibition, one of the early coders was using a Laplacian pyramid; primary visual cortical areas have orientation- and frequency-selective neurons, the current JPEG standard defines similar wavelet transforms; the brain uses a sparse code, engineers are currently excited about sparse coding and compressed sensing. Some of this has indeed happened at the HVEI conferences and we would like to distill that.

  16. Sparse principal component analysis by choice of norm

    PubMed Central

    Luo, Ruiyan; Zhao, Hongyu

    2012-01-01

    Recent years have seen the developments of several methods for sparse principal component analysis due to its importance in the analysis of high dimensional data. Despite the demonstration of their usefulness in practical applications, they are limited in terms of lack of orthogonality in the loadings (coefficients) of different principal components, the existence of correlation in the principal components, the expensive computation needed, and the lack of theoretical results such as consistency in high-dimensional situations. In this paper, we propose a new sparse principal component analysis method by introducing a new norm to replace the usual norm in traditional eigenvalue problems, and propose an efficient iterative algorithm to solve the optimization problems. With this method, we can efficiently obtain uncorrelated principal components or orthogonal loadings, and achieve the goal of explaining a high percentage of variations with sparse linear combinations. Due to the strict convexity of the new norm, we can prove the convergence of the iterative method and provide the detailed characterization of the limits. We also prove that the obtained principal component is consistent for a single component model in high dimensional situations. As illustration, we apply this method to real gene expression data with competitive results. PMID:23524453

  17. Supramodal representation of emotions.

    PubMed

    Klasen, Martin; Kenworthy, Charles A; Mathiak, Krystyna A; Kircher, Tilo T J; Mathiak, Klaus

    2011-09-21

    Supramodal representation of emotion and its neural substrates have recently attracted attention as a marker of social cognition. However, the question of whether perceptual integration of facial and vocal emotions takes place in primary sensory areas, multimodal cortices, or in affective structures remains unanswered. Using novel computer-generated stimuli, we combined emotional faces and voices in congruent and incongruent ways and assessed functional brain data (fMRI) during an emotional classification task. Both congruent and incongruent audiovisual stimuli evoked larger responses in thalamus and superior temporal regions compared with unimodal conditions. Congruent emotions were characterized by activation in amygdala, insula, ventral posterior cingulate (vPCC), temporo-occipital, and auditory cortices; incongruent emotions activated a frontoparietal network and bilateral caudate nucleus, indicating a greater processing load in working memory and emotion-encoding areas. The vPCC alone exhibited differential reactions to congruency and incongruency for all emotion categories and can thus be considered a central structure for supramodal representation of complex emotional information. Moreover, the left amygdala reflected supramodal representation of happy stimuli. These findings document that emotional information does not merge at the perceptual audiovisual integration level in unimodal or multimodal areas, but in vPCC and amygdala. PMID:21940454

  18. Sparse SPM: Group Sparse-dictionary learning in SPM framework for resting-state functional connectivity MRI analysis.

    PubMed

    Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul

    2016-01-15

    Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independency assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independency assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. PMID:26524138

  19. Robust Semi-Supervised Subspace Clustering via Non-Negative Low-Rank Representation.

    PubMed

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung

    2016-08-01

    Low-rank representation (LRR) has been successfully applied in exploring the subspace structures of data. However, in previous LRR-based semi-supervised subspace clustering methods, the label information is not used to guide the affinity matrix construction so that the affinity matrix cannot deliver strong discriminant information. Moreover, these methods cannot guarantee an overall optimum since the affinity matrix construction and subspace clustering are often independent steps. In this paper, we propose a robust semi-supervised subspace clustering method based on non-negative LRR (NNLRR) to address these problems. By combining the LRR framework and the Gaussian fields and harmonic functions method in a single optimization problem, the supervision information is explicitly incorporated to guide the affinity matrix construction, and the affinity matrix construction and subspace clustering are accomplished in one step to guarantee the overall optimum. The affinity matrix is obtained by seeking a non-negative low-rank matrix that represents each sample as a linear combination of others. We also explicitly impose the sparse constraint on the affinity matrix such that the affinity matrix obtained by NNLRR is non-negative, low-rank, and sparse. We introduce an efficient linearized alternating direction method with adaptive penalty to solve the corresponding optimization problem. Extensive experimental results demonstrate that NNLRR is effective in semi-supervised subspace clustering and more robust to different types of noise than other state-of-the-art methods. PMID:26259210

  20. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, thus the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far away from the peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d} producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2x2 blocks that runs in linear time in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks such as diagonal blocks and cross blocks and present complexity analysis and approximation algorithms.

  1. A Sparse Neural Code for Some Speech Sounds but Not for Others

    PubMed Central

    Scharinger, Mathias; Bendixen, Alexandra; Trujillo-Barreto, Nelson J.; Obleser, Jonas

    2012-01-01

    The precise neural mechanisms underlying speech sound representations are still a matter of debate. Proponents of ‘sparse representations’ assume that on the level of speech sounds, only contrastive or otherwise not predictable information is stored in long-term memory. Here, in a passive oddball paradigm, we challenge the neural foundations of such a ‘sparse’ representation; we use words that differ only in their penultimate consonant (“coronal” [t] vs. “dorsal” [k] place of articulation) and for example distinguish between the German nouns Latz ([lats]; bib) and Lachs ([laks]; salmon). Changes from standard [t] to deviant [k] and vice versa elicited a discernible Mismatch Negativity (MMN) response. Crucially, however, the MMN for the deviant [lats] was stronger than the MMN for the deviant [laks]. Source localization showed this difference to be due to enhanced brain activity in right superior temporal cortex. These findings reflect a difference in phonological ‘sparsity’: Coronal [t] segments, but not dorsal [k] segments, are based on more sparse representations and elicit less specific neural predictions; sensory deviations from this prediction are more readily ‘tolerated’ and accordingly trigger weaker MMNs. The results foster the neurocomputational reality of ‘representationally sparse’ models of speech perception that are compatible with more general predictive mechanisms in auditory perception. PMID:22815876

  2. Design and implementation of sparse aperture imaging systems

    NASA Astrophysics Data System (ADS)

    Chung, Soon-Jo; Miller, David W.; de Weck, Olivier L.

    2002-12-01

    In order to better understand the technological difficulties involved in designing and building a sparse aperture array, the challenge of building a white light Golay-3 telescope was undertaken. The MIT Adaptive Reconnaissance Golay-3 Optical Satellite (ARGOS) project exploits wide-angle Fizeau interferometer technology with an emphasis on modularity in the optics and spacecraft subsystems. Unique design procedures encompassing the nature of coherent wavefront sensing, control and combining as well as various system engineering aspects to achieve cost effectiveness, are developed. To demonstrate a complete spacecraft in a 1-g environment, the ARGOS system is mounted on a frictionless air-bearing, and has the ability to track fast orbiting satellites like the ISS or the planets. Wavefront sensing techniques are explored to mitigate initial misalignment and to feed back real-time aberrations into the optical control loop. This paper presents the results and the lessons learned from the conceive, design and implementation phases of ARGOS. A preliminary assessment shows that the beam combining problem is the most challenging aspect of sparse optical arrays. The need for optical control is paramount due to tight beam combining tolerances. The wavefront sensing/control requirements appear to be a major technology and cost driver.

  3. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.

  4. Semi-implicit Integration Factor Methods on Sparse Grids for High-Dimensional Systems

    PubMed Central

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-01-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method. PMID:25897178

  5. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that convergence is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  6. The hierarchical sparse selection model of visual crowding

    PubMed Central

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable – destroyed due to over-integration in early stage visual processing – recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the “gist” of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding—the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed. PMID:25309360

  7. Sparsey™: event recognition via deep hierarchical sparse distributed codes

    PubMed Central

    Rinkus, Gerard J.

    2014-01-01

    The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal

  8. Native ultrametricity of sparse random ensembles

    NASA Astrophysics Data System (ADS)

    Avetisov, V.; Krapivsky, P. L.; Nechaev, S.

    2016-01-01

    We investigate the eigenvalue density in ensembles of large sparse Bernoulli random matrices. Analyzing in detail the spectral density of ensembles of linear subgraphs, we discuss its ultrametric nature and show that near the spectrum boundary, the tails of the spectral density exhibit a Lifshitz singularity typical for Anderson localization. We pay attention to an intriguing connection of the spectral density to the Dedekind η-function. We conjecture that ultrametricity emerges in rare-event statistics and is inherent in generic complex sparse systems.

  9. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  10. VIM-based dynamic sparse grid approach to partial differential equations.

    PubMed

    Mei, Shu-Li

    2014-01-01

    Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is based on a linear combination of the basis functions and is independent of them. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and external grid points is proposed; this differs from the traditional interval wavelet collocation method in that here the choice of both the inner and external grid points is dynamic. The numerical experiments show that our method is better than the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions. PMID:24723805

  11. VIM-Based Dynamic Sparse Grid Approach to Partial Differential Equations

    PubMed Central

    Mei, Shu-Li

    2014-01-01

    Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is based on a linear combination of the basis functions and is independent of them. Second, by means of the precise integration method (PIM), the VIM is developed to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and external grid points is proposed; this differs from the traditional interval wavelet collocation method in that here the choice of both the inner and external grid points is dynamic. The numerical experiments show that our method is better than the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions. PMID:24723805

  12. Sparse LSSVM in Primal Using Cholesky Factorization for Large-Scale Problems.

    PubMed

    Zhou, Shuisheng

    2016-04-01

    For support vector machine (SVM) learning, the least squares SVM (LSSVM) derived in the dual space (D-LSSVM) is a widely used model, because it has an explicit solution. One obvious limitation of the model is that the solution lacks sparseness, which limits its efficiency for training large-scale problems. In this paper, we derive an equivalent LSSVM model in the primal space (P-LSSVM) by the representer theorem and prove that P-LSSVM can be solved exactly at some sparse solutions for problems with low-rank kernel matrices. Two algorithms are proposed for finding the sparse (approximate) solution of P-LSSVM by Cholesky factorization. One is based on decomposing the kernel matrix K as PP^T, where the best low-rank factor P is obtained approximately by pivoted Cholesky factorization. The other is based on solving P-LSSVM by approximating the Cholesky factorization of the Hessian matrix with a rank-one update scheme. For linear learning problems, theoretical analysis and experimental results support that P-LSSVM can give the sparsest solutions among all SVM learners. Experimental results on some large-scale nonlinear training problems show that our algorithms, based on P-LSSVM, can converge to acceptable test accuracies at very sparse solutions with a sparsity level <1%, and even as little as 0.01%. Hence, our algorithms are a better choice for large-scale training problems. PMID:25966482
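
    As a rough illustration of the first algorithm's main ingredient, the sketch below approximates an RBF kernel matrix as PP^T with a simple diagonally pivoted Cholesky routine and then fits a ridge-style least-squares model on the low-rank features P. It is only a sketch of the decomposition step on synthetic data, not the authors' P-LSSVM training procedure; the kernel, rank, and regularization values are assumptions.

        import numpy as np

        def rbf_kernel(X, Y, gamma=0.5):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def pivoted_cholesky(K, rank, tol=1e-10):
            """Partial Cholesky with diagonal pivoting: K is approximated by P @ P.T."""
            n = K.shape[0]
            d = np.diag(K).astype(float).copy()
            P = np.zeros((n, rank))
            pivots = []
            for j in range(rank):
                i = int(np.argmax(d))
                if d[i] <= tol:
                    break
                pivots.append(i)
                P[:, j] = (K[:, i] - P[:, :j] @ P[i, :j]) / np.sqrt(d[i])
                d -= P[:, j] ** 2
            return P[:, :len(pivots)], pivots

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))    # toy binary labels in {-1, +1}

        K = rbf_kernel(X, X)
        P, pivots = pivoted_cholesky(K, rank=20)             # low-rank kernel features

        lam = 1e-2                                           # assumed regularization value
        beta = np.linalg.solve(P.T @ P + lam * np.eye(P.shape[1]), P.T @ y)
        pred = np.sign(P @ beta)
        print("training accuracy:", (pred == y).mean())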

  13. Second SIAM conference on sparse matrices: Abstracts. Final technical report

    SciTech Connect

    1996-12-31

    This report contains abstracts on the following topics: invited and long presentations (IP1 & LP1); sparse matrix reordering & graph theory I; sparse matrix tools & environments I; eigenvalue computations I; iterative methods & acceleration techniques I; applications I; parallel algorithms I; sparse matrix reordering & graph theory II; sparse matrix tools & environments II; least squares & optimization I; iterative methods & acceleration techniques II; applications II; eigenvalue computations II; least squares & optimization II; parallel algorithms II; sparse direct methods; iterative methods & acceleration techniques III; eigenvalue computations III; and sparse matrix reordering & graph theory III.

  14. Sparse reconstruction of visual appearance for computer graphics and vision

    NASA Astrophysics Data System (ADS)

    Ramamoorthi, Ravi

    2011-09-01

    A broad range of problems in computer graphics rendering, appearance acquisition for graphics and vision, and imaging, involve sampling, reconstruction, and integration of high-dimensional (4D-8D) signals. For example, precomputation-based real-time rendering of glossy materials and intricate lighting effects like caustics, can involve (pre)-computing the response of the scene to different light and viewing directions, which is often a 6D dataset. Similarly, image-based appearance acquisition of facial details, car paint, or glazed wood, requires us to take images from different light and view directions. Even offline rendering of visual effects like motion blur from a fast-moving car, or depth of field, involves high-dimensional sampling across time and lens aperture. The same problems are also common in computational imaging applications such as light field cameras. In the past few years, computer graphics and computer vision researchers have made significant progress in subsequent analysis and compact factored or multiresolution representations for some of these problems. However, the initial full dataset must almost always still be acquired or computed by brute force. This is often prohibitively expensive, taking hours to days of computation and acquisition time, as well as being a challenge for memory usage and storage. For example, on the order of 10,000 megapixel images are needed for a 1 degree sampling of lights and views for high-frequency materials. We argue that dramatically sparser sampling and reconstruction of these signals is possible, before the full dataset is acquired or simulated. Our key idea is to exploit the structure of the data that often lies in lower-frequency, sparse, or low-dimensional spaces. Our framework will apply to a diverse set of problems such as sparse reconstruction of light transport matrices for relighting, sheared sampling and denoising for offline shadow rendering, time-coherent compressive sampling for appearance

  15. A Comparative Study of Sparse Associative Memories

    NASA Astrophysics Data System (ADS)

    Gripon, Vincent; Heusel, Judith; Löwe, Matthias; Vermet, Franck

    2016-05-01

    We study various models of associative memories with sparse information, i.e. a pattern to be stored is a random string of 0s and 1s with only about log N 1s. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.

  16. Self-Control in Sparsely Coded Networks

    NASA Astrophysics Data System (ADS)

    Dominguez, D. R. C.; Bollé, D.

    1998-03-01

    A complete self-control mechanism is proposed in the dynamics of neural networks through the introduction of a time-dependent threshold, determined as a function of both the noise and the pattern activity in the network. Especially for sparsely coded models, this mechanism is shown to considerably improve the storage capacity, the basins of attraction, and the mutual information content.

  17. Sparse matrix orderings for factorized inverse preconditioners

    SciTech Connect

    Benzi, M.; Tuama, M.

    1998-09-01

    The effect of reorderings on the performance of factorized sparse approximate inverse preconditioners is considered. It is shown that certain reorderings can be very beneficial both in the preconditioner construction phase and in terms of the rate of convergence of the preconditioned iteration.

  18. A Comparative Study of Sparse Associative Memories

    NASA Astrophysics Data System (ADS)

    Gripon, Vincent; Heusel, Judith; Löwe, Matthias; Vermet, Franck

    2016-07-01

    We study various models of associative memories with sparse information, i.e. a pattern to be stored is a random string of 0s and 1s with only about log N 1s. We compare different synaptic weights, architectures and retrieval mechanisms to shed light on the influence of the various parameters on the storage capacity.

  19. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominant modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  20. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominant modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  1. Interpolating Sparse Scattered Oceanographic Data Using Flow Information

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Gebbie, G.; Spero, H. J.; Kreylos, O.; Kellogg, L. H.; Hamann, B.

    2012-12-01

    We present a novel approach for interpolating sparse scattered data in the presence of a flow field. In order to visualize a scalar field representing a physical quantity such as salinity, temperature, or nutrient concentration in the ocean, the individual measured values of the quantity of interest typically are first converted into a representation of the scalar field on a regular grid. If the measured values are located at a number of scattered sites, then the reconstruction process will be scattered data interpolation. Scattered data interpolation itself is a well-known problem space for which many methods exist, including methods involving radial basis functions, statistical approaches such as optimal interpolation, and grid-based methods such as Laplace interpolation. However, the quality of the reconstruction result obtained using such methods depends upon having a sufficient density of sample points as input. For cases involving sparse scattered data - such as is the case when using measurements from benthic foraminifera in deep sea sedimentary cores as a proxy for the physical properties of the past ocean - the standard methods may not produce acceptable results. However, if the scalar field is associated with a known (or partially known) flow field, then the flow field information can be used to enhance the interpolation method in order to compensate for the sparsity of the available scalar field samples. Our hypothesis is that scalar field values should be more highly correlated along streamlines of the flow field than across such streamlines. We have investigated and tested such augmented, flow-field-aware scattered data interpolation methods. In particular, we have modified standard scattered data interpolation methods to use non-Euclidean distance pseudometrics, which we have constructed by employing various relative weightings of "distance-along-streamlines" versus "distance-from-streamlines." We have tested the resulting methods by applying them to
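
    A minimal sketch of the general idea described here (not the authors' implementation): inverse-distance weighting in which the separation between points is split into components along and across an assumed uniform flow direction, with the across-flow component penalized more heavily so that values correlate more strongly along streamlines. The flow direction, weights, and sample data below are placeholders.

        import numpy as np

        def streamline_aware_idw(query_pts, sample_pts, sample_vals, flow_dir,
                                 w_along=1.0, w_across=4.0, power=2.0, eps=1e-6):
            """Inverse-distance weighting with an anisotropic, flow-aware pseudometric.

            Distances along the (unit) flow direction are weighted by w_along and
            distances perpendicular to it by w_across, so values correlate more
            strongly along streamlines than across them.
            """
            u = np.asarray(flow_dir, float)
            u = u / np.linalg.norm(u)
            diffs = query_pts[:, None, :] - sample_pts[None, :, :]        # (queries, samples, dim)
            along = diffs @ u                                             # component along the flow
            across = np.linalg.norm(diffs - along[..., None] * u, axis=-1)
            dist = np.sqrt((w_along * along) ** 2 + (w_across * across) ** 2) + eps
            weights = dist ** (-power)
            return (weights * sample_vals[None, :]).sum(axis=1) / weights.sum(axis=1)

        # Toy example: sparse "salinity" samples advected along the x-axis.
        samples = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 1.0]])
        values = np.array([1.0, 1.0, 5.0])
        queries = np.array([[2.0, 0.0], [0.0, 0.5]])
        print(streamline_aware_idw(queries, samples, values, flow_dir=[1.0, 0.0]))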

  2. Adaptive Sparse Signal Processing for Discrimination of Satellite-based Radiofrequency (RF) Recordings of Lightning Events

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Smith, D. A.; Heavner, M.; Hamlin, T.

    2014-12-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite, launched in 1997, provided a rich RF lightning database. Application of modern pattern recognition techniques to this dataset may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We extend sparse signal processing techniques to radiofrequency (RF) transient signals, and specifically focus on improved signature extraction using sparse representations in data-adaptive dictionaries. We present various processing options and classification results for on-board discharges, and discuss robustness and potential for capability development.
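
    The FORTE data and preprocessing are not described in enough detail to reproduce, so the sketch below only illustrates the generic pipeline named in the abstract, assuming scikit-learn is available: learn a data-adaptive dictionary from windowed transient records, encode each record sparsely with orthogonal matching pursuit, and feed the sparse codes to a simple classifier. All signals, sizes, and parameters are synthetic assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.linear_model import LogisticRegression

        # Synthetic stand-ins for windowed RF transient records.
        rng = np.random.default_rng(1)
        n_records, n_samples = 400, 128
        t = np.linspace(0, 1, n_samples)
        class_a = np.sin(40 * t) * np.exp(-5 * t)            # damped tone
        class_b = np.sign(np.sin(15 * t)) * np.exp(-3 * t)   # square-ish burst
        X = np.vstack([c + 0.3 * rng.normal(size=(n_records // 2, n_samples))
                       for c in (class_a, class_b)])
        y = np.repeat([0, 1], n_records // 2)

        # Learn a data-adaptive dictionary and encode each record sparsely with OMP.
        dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm='omp',
                                           transform_n_nonzero_coefs=5, random_state=0)
        codes = dico.fit(X).transform(X)                     # sparse coefficients as features

        clf = LogisticRegression(max_iter=1000).fit(codes, y)
        print("training accuracy:", clf.score(codes, y))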

  3. Sparse models for correlative and integrative analysis of imaging and genetic data

    PubMed Central

    Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D.

    2014-01-01

    The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay as well as their relationship with human behavior by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets present a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse representation based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples of how these approaches are used for the detection of risk genes and the classification of complex diseases such as schizophrenia. Finally, we discuss future directions on the integration of multiple imaging and genomic datasets, including their interactions such as epistasis. PMID:25218561

  4. Hyperspectral Image Kernel Sparse Subspace Clustering with Spatial Max Pooling Operation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongyan; Zhai, Han; Liao, Wenzhi; Cao, Liqin; Zhang, Liangpei; Pižurica, Aleksandra

    2016-06-01

    In this paper, we present a kernel sparse subspace clustering with spatial max pooling operation (KSSC-SMP) algorithm for hyperspectral remote sensing imagery. Firstly, the feature points are mapped from the original space into a higher dimensional space with a kernel strategy. In particular, the sparse subspace clustering (SSC) model is extended to nonlinear manifolds, which can better explore the complex nonlinear structure of hyperspectral images (HSIs) and obtain a much more accurate representation coefficient matrix. Secondly, through the spatial max pooling operation, the spatial contextual information is integrated to obtain a smoother clustering result. Through experiments, it is verified that the KSSC-SMP algorithm is a competitive clustering method for HSIs and outperforms the state-of-the-art clustering methods.

  5. Sparse models for correlative and integrative analysis of imaging and genetic data.

    PubMed

    Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D; Wang, Yu-Ping

    2014-11-30

    The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay as well as their relationship with human behavior by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets present a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse representation based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples of how these approaches are used for the detection of risk genes and the classification of complex diseases such as schizophrenia. Finally, we discuss future directions on the integration of multiple imaging and genomic datasets, including their interactions such as epistasis. PMID:25218561

  6. Sparse EEG Source Localization Using Bernoulli Laplacian Priors.

    PubMed

    Costa, Facundo; Batatia, Hadj; Chaari, Lotfi; Tourneret, Jean-Yves

    2015-12-01

    Source localization in electroencephalography has received an increasing amount of interest in the last decade. Solving the underlying ill-posed inverse problem usually requires choosing an appropriate regularization. The usual l2 norm has been considered and provides solutions with low computational complexity. However, in several situations, realistic brain activity is believed to be focused in a few focal areas. In these cases, the l2 norm is known to overestimate the activated spatial areas. One solution to this problem is to promote sparse solutions, for instance based on the l1 norm, which are easy to handle with optimization techniques. In this paper, we consider the use of an l0 + l1 norm to enforce sparse source activity (by ensuring the solution has few nonzero elements) while regularizing the nonzero amplitudes of the solution. More precisely, the l0 pseudonorm handles the position of the nonzero elements while the l1 norm constrains the values of their amplitudes. We use a Bernoulli-Laplace prior to introduce this combined l0 + l1 norm in a Bayesian framework. The proposed Bayesian model is shown to favor sparsity while jointly estimating the model hyperparameters using a Markov chain Monte Carlo sampling technique. We apply the model to both simulated and real EEG data, showing that the proposed method provides better results than the l2 and l1 norm regularizations in the presence of pointwise sources. A comparison with a recent method based on multiple sparse priors is also conducted. PMID:26126270
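
    The Bernoulli-Laplace MCMC sampler proposed here is beyond a short sketch; as a simpler point of reference, the snippet below solves the same kind of ill-posed linear model m = Gs + noise with a plain l1 (Lasso) penalty, which is one of the baselines the authors compare against. The leadfield G, the simulated sources, and the regularization value are synthetic assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n_sensors, n_sources = 32, 500

        G = rng.normal(size=(n_sensors, n_sources))          # synthetic leadfield matrix
        s_true = np.zeros(n_sources)
        s_true[[40, 210, 333]] = [2.0, -1.5, 1.0]            # a few focal active sources
        m = G @ s_true + 0.05 * rng.normal(size=n_sensors)   # simulated sensor measurements

        # l1-regularized inverse solution: promotes a small number of active sources.
        lasso = Lasso(alpha=0.01, max_iter=50000, fit_intercept=False)
        s_hat = lasso.fit(G, m).coef_

        print("true active sources:     ", np.flatnonzero(s_true))
        print("recovered active sources:", np.flatnonzero(np.abs(s_hat) > 0.1))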

  7. Mono- and multistatic polarimetric sparse aperture 3D SAR imaging

    NASA Astrophysics Data System (ADS)

    DeGraaf, Stuart; Twigg, Charles; Phillips, Louis

    2008-04-01

    SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can considerably reduce the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture, provides effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.

  8. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

    We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based method and a moving-least-squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
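
    A bare-bones sketch of the second ingredient, a moving least squares reconstruction with a linear local basis and a Gaussian weight; the machine-learned weighting parameters described in the abstract are omitted, and the scattered samples below are synthetic.

        import numpy as np

        def mls_reconstruct(query_pts, sample_pts, sample_vals, h=1.0):
            """Moving least squares with a linear local basis and Gaussian weights."""
            out = np.empty(len(query_pts))
            for k, q in enumerate(query_pts):
                d2 = ((sample_pts - q) ** 2).sum(axis=1)
                w = np.exp(-d2 / (2 * h ** 2))                      # Gaussian weight per sample
                B = np.hstack([np.ones((len(sample_pts), 1)),       # local basis [1, x - q]
                               sample_pts - q])
                WB = w[:, None] * B
                # Weighted normal equations (B^T W B) c = B^T W f for the local fit.
                coeff, *_ = np.linalg.lstsq(WB.T @ B, WB.T @ sample_vals, rcond=None)
                out[k] = coeff[0]                                    # basis evaluated at q is [1, 0, ...]
            return out

        rng = np.random.default_rng(3)
        pts = rng.uniform(-2, 2, size=(60, 2))                       # sparse scattered samples
        vals = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=60)
        grid = np.array([[0.0, 0.0], [1.0, 1.0]])
        print(mls_reconstruct(grid, pts, vals, h=0.8))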

  9. Sparse and Compositionally Robust Inference of Microbial Ecological Networks

    PubMed Central

    Kurtz, Zachary D.; Müller, Christian L.; Miraldi, Emily R.; Littman, Dan R.; Blaser, Martin J.; Bonneau, Richard A.

    2015-01-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) are required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC
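
    SPIEC-EASI's full pipeline (including its model selection) lives in the authors' software; the Python sketch below, assuming scikit-learn, only illustrates the two core steps named in the abstract: a centered log-ratio transform of the compositional counts followed by sparse inverse-covariance (graphical lasso) estimation. The counts and the penalty value are synthetic assumptions.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        def clr(counts, pseudocount=1.0):
            """Centered log-ratio transform of compositional count data (samples x OTUs)."""
            x = np.log(counts + pseudocount)
            return x - x.mean(axis=1, keepdims=True)

        rng = np.random.default_rng(4)
        n_samples, n_otus = 120, 15
        counts = rng.poisson(lam=rng.uniform(5, 50, size=n_otus), size=(n_samples, n_otus))

        Z = clr(counts)
        model = GraphicalLasso(alpha=0.3).fit(Z)          # sparse inverse covariance estimate
        precision = model.precision_

        # Nonzero off-diagonal entries of the precision matrix define the association network.
        edges = [(i, j) for i in range(n_otus) for j in range(i + 1, n_otus)
                 if abs(precision[i, j]) > 1e-6]
        print(f"{len(edges)} edges inferred among {n_otus} OTUs")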

  10. Sparse and compositionally robust inference of microbial ecological networks.

    PubMed

    Kurtz, Zachary D; Müller, Christian L; Miraldi, Emily R; Littman, Dan R; Blaser, Martin J; Bonneau, Richard A

    2015-05-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) are required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC

  11. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching.

    PubMed

    Guo, Yanrong; Gao, Yaozong; Shen, Dinggang

    2016-04-01

    Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on a dataset that contains 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance compared with other state-of-the-art segmentation methods. PMID:26685226

  12. Dense and Sparse Matrix Operations on the Cell Processor

    SciTech Connect

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Husbands,Parry; Yelick, Katherine

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.

  13. Segmentation of High Angular Resolution Diffusion MRI using Sparse Riemannian Manifold Clustering

    PubMed Central

    Wright, Margaret J.; Thompson, Paul M.; Vidal, René

    2015-01-01

    We address the problem of segmenting high angular resolution diffusion imaging (HARDI) data into multiple regions (or fiber tracts) with distinct diffusion properties. We use the orientation distribution function (ODF) to represent HARDI data and cast the problem as a clustering problem in the space of ODFs. Our approach integrates tools from sparse representation theory and Riemannian geometry into a graph theoretic segmentation framework. By exploiting the Riemannian properties of the space of ODFs, we learn a sparse representation for each ODF and infer the segmentation by applying spectral clustering to a similarity matrix built from these representations. In cases where regions with similar (resp. distinct) diffusion properties belong to different (resp. same) fiber tracts, we obtain the segmentation by incorporating spatial and user-specified pairwise relationships into the formulation. Experiments on synthetic data evaluate the sensitivity of our method to image noise and the presence of complex fiber configurations, and show its superior performance compared to alternative segmentation methods. Experiments on phantom and real data demonstrate the accuracy of the proposed method in segmenting simulated fibers, as well as white matter fiber tracts of clinical importance in the human brain. PMID:24108748

  14. Learning an enriched representation from unlabeled data for protein-protein interaction extraction

    PubMed Central

    2010-01-01

    Background Extracting protein-protein interactions from biomedical literature is an important task in biomedical text mining. Supervised machine learning methods have been used with great success in this task, but they tend to suffer from data sparseness because they are restricted to obtaining knowledge from a limited amount of labelled data. In this work, we study the use of unlabeled biomedical texts to enhance the performance of supervised learning for this task. We use feature coupling generalization (FCG) – a recently proposed semi-supervised learning strategy – to learn an enriched representation of local contexts in sentences from 47 million unlabeled examples and investigate the performance of the new features on the AIMED corpus. Results The new features generated by FCG achieve a 60.1 F-score and produce significant improvement over supervised baselines. The experimental analysis shows that FCG can make good use of the sparse features which have little effect in supervised learning. The new features perform better in non-linear classifiers than linear ones. We combine the new features with local lexical features, obtaining an F-score of 63.5 on the AIMED corpus, which is comparable with the current state-of-the-art results. We also find that simple Boolean lexical features derived only from local contexts are able to achieve competitive results against most syntactic feature/kernel based methods. Conclusions FCG creates many opportunities for designing new features, since many sparse features ignored by supervised learning can be utilized well. Interestingly, our results also demonstrate that state-of-the-art performance can be achieved without using any syntactic information in this task. PMID:20406505

  15. Clothed particle representation in quantum field theory: mass renormalization

    NASA Astrophysics Data System (ADS)

    Korda, V. Yu.; Shebeko, A. V.

    2007-06-01

    The method of unitary clothing transformations is used to handle the so-called clothed particle representation (CPR) (see [A.V. Shebeko and M.I. Shirokov, Phys. Part. Nucl. 32 (2001) 31; nucl-th/0102037, V.Yu. Korda and A.V. Shebeko, Phys. Rev. D 70 (2004) 085011, V.Yu. Korda, L. Canton and A.V. Shebeko, doi:10.1016/j.aop.2006.07.010, Ann. Phys. (2006) in press; nucl-th/060325] and refs. therein), where the total field Hamiltonian H and the three boost operators in the instant form of relativistic dynamics take on the same sparse structure in the Hilbert space of hadronic states. In this approach the mass counterterms are cancelled by commutators of the generators of clothing transformations and the field interaction operator. This allows the pion and nucleon mass shifts to be expressed through the corresponding three-dimensional integrals whose integrands are proved to be dependent on certain covariant combinations of the relevant three-momenta. This property provides the momentum independence of the mass renormalization.

  16. Perception of biological motion from size-invariant body representations

    PubMed Central

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H. E.

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion. PMID:25852505

  17. Kernel weighted joint collaborative representation for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Du, Qian; Li, Wei

    2015-05-01

    The collaborative representation classifier (CRC) has been applied to hyperspectral image classification; it uses all the atoms in a dictionary to represent a testing pixel for label assignment. However, some atoms that are very dissimilar to the testing pixel should not participate in the representation, or their contribution should be very small. The regularized version of CRC imposes a strong penalty to prevent dissimilar atoms from having large representation coefficients. To utilize spatial information, the weighted sum of local spatial neighbors is considered as a joint spatial-spectral feature, which is then used for regularized CRC-based classification. This paper proposes its kernel version to further improve classification accuracy, which can be higher than that of the traditional support vector machine with a composite kernel and of the kernel version of the sparse representation classifier.
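
    A compact sketch of the plain (unweighted, non-joint) kernel collaborative representation step: each test sample is represented over all training atoms in an RBF kernel space with an l2 penalty, and the label is assigned by the smallest class-wise reconstruction residual in the kernel-induced feature space. The dissimilarity weighting and the joint spatial-spectral feature described above are omitted; data and parameters are placeholders.

        import numpy as np

        def rbf(X, Y, gamma=1.0):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kernel_crc_predict(X_train, y_train, X_test, lam=1e-2, gamma=1.0):
            """Kernel collaborative representation with l2 regularization (no adaptive weights)."""
            K = rbf(X_train, X_train, gamma)               # Gram matrix of training atoms
            k_test = rbf(X_train, X_test, gamma)           # kernel between atoms and test pixels
            A = np.linalg.solve(K + lam * np.eye(len(X_train)), k_test)   # representation coefficients
            labels = np.unique(y_train)
            preds = []
            for j in range(X_test.shape[0]):
                alpha = A[:, j]
                residuals = []
                for c in labels:
                    a_c = np.where(y_train == c, alpha, 0.0)
                    # Squared residual ||phi(y) - Phi a_c||^2 expanded via kernel evaluations.
                    r2 = (rbf(X_test[j:j+1], X_test[j:j+1], gamma)[0, 0]
                          - 2 * a_c @ k_test[:, j] + a_c @ K @ a_c)
                    residuals.append(r2)
                preds.append(labels[int(np.argmin(residuals))])
            return np.array(preds)

        rng = np.random.default_rng(5)
        X_train = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(2, 1, (30, 8))])
        y_train = np.repeat([0, 1], 30)
        X_test = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(2, 1, (5, 8))])
        print(kernel_crc_predict(X_train, y_train, X_test))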

  18. Multipath sparse coding for scene classification in very high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Fan, Jiayuan; Tan, Hui Li; Lu, Shijian

    2015-10-01

    With the rapid development of various satellite sensors, automatic and advanced scene classification techniques are urgently needed to process huge amounts of satellite image data. Recently, a few research works have begun to apply sparse coding for feature learning in aerial scene classification. However, these previous works use single-layer sparse coding in their systems, and their performance depends heavily on multiple low-level features, such as the scale-invariant feature transform (SIFT) and saliency. Motivated by the importance of feature learning through multiple layers, we propose a new unsupervised feature learning approach for scene classification on very high resolution satellite imagery. The proposed unsupervised feature learning utilizes a multipath sparse coding architecture in order to capture multiple aspects of discriminative structure within complex satellite scene images. In addition, dense low-level features are extracted from the raw satellite data by using image patches of varying size at different layers, so the approach is not limited to a particularly designed feature descriptor, in contrast to other related works. The proposed technique has been evaluated on two challenging high-resolution datasets, including the UC Merced dataset containing 21 different aerial scene categories with a 1 foot resolution and the Singapore dataset containing 5 land-use categories with a 0.5 m spatial resolution. Experimental results show that it outperforms the state-of-the-art that uses single-layer sparse coding. The major contributions of this proposed technique include (1) a new unsupervised feature learning approach to generate feature representations for very high-resolution satellite imagery, (2) the first multipath sparse coding that is used for scene classification in very high-resolution satellite imagery, (3) a simple low-level feature descriptor instead of many particularly designed low-level descriptor

  19. A survey of visual preprocessing and shape representation techniques

    NASA Technical Reports Server (NTRS)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  20. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.

  1. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
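
    In present-day SciPy, a Lanczos-type projection method of the kind discussed here, which needs the matrix only through matrix-vector products (or a shifted solve), is exposed through scipy.sparse.linalg.eigsh; the sketch below computes a few of the smallest eigenpairs of a sparse symmetric matrix, with a 1D discrete Laplacian chosen purely for illustration.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import eigsh

        n = 2000
        # Sparse symmetric matrix: the 1D discrete Laplacian (tridiagonal).
        A = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csc')

        # Lanczos-type projection in shift-invert mode around sigma = 0.
        vals, vecs = eigsh(A, k=4, sigma=0)   # the four eigenvalues closest to zero (the smallest here)
        print("computed eigenvalues:", vals)

        # Analytic eigenvalues of the 1D Laplacian for comparison: 2 - 2*cos(j*pi/(n+1)).
        j = np.arange(1, 5)
        print("analytic eigenvalues:", 2 - 2 * np.cos(j * np.pi / (n + 1)))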

  2. Sparse brain network using penalized linear regression

    NASA Astrophysics Data System (ADS)

    Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.

    2011-03-01

    Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n, large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of autism spectrum disorder (ASD) children and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
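
    A small sketch of the node-wise formulation: each ROI's signal is regressed on all the others with an l1 penalty, and the selected coefficients define a sparse connectivity graph. The symmetrization rule and the scaling that turns regression coefficients into partial correlations are simplified here, and the data are synthetic.

        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_partial_corr_network(X, alpha=0.05):
            """Estimate a sparse connectivity matrix by l1-penalized node-wise regression."""
            n_subjects, n_rois = X.shape
            coefs = np.zeros((n_rois, n_rois))
            for i in range(n_rois):
                others = np.delete(np.arange(n_rois), i)
                model = Lasso(alpha=alpha, max_iter=20000).fit(X[:, others], X[:, i])
                coefs[i, others] = model.coef_
            # Keep an edge only if it is selected in both node-wise regressions (AND rule).
            return (coefs != 0) & (coefs.T != 0)

        rng = np.random.default_rng(6)
        n_subjects, n_rois = 80, 12
        X = rng.normal(size=(n_subjects, n_rois))
        X[:, 1] += 0.8 * X[:, 0]                      # plant one strong connection
        X = (X - X.mean(0)) / X.std(0)
        print("edges:", np.argwhere(np.triu(sparse_partial_corr_network(X), k=1)))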

  3. Representation in Memory.

    ERIC Educational Resources Information Center

    Rumelhart, David E.; Norman, Donald A.

    This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…

  4. The efficient parallel iterative solution of large sparse linear systems

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1992-06-01

    The development of efficient, general-purpose software for the iterative solution of sparse linear systems on a parallel MIMD computer requires an interesting combination of expertise. Parallel graph heuristics, convergence analysis, and basic linear algebra implementation issues must all be considered. In this paper, we discuss how we have incorporated recent results in these areas into a general-purpose iterative solver. First, we consider two recently developed parallel graph coloring heuristics. We show how the method proposed by Luby, based on determining maximal independent sets, can be modified to run in an asynchronous manner and give an expected running time bound for this modified heuristic. In addition, a number of graph reduction heuristics are described that are used in our implementation to improve the individual processor performance. The effect of these various graph reductions on the solution of sparse triangular systems is categorized. Finally, we discuss the performance of this solver from the perspective of two large-scale applications: a piezoelectric crystal finite-element modeling problem, and a nonlinear optimization problem to determine the minimum energy configuration of a three-dimensional, layered superconductor model.
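
    As a toy illustration of the graph-coloring ingredient (not the asynchronous Luby-style heuristic analyzed in the paper), the sketch below colors the adjacency graph of a sparse symmetric matrix with NetworkX's greedy coloring; rows that share a color have no mutual dependencies and can be processed concurrently. The test matrix and coloring strategy are placeholders.

        import networkx as nx
        from scipy import sparse

        # Sparse symmetric test matrix: the 2D 5-point Laplacian on a small grid.
        n = 10
        L1 = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))
        A = sparse.kronsum(L1, L1, format='csr')

        # Build the adjacency graph: one vertex per row, edges for off-diagonal nonzeros.
        G = nx.Graph()
        G.add_nodes_from(range(A.shape[0]))
        rows, cols = A.nonzero()
        G.add_edges_from((int(i), int(j)) for i, j in zip(rows, cols) if i != j)

        coloring = nx.coloring.greedy_color(G, strategy='largest_first')
        n_colors = max(coloring.values()) + 1
        print(f"{A.shape[0]} rows colored with {n_colors} colors")
        # Rows assigned the same color are mutually independent and can be updated in parallel.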

  5. Dictionary learning and sparse recovery for electrodermal activity analysis

    NASA Astrophysics Data System (ADS)

    Kelsey, Malia; Dallal, Ahmed; Eldeeb, Safaa; Akcakaya, Murat; Kleckner, Ian; Gerard, Christophe; Quigley, Karen S.; Goodwin, Matthew S.

    2016-05-01

    Measures of electrodermal activity (EDA) have advanced research in a wide variety of areas including psychophysiology; however, the majority of this research is typically undertaken in laboratory settings. To extend the ecological validity of laboratory assessments, researchers are taking advantage of advances in wireless biosensors to gather EDA data in ambulatory settings, such as in school classrooms. While measuring EDA in naturalistic contexts may enhance ecological validity, it also introduces analytical challenges that current techniques cannot address. One limitation is the limited efficiency and automation of existing analysis techniques. Many groups either analyze their data by hand, reviewing each individual record, or use computationally inefficient software that limits timely analysis of large data sets. To address this limitation, we developed a method to accurately and automatically identify skin conductance responses (SCRs) using curve fitting methods. Curve fitting has been shown to improve the accuracy of SCR amplitude and location estimations, but has not yet been used to reduce computational complexity. In this paper, sparse recovery and dictionary learning methods are combined to improve computational efficiency of analysis and decrease run time, while maintaining a high degree of accuracy in detecting SCRs. Here, a dictionary is first created using curve fitting methods for a standard SCR shape. Then, orthogonal matching pursuit (OMP) is used to detect SCRs within a dataset using the dictionary to complete sparse recovery. Evaluation of our method, including a comparison with existing software in terms of speed and accuracy, showed an accuracy of 80% and a reduced run time.
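
    A rough sketch of the dictionary-plus-OMP idea is shown below. The SCR template parameters, sampling rate, and toy signal are assumptions for illustration, not the authors' fitted curve models.

```python
# Sketch of dictionary + OMP detection of skin-conductance responses: build a
# dictionary of time-shifted canonical SCR shapes and let orthogonal matching
# pursuit pick a sparse set of them.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

fs, T = 4, 120                          # 4 Hz sampling, 120 s record (toy values)
t = np.arange(0, T, 1 / fs)

def scr_template(t, tau1=0.75, tau2=2.0):
    """Canonical biexponential SCR shape (rise tau1, decay tau2), peak-normalized."""
    shape = np.exp(-t / tau2) - np.exp(-t / tau1)
    return shape / shape.max()

# Dictionary columns = the template shifted to every possible onset sample.
template = scr_template(np.arange(0, 20, 1 / fs))
n, k = len(t), len(template)
D = np.zeros((n, n))
for onset in range(n):
    end = min(onset + k, n)
    D[onset:end, onset] = template[: end - onset]

# Toy EDA signal: two SCRs plus noise.
rng = np.random.default_rng(1)
signal = 0.8 * D[:, 50] + 0.5 * D[:, 230] + 0.02 * rng.standard_normal(n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
omp.fit(D, signal)
onsets = np.nonzero(omp.coef_)[0]
print("detected SCR onsets (s):", onsets / fs)
```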

  6. Object class recognition based on compressive sensing with sparse features inspired by hierarchical model in visual cortex

    NASA Astrophysics Data System (ADS)

    Lu, Pei; Xu, Zhiyong; Yu, Huapeng; Chang, Yongxin; Fu, Chengyu; Shao, Jianxin

    2012-11-01

    According to models of object recognition in cortex, the brain uses a hierarchical approach in which simple, low-level features having high position and scale specificity are pooled and combined into more complex, higher-level features having greater location invariance. At higher levels, spatial structure becomes implicitly encoded into the features themselves, which may overlap, while explicit spatial information is coded more coarsely. In this paper, the importance of sparsity and localized patch features in a hierarchical model inspired by visual cortex is investigated. As in the model of Serre, Wolf, and Poggio, we first apply Gabor filters at all positions and scales; feature complexity and position/scale invariance are then built up by alternating template matching and max pooling operations. To improve generalization performance, sparsity is introduced and the data dimension is reduced by means of compressive sensing theory and a sparse representation algorithm. Similarly, within computational neuroscience, imposing sparsity on the number of feature inputs and on feature selection is critical for learning biologically plausible models from the statistics of natural images. A redundant dictionary of patch-based features that distinguishes the object class from other categories is then designed, and object recognition is implemented through iterative optimization. The method is tested on the UIUC car database. The success of this approach provides support for this account of object class recognition in visual cortex.

  7. Network Structure within the Cerebellar Input Layer Enables Lossless Sparse Encoding

    PubMed Central

    Billings, Guy; Piasini, Eugenio; Lőrincz, Andrea; Nusser, Zoltan; Silver, R. Angus

    2014-01-01

    The synaptic connectivity within neuronal networks is thought to determine the information processing they perform, yet network structure-function relationships remain poorly understood. By combining quantitative anatomy of the cerebellar input layer and information theoretic analysis of network models, we investigated how synaptic connectivity affects information transmission and processing. Simplified binary models revealed that the synaptic connectivity within feedforward networks determines the trade-off between information transmission and sparse encoding. Networks with few synaptic connections per neuron and network-activity-dependent threshold were optimal for lossless sparse encoding over the widest range of input activities. Biologically detailed spiking network models with experimentally constrained synaptic conductances and inhibition confirmed our analytical predictions. Our results establish that the synaptic connectivity within the cerebellar input layer enables efficient lossless sparse encoding. Moreover, they provide a functional explanation for why granule cells have approximately four dendrites, a feature that has been evolutionarily conserved since the appearance of fish. PMID:25123311

  8. Multi dose computed tomography image fusion based on hybrid sparse methodology.

    PubMed

    Venkataraman, Anuyogam; Alirezaie, Javad; Babyn, Paul; Ahmadian, Alireza

    2014-01-01

    With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher quality images with lower exposure to radiation has become a highly challenging task in image processing. In this paper, a novel sparse fusion algorithm is proposed to address the problem of low Signal to Noise Ratio (SNR) in low dose CT images. An initial fused image is obtained by combining low dose and medium dose images in the sparse domain, utilizing the Dual Tree Complex Wavelet Transform (DTCWT) dictionary which is trained on a high dose image. The strongly focused image is then obtained by determining the pixels of the source images that have high similarity with the pixels of the initial fused image. The final denoised image is obtained by fusing the strongly focused image and the decomposed sparse vectors of the source images, thereby preserving the edges and other critical information needed for diagnosis. This paper demonstrates the effectiveness of the proposed algorithm both quantitatively and qualitatively. PMID:25570844

  9. Notes on implementation of sparsely distributed memory

    NASA Technical Reports Server (NTRS)

    Keeler, J. D.; Denning, P. J.

    1986-01-01

    The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.
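
    The toy sketch below captures the basic SDM read/write mechanics described here: random hard-location addresses, activation of locations within a Hamming radius, counter updates on write, and majority-vote readout. The sizes, radius, and the hardware-oriented details of the appendices are illustrative simplifications, not Kanerva's or the authors' exact parameters.

```python
# Toy numpy sketch of a Sparse Distributed Memory.
import numpy as np

class SDM:
    def __init__(self, n_locations=2000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))  # hard locations
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, address):
        # Locations whose address lies within the Hamming radius are activated.
        dist = np.count_nonzero(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        bipolar = 2 * data - 1                 # map 0/1 bits to -1/+1 counter updates
        self.counters[self._active(address)] += bipolar

    def read(self, address):
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)          # majority vote per bit (ties read as 0)

rng = np.random.default_rng(1)
mem = SDM()
pattern = rng.integers(0, 2, 256)
mem.write(pattern, pattern)                    # autoassociative write
noisy = pattern.copy()
noisy[:20] ^= 1                                # flip 20 bits of the cue
print("bits recovered:", np.sum(mem.read(noisy) == pattern), "/ 256")
```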

  10. Cognitive Dissonance as an Instructional Tool for Understanding Chemical Representations

    ERIC Educational Resources Information Center

    Corradi, David; Clarebout, Geraldine; Elen, Jan

    2015-01-01

    Previous research on multiple external representations (MER) indicates that sequencing representations (compared with presenting them as a whole) can, in some cases, increase conceptual understanding if there is interference between internal and external representations. We tested this mechanism by sequencing different combinations of scientific…

  11. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on the L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, sparse regularization is limited by the time required to solve the resulting optimization problem. In order to further improve the calculation speed of sparse regularization, a modified method based on a separable approximation algorithm is proposed, using an adaptive step size and a preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798
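
    For orientation, the sketch below applies plain ISTA to a generic L1-regularized linear inverse problem of the kind described. The adaptive step size and preconditioning of the proposed method are not reproduced, and the sensitivity matrix and data are random stand-ins for a real EIT model.

```python
# Generic ISTA sketch for min_x 0.5*||J x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(J, b, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(J, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(J.shape[1])
    for _ in range(n_iter):
        grad = J.T @ (J @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
J = rng.standard_normal((208, 576))             # e.g. 208 measurements, 576 pixels (toy sizes)
x_true = np.zeros(576)
x_true[rng.choice(576, 10, replace=False)] = 1.0
b = J @ x_true + 0.01 * rng.standard_normal(208)
x_hat = ista(J, b)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 0.1))
```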

  12. Mean-field sparse optimal control

    PubMed Central

    Fornasier, Massimo; Piccoli, Benedetto; Rossi, Francesco

    2014-01-01

    We introduce the rigorous limit process connecting finite dimensional sparse optimal control problems with ODE constraints, modelling parsimonious interventions on the dynamics of a moving population divided into leaders and followers, to an infinite dimensional optimal control problem with a constraint given by a system of ODE for the leaders coupled with a PDE of Vlasov-type, governing the dynamics of the probability distribution of the followers. In the classical mean-field theory, one studies the behaviour of a large number of small individuals freely interacting with each other, by simplifying the effect of all the other individuals on any given individual by a single averaged effect. In this paper, we address instead the situation where the leaders are actually influenced also by an external policy maker, and we propagate its effect for the number N of followers going to infinity. The technical derivation of the sparse mean-field optimal control is realized by the simultaneous development of the mean-field limit of the equations governing the followers dynamics together with the Γ-limit of the finite dimensional sparse optimal control problems. PMID:25288818

  13. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]

  14. Imaging black holes with sparse modeling

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Akiyama, Kazunori; Tazaki, Fumie; Kuramochi, Kazuki; Ikeda, Shiro; Hada, Kazuhiro; Uemura, Makoto

    2016-03-01

    We introduce a new imaging method for radio interferometry based on sparse modeling. The direct observables in radio interferometry are visibilities, which are the Fourier transform of the astronomical image on the sky plane, and incomplete sampling of visibilities in the spatial frequency domain results in an under-determined problem, which has usually been solved by filling zeros into the un-sampled grids. In this paper we propose to solve this under-determined problem directly using sparse modeling without zero filling, which realizes super resolution, i.e., resolution higher than the standard diffraction limit. We show simulation results of sparse modeling for the Event Horizon Telescope (EHT) observations of super-massive black holes and demonstrate that our approach has significant merit for observations of black hole shadows expected to be realized in the near future. We also present some results with the method applied to real data, and discuss more advanced techniques for practical observations, such as imaging with closure phases as well as treating the effect of interstellar scattering.

  15. Optimized sparse representation-based classification method with weighted block and maximum likelihood model

    NASA Astrophysics Data System (ADS)

    He, Jun; Zuo, Tian; Sun, Bo; Wu, Xuewen; Chen, Chao

    2014-06-01

    This paper aims to apply sparse representation based classification (SRC) to face recognition with disguise or illumination variation. Having analyzed the characteristics of general object recognition and the principle of the SRC classifier, the authors focus on evaluating blocks of a probe sample and propose an optimized SRC method based on position-preserving weighted blocks and a maximum likelihood model. The principle and implementation of the proposed method are introduced in the article, and experiments on the Yale and AR face databases are reported. From the experimental results, it can be seen that the proposed optimized SRC method works better than existing methods.
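
    The sketch below shows the bare SRC pipeline that this proposal builds on: code a probe over the stacked training samples with an l1 penalty and assign the class with the smallest class-wise reconstruction residual. The weighted-block and maximum-likelihood refinements are not reproduced, and the data are synthetic.

```python
# Bare-bones sparse-representation-based classification (SRC).
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(train_X, train_y, probe, alpha=0.01):
    """train_X: (d, n) column-wise training samples; train_y: (n,) labels."""
    coder = Lasso(alpha=alpha, max_iter=10000, fit_intercept=False)
    coder.fit(train_X, probe)                 # l1-penalized coding of the probe
    coef = coder.coef_
    residuals = {}
    for c in np.unique(train_y):
        mask = (train_y == c)
        coef_c = np.where(mask, coef, 0.0)    # keep only this class's coefficients
        residuals[c] = np.linalg.norm(probe - train_X @ coef_c)
    return min(residuals, key=residuals.get)

# Toy data: two Gaussian "classes" in 100 dimensions, 20 samples each.
rng = np.random.default_rng(0)
means = [np.zeros(100), np.ones(100)]
train_X = np.hstack([rng.standard_normal((100, 20)) * 0.3 + m[:, None] for m in means])
train_y = np.repeat([0, 1], 20)
probe = rng.standard_normal(100) * 0.3 + means[1]
print("predicted class:", src_predict(train_X, train_y, probe))
```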

  16. Fusion of Depth and Intensity Data for Three-Dimensional Object Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Ramirez Cortes, Juan Manuel

    For humans, retinal images provide sufficient information for the complete understanding of three-dimensional shapes in a scene. The ultimate goal of computer vision is to develop an automated system able to reproduce some of the tasks performed in a natural way by human beings, such as recognition, classification, or analysis of the environment as a basis for further decisions. At the first level, referred to as early computer vision, the task is to extract symbolic descriptive information in a scene from a variety of sensory data. The second level is concerned with classification, recognition, or decision systems and the related heuristics that aid the processing of the available information. This research is concerned with a new approach to 3-D object representation and recognition using an interpolation scheme applied to the information from the fusion of range and intensity data. The range image acquisition uses a methodology based on a passive stereo-vision model originally developed to be used with a sequence of images. However, curved features, large disparities, and noisy input images are some of the problems associated with real imagery, which need to be addressed prior to applying the matching techniques in the spatial frequency domain. Some of the above-mentioned problems can only be solved by computationally intensive spatial domain algorithms. Regularization techniques are explored for surface recovery from sparse range data, and intensity images are incorporated in the final representation of the surface. As an important application, the problem of 3-D representation of retinal images for the extraction of quantitative information is addressed. Range information is also combined with intensity data to provide a more accurate numerical description based on aspect graphs. This representation is used as input to a three-dimensional object recognition system. Such an approach results in improved performance of 3-D object classifiers.

  17. Adaptive block-wise alphabet reduction scheme for lossless compression of images with sparse and locally sparse histograms

    NASA Astrophysics Data System (ADS)

    Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz

    2015-11-01

    We propose a new adaptive block-wise lossless image compression algorithm, which is based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate that is close to the entropy; however, a compression performance loss occurs when encoding images or blocks with a limited number of active symbols compared with the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Generally, most methods add one to the frequency count of each symbol from the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set that includes all the symbols actually present, called active symbols. This is an alternative to using the nominal alphabet when applying conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including the conventional arithmetic encoders, JPEG2000, and JPEG-LS.

  18. Computational representation of biological systems

    SciTech Connect

    Frazier, Zach; McDermott, Jason E.; Guerquin, Michal; Samudrala, Ram

    2009-04-20

    Integration of large and diverse biological data sets is a daunting problem facing systems biology researchers. Exploring the complex issues of data validation, integration, and representation, we present a systematic approach for the management and analysis of large biological data sets based on data warehouses. Our system has been implemented in the Bioverse, a framework combining diverse protein information from a variety of knowledge areas such as molecular interactions, pathway localization, protein structure, and protein function.

  19. Computer aided surface representation

    SciTech Connect

    Barnhill, R.E.

    1991-04-02

    Modern computing resources permit the generation of large amounts of numerical data. These large data sets, if left in numerical form, can be overwhelming. Such large data sets are usually discrete points from some underlying physical phenomenon. Because we need to evaluate the phenomenon at places where we don't have data, a continuous representation (a "surface") is required. A simple example is a weather map obtained from a discrete set of weather stations. (For more examples, including multi-dimensional ones, see the article by Dr. Rosemary Chang in the enclosed IRIS Universe.) In order to create a scientific structure encompassing the data, we construct an interpolating mathematical surface which can be evaluated at arbitrary locations. We can also display and analyze the results via interactive computer graphics. In our research we construct a very wide variety of surfaces for applied geometry problems that have sound theoretical foundations. However, our surfaces have the distinguishing feature that they are constructed to solve short- or long-term practical problems. This DOE-funded project has developed the premier research team in the subject of constructing surfaces (3D and higher dimensional) that provide smooth representations of real scientific and engineering information, including state-of-the-art computer graphics visualizations. However, our main contribution is in the development of fundamental constructive mathematical methods and visualization techniques which can be incorporated into a wide variety of applications. This project combines constructive mathematics, algorithms, and computer graphics, all applied to real problems. The project is a unique resource, considered by our peers to be a de facto national center for this type of research.

  20. Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells

    PubMed Central

    Franzius, Mathias; Sprekeler, Henning; Wiskott, Laurenz

    2007-01-01

    We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation, by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer. PMID:17784780

  1. Typical kernel size and number of sparse random matrices over Galois fields: A statistical physics approach

    NASA Astrophysics Data System (ADS)

    Alamino, R. C.; Saad, D.

    2008-06-01

    Using methods of statistical physics, we study the average number and kernel size of general sparse random matrices over Galois fields GF(q), with a given connectivity profile, in the thermodynamical limit of large matrices. We introduce a mapping of GF(q) matrices onto spin systems using the representation of the cyclic group of order q as the qth complex roots of unity. This representation facilitates the derivation of the average kernel size of random matrices using the replica approach, under the replica-symmetric ansatz, resulting in saddle point equations for general connectivity distributions. Numerical solutions are then obtained for particular cases by population dynamics. Similar techniques also allow us to obtain an expression for the exact and average numbers of random matrices for any general connectivity profile. We present numerical results for particular distributions.

  2. Sparse Labeling of Proteins: Structural Characterization from Long Range Constraints

    PubMed Central

    Prestegard, James H.; Agard, David A.; Moremen, Kelley W.; Lavery, Laura A.; Morris, Laura C.; Pederson, Kari

    2014-01-01

    Structural characterization of biologically important proteins faces many challenges associated with degradation of resolution as molecular size increases and loss of resolution improving tools such as perdeuteration when non-bacterial hosts must be used for expression. In these cases, sparse isotopic labeling (single or small subsets of amino acids) combined with long range paramagnetic constraints and improved computational modeling offer an alternative. This perspective provides a brief overview of this approach and two discussions of potential applications; one involving a very large system (an Hsp90 homolog) in which perdeuteration is possible and methyl-TROSY sequences can potentially be used to improve resolution, and one involving ligand placement in a glycosylated protein where resolution is achieved by single amino acid labeling (the sialyltransferase, ST6Gal1). This is not intended as a comprehensive review, but as a discussion of future prospects that promise impact on important questions in the structural biology area. PMID:24656078

  3. Sparse labeling of proteins: Structural characterization from long range constraints

    NASA Astrophysics Data System (ADS)

    Prestegard, James H.; Agard, David A.; Moremen, Kelley W.; Lavery, Laura A.; Morris, Laura C.; Pederson, Kari

    2014-04-01

    Structural characterization of biologically important proteins faces many challenges associated with degradation of resolution as molecular size increases and loss of resolution improving tools such as perdeuteration when non-bacterial hosts must be used for expression. In these cases, sparse isotopic labeling (single or small subsets of amino acids) combined with long range paramagnetic constraints and improved computational modeling offer an alternative. This perspective provides a brief overview of this approach and two discussions of potential applications; one involving a very large system (an Hsp90 homolog) in which perdeuteration is possible and methyl-TROSY sequences can potentially be used to improve resolution, and one involving ligand placement in a glycosylated protein where resolution is achieved by single amino acid labeling (the sialyltransferase, ST6Gal1). This is not intended as a comprehensive review, but as a discussion of future prospects that promise impact on important questions in the structural biology area.

  4. Sparse-Coding-Based Computed Tomography Image Reconstruction

    PubMed Central

    Yoon, Gang-Joon

    2013-01-01

    Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object based on projection scans of the object from several angles. There are numerous methods to reconstruct the original shape of the target object from scans, but they are still dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches such as the algebraic reconstruction technique (ART), while keeping the recovery only slightly affected by random noise (a small ℓ2-norm error) and by the limited projection scans (a small ℓ1-norm error), we propose a medical image reconstruction methodology using the properties of sparse coding. Sparse coding is a very powerful matrix factorization method in which each pixel point is represented as a linear combination of a small number of basis vectors. PMID:23576898

  5. Predicting Homogeneous Pilus Structure from Monomeric Data and Sparse Constraints.

    PubMed

    Xiao, Ke; Shu, Chuanjun; Yan, Qin; Sun, Xiao

    2015-01-01

    Type IV pili (T4P) and T2SS (Type II Secretion System) pseudopili are filaments extending beyond microbial surfaces, comprising homologous subunits called "pilins." In this paper, we present a new approach to predict pseudo-atomic models of pili by combining ambiguous symmetric constraints with sparse distance information obtained from experiments, based neither on electron microscope (EM) maps nor on accurate a priori symmetric details. The approach was validated by the reconstruction of the gonococcal (GC) pilus from Neisseria gonorrhoeae, the type IVb toxin-coregulated pilus (TCP) from Vibrio cholerae, and the pseudopilus of the pullulanase T2SS (the PulG pilus) from Klebsiella oxytoca. In addition, analyses of computational errors showed that subunits should be treated cautiously, as they are slightly flexible and not strictly rigid bodies. A global sampling in a wider range was also implemented and implied that a pilus might have more than one but fewer than many possible intact conformations. PMID:26064954

  6. Galaxy redshift surveys with sparse sampling

    SciTech Connect

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-12-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ~ 10 Gpc^3) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.

  7. Efficient algorithm for sparse coding and dictionary learning with applications to face recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Zhong; Feng, Guocan

    2015-03-01

    Sparse representation has been successfully applied to pattern recognition problems in recent years. The most common way for producing sparse coding is to use the l1-norm regularization. However, the l1-norm regularization only favors sparsity and does not consider locality. It may select quite different bases for similar samples to favor sparsity, which is disadvantageous to classification. Besides, solving the l1-minimization problem is time consuming, which limits its applications in large-scale problems. We propose an improved algorithm for sparse coding and dictionary learning. This algorithm takes both sparsity and locality into consideration. It selects part of the dictionary columns that are close to the input sample for coding and imposes locality constraint on these selected dictionary columns to obtain discriminative coding for classification. Because an analytic solution of the coding is derived by only using part of the dictionary columns, the proposed algorithm is much faster than the l1-based algorithms for classification. Besides, we also derive an analytic solution for updating the dictionary in the training process. Experiments conducted on five face databases show that the proposed algorithm has better performance than the competing algorithms in terms of accuracy and efficiency.
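
    A minimal sketch of coding with a locality constraint, in the spirit of the described algorithm, is shown below: only the k dictionary atoms nearest to the input are kept, and a small ridge-regularized least-squares problem is solved analytically on them. The dictionary, k, and the regularization weight are illustrative choices, not the authors' settings.

```python
# Sketch of locality-constrained coding: select the k nearest atoms and solve a
# small regularized least-squares problem analytically instead of a full l1 problem.
import numpy as np

def local_code(x, D, k=5, lam=1e-4):
    """x: (d,), D: (d, n_atoms). Returns a code with at most k nonzero entries."""
    dists = np.linalg.norm(D - x[:, None], axis=0)
    idx = np.argsort(dists)[:k]              # k dictionary atoms closest to x
    Dk = D[:, idx]
    # Analytic solution of min_c ||x - Dk c||^2 + lam ||c||^2 over the selected atoms.
    c_k = np.linalg.solve(Dk.T @ Dk + lam * np.eye(k), Dk.T @ x)
    code = np.zeros(D.shape[1])
    code[idx] = c_k
    return code

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x = D[:, 10] * 0.9 + 0.05 * rng.standard_normal(64)
code = local_code(x, D)
print("nonzero entries:", np.nonzero(code)[0])
```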

  8. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGESBeta

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  9. Edge-preserving traveltime tomography with a sparse multiscale imaging constraint

    NASA Astrophysics Data System (ADS)

    Sun, Mengyao; Zhang, Jie

    2016-08-01

    Solving the near-surface statics problem is often the first step in land or shallow marine seismic data processing. Near-surface velocity structures can be very complex, with large velocity contrasts within a small depth range. First-arrival traveltime tomography is a common approach for near-surface imaging. However, first-arrival traveltime tomography generally produces smooth model solutions due to the Tikhonov regularization, which constrains the model for minimum structures. Failing to resolve high velocity contrasts may result in inaccurate static values for reflection imaging. In this study, we develop a sparse multiscale imaging constraint for traveltime tomography to address this issue. In this method, we assume that the velocity model is sparse under a known wavelet basis. According to the model sparse representation, we first obtain the low wavenumber velocity structures, followed by the finer features, by alternately solving two sets of inversion problems. The synthetic tests and two real data applications show that this method exhibits better performance in reconstructing near-surface models with high velocity contrasts.

  10. Sparse grid discontinuous Galerkin methods for high-dimensional elliptic equations

    NASA Astrophysics Data System (ADS)

    Wang, Zixuan; Tang, Qi; Guo, Wei; Cheng, Yingda

    2016-06-01

    This paper constitutes our initial effort in developing sparse grid discontinuous Galerkin (DG) methods for high-dimensional partial differential equations (PDEs). Over the past few decades, DG methods have gained popularity in many applications due to their distinctive features. However, they are often deemed too costly because of the large degrees of freedom of the approximation space, which are the main bottleneck for simulations in high dimensions. In this paper, we develop sparse grid DG methods for elliptic equations with the aim of breaking the curse of dimensionality. Using a hierarchical basis representation, we construct a sparse finite element approximation space, reducing the degrees of freedom from the standard O(h^-d) to O(h^-1 |log2 h|^(d-1)) for d-dimensional problems, where h is the uniform mesh size in each dimension. Our method, based on the interior penalty (IP) DG framework, can achieve accuracy of O(h^k |log2 h|^(d-1)) in the energy norm, where k is the degree of the polynomials used. Error estimates are provided and confirmed by numerical tests in multi-dimensions.
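
    The short script below simply evaluates the two degree-of-freedom counts quoted above for a few mesh sizes, to make the gap between full and sparse grids concrete; the dimension and refinement levels are arbitrary choices.

```python
# Quick numerical illustration of the stated scalings: a full tensor-product grid
# costs O(h^-d) degrees of freedom, while the sparse grid costs O(h^-1 |log2 h|^(d-1)).
import math

d = 4
for level in range(3, 8):
    h = 2.0 ** (-level)
    full = h ** (-d)
    sparse = h ** (-1) * abs(math.log2(h)) ** (d - 1)
    print(f"h = 2^-{level}:  full grid ~ {full:.2e}   sparse grid ~ {sparse:.2e}")
```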

  11. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient. PMID:25531948
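
    The sketch below shows the reweighting idea in its simplest setting, l1-style sparse recovery under an equality constraint; the joint low-rank plus sparse Schatten-p / l2,q formulation analyzed in the paper is not reproduced, and the problem data are synthetic.

```python
# Minimal IRLS sketch for sparse recovery: approximate min ||x||_1 s.t. Ax = b by
# repeatedly solving weighted least-squares problems with weights 1/(|x_i| + eps).
import numpy as np

def irls_sparse(A, b, n_iter=50, eps=1e-6):
    x = A.T @ np.linalg.solve(A @ A.T, b)          # least-norm initial guess
    for _ in range(n_iter):
        w_inv = np.abs(x) + eps                    # inverse weights (diagonal of W^-1)
        AWinv = A * w_inv                          # equals A @ diag(w_inv)
        # Closed form of min x^T W x subject to A x = b (weighted least squares).
        x = w_inv * (A.T @ np.linalg.solve(AWinv @ A.T, b))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = irls_sparse(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```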

  12. A novel sparse coding algorithm for classification of tumors based on gene expression data.

    PubMed

    Kolali Khormuji, Morteza; Bazrafkan, Mehrnoosh

    2016-06-01

    High-dimensional genomic and proteomic data play an important role in many applications in medicine such as prognosis of diseases, diagnosis, prevention and molecular biology, to name a few. Classifying such data is a challenging task due to the various issues such as curse of dimensionality, noise and redundancy. Recently, some researchers have used the sparse representation (SR) techniques to analyze high-dimensional biological data in various applications in classification of cancer patients based on gene expression datasets. A common problem with all SR-based biological data classification methods is that they cannot utilize the topological (geometrical) structure of data. More precisely, these methods transfer the data into sparse feature space without preserving the local structure of data points. In this paper, we proposed a novel SR-based cancer classification algorithm based on gene expression data that takes into account the geometrical information of all data. Precisely speaking, we incorporate the local linear embedding algorithm into the sparse coding framework, by which we can preserve the geometrical structure of all data. For performance comparison, we applied our algorithm on six tumor gene expression datasets, by which we demonstrate that the proposed method achieves higher classification accuracy than state-of-the-art SR-based tumor classification algorithms. PMID:26337064

  13. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  14. Parallel preconditioning techniques for sparse CG solvers

    SciTech Connect

    Basermann, A.; Reichel, B.; Schelthoff, C.

    1996-12-31

    Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the conditioning of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques for the CG method. In particular, for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iterations of the simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
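
    As a reference point, the sketch below is a plain preconditioned CG loop with the simplest (Jacobi, i.e. diagonal) preconditioner, showing where a preconditioner enters the iteration. The polynomial and incomplete Cholesky variants investigated here are not reproduced, and the test matrix is a toy 1-D Laplacian.

```python
# Minimal preconditioned conjugate-gradient loop with a Jacobi (diagonal) preconditioner.
import numpy as np
import scipy.sparse as sp

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)                     # apply the preconditioner to the residual
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

n = 500
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")   # toy 1-D Laplacian
b = np.ones(n)
diag = A.diagonal()
x, iters = pcg(A, b, M_inv=lambda r: r / diag)
print("converged in", iters, "iterations")
```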

  15. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.

  16. Sparse Multivariate Regression With Covariance Estimation

    PubMed Central

    Rothman, Adam J.; Levina, Elizaveta; Zhu, Ji

    2014-01-01

    We propose a procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for correlation of the response variables. This method, which we call multivariate regression with covariance estimation (MRCE), involves penalized likelihood with simultaneous estimation of the regression coefficients and the covariance structure. An efficient optimization algorithm and a fast approximation are developed for computing MRCE. Using simulation studies, we show that the proposed method outperforms relevant competitors when the responses are highly correlated. We also apply the new method to a finance example on predicting asset returns. An R-package containing this dataset and code for computing MRCE and its approximation is available online. PMID:24963268

  17. Sparse dynamics for partial differential equations

    PubMed Central

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D.; Osher, Stanley

    2013-01-01

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our