Science.gov

Sample records for sparse signal reconstruction

  1. Robust Methods for Sensing and Reconstructing Sparse Signals

    ERIC Educational Resources Information Center

    Carrillo, Rafael E.

    2012-01-01

    Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…

  2. Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals

    DTIC Science & Technology

    2014-08-01

    Exponential decay of reconstruction error from binary measurements of sparse signals. Richard Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan…August 1, 2014. Abstract: Binary measurements arise naturally in a variety of…greatly improve the ability to reconstruct a signal from binary measurements. This is exemplified by one-bit compressed sensing, which takes the

  3. Sparse reconstruction of blade tip-timing signals for multi-mode blade vibration monitoring

    NASA Astrophysics Data System (ADS)

    Lin, Jun; Hu, Zheng; Chen, Zhong-Sheng; Yang, Yong-Min; Xu, Hai-Long

    2016-12-01

    Severe blade vibrations may shorten the useful life of high-speed blades. Non-contact measurement using blade tip-timing (BTT) technology is becoming a promising approach to blade vibration monitoring. However, blade tip-timing signals are typically under-sampled, and extracting the characteristic features of unknown multi-mode blade vibrations from these under-sampled signals is a major challenge. In this paper, a novel BTT analysis method for reconstructing unknown multi-mode blade vibration signals is proposed. The method consists of two key steps. First, a sparse representation (SR) mathematical model for sparse blade tip-timing signals is built. Second, a multi-mode blade vibration reconstruction algorithm is proposed to solve this SR problem. Experiments are carried out to validate the feasibility of the proposed method. The main advantage of this method is its ability to reconstruct unknown multi-mode blade vibration signals with high accuracy. The minimal probe number requirements are also presented to provide guidelines for BTT system design.

  4. Atomic library optimization for pulse ultrasonic sparse signal decomposition and reconstruction

    NASA Astrophysics Data System (ADS)

    Song, Shoupeng; Li, Yingxue; Dogandžić, Aleksandar

    2016-02-01

    Compressive sampling of pulse ultrasonic NDE signals could bring significant savings in the data acquisition process. Sparse representation of these signals using an atomic library is key to their interpretation and reconstruction from compressive samples. However, the obstacles to the practical applicability of such representations are the large size of the atomic library and the computational complexity of the sparse decomposition and reconstruction. To help solve these problems, we develop a method for optimizing the parameter ranges of the traditional Gabor-atom library to match a real pulse ultrasonic signal in terms of correlation. As a result of this atomic-library optimization, the number of atoms is greatly reduced. Numerical simulations compare the proposed approach with the traditional method: both the time efficiency and the signal reconstruction energy error are superior to those of the traditional method, even with a small-scale atomic library. The performance of the proposed method is also explored under different noise levels. Finally, we apply the proposed method to real pipeline ultrasonic testing data, and the results indicate that our reduced atomic library outperforms the traditional library.
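
    The abstract above centers on restricting Gabor-atom parameter ranges to shrink the dictionary. As a rough illustration, the sketch below builds a small discrete Gabor dictionary over reduced parameter grids; the specific ranges, sampling rate, and grid sizes are illustrative assumptions, not the authors' values.

      import numpy as np

      def gabor_atom(n, t0, f, s, fs):
          # Unit-norm Gabor atom: Gaussian envelope times a cosine.
          t = np.arange(n) / fs
          g = np.exp(-np.pi * ((t - t0) / s) ** 2) * np.cos(2 * np.pi * f * (t - t0))
          return g / np.linalg.norm(g)

      def build_dictionary(n, centers, freqs, scales, fs):
          # Stack atoms column-wise; restricting the parameter grids keeps it small.
          atoms = [gabor_atom(n, t0, f, s, fs)
                   for t0 in centers for f in freqs for s in scales]
          return np.column_stack(atoms)

      fs, n = 100e6, 256                       # assumed 100 MHz sampling, 256 samples
      D = build_dictionary(n,
                           centers=np.linspace(0.5e-6, 2.0e-6, 16),
                           freqs=np.linspace(4e6, 6e6, 8),    # pulse near 5 MHz
                           scales=np.linspace(0.2e-6, 0.6e-6, 4),
                           fs=fs)
      print(D.shape)                           # (256, 512), far smaller than a full library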

  5. A fast and accurate sparse continuous signal reconstruction by homotopy DCD with non-convex regularization.

    PubMed

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-03-26

    In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms, together with related analysis.

  6. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms, together with related analysis. PMID:24675758

  7. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations, so an appropriate regularization method must be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
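
    The stabilization details of SOMP are not spelled out in this record, but it builds on orthogonal matching pursuit. For orientation, here is a minimal sketch of plain OMP with a generic measurement matrix; SOMP's regularization steps are not reproduced.

      import numpy as np

      def omp(A, y, k, tol=1e-8):
          # Greedy OMP: pick the column most correlated with the residual,
          # then re-fit the coefficients on the chosen support by least squares.
          support, x = [], np.zeros(A.shape[1])
          residual = y.copy()
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
              if np.linalg.norm(residual) < tol:
                  break
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256)) / np.sqrt(64)
      x_true = np.zeros(256); x_true[[10, 50, 200]] = [1.5, -2.0, 0.8]
      print(np.flatnonzero(omp(A, A @ x_true, k=3)))   # [10, 50, 200]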

  8. Sparse signal reconstruction from polychromatic X-ray CT measurements via mass attenuation discretization

    SciTech Connect

    Gu, Renliang; Dogandžić, Aleksandar

    2014-02-18

    We propose a method for reconstructing sparse images from polychromatic X-ray computed tomography (CT) measurements via mass attenuation coefficient discretization. The material of the inspected object and the incident spectrum are assumed to be unknown. We rewrite Lambert-Beer's law in terms of integral expressions of mass attenuation and discretize the resulting integrals. We then present a penalized constrained least-squares optimization approach for reconstructing the underlying object from log-domain measurements, where an active set approach is employed to estimate incident energy density parameters and the nonnegativity and sparsity of the image density map are imposed using negative-energy and smooth ℓ1-norm penalty terms. We propose a two-step scheme for refining the mass attenuation discretization grid by using a higher sampling rate over the range with higher photon energy, and eliminating the discretization points that have little effect on the accuracy of the forward projection model. This refinement allows us to successfully handle the characteristic lines (Dirac impulses) in the incident energy density spectrum. We compare the proposed method with the standard filtered backprojection, which ignores the polychromatic nature of the measurements and the sparsity of the image density map. Numerical simulations using both realistic simulated and real X-ray CT data are presented.

  9. Sparse signal reconstruction from polychromatic X-ray CT measurements via mass attenuation discretization

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2014-02-01

    We propose a method for reconstructing sparse images from polychromatic X-ray computed tomography (CT) measurements via mass attenuation coefficient discretization. The material of the inspected object and the incident spectrum are assumed to be unknown. We rewrite Lambert-Beer's law in terms of integral expressions of mass attenuation and discretize the resulting integrals. We then present a penalized constrained least-squares optimization approach for reconstructing the underlying object from log-domain measurements, where an active set approach is employed to estimate incident energy density parameters and the nonnegativity and sparsity of the image density map are imposed using negative-energy and smooth ℓ1-norm penalty terms. We propose a two-step scheme for refining the mass attenuation discretization grid by using a higher sampling rate over the range with higher photon energy, and eliminating the discretization points that have little effect on the accuracy of the forward projection model. This refinement allows us to successfully handle the characteristic lines (Dirac impulses) in the incident energy density spectrum. We compare the proposed method with the standard filtered backprojection, which ignores the polychromatic nature of the measurements and the sparsity of the image density map. Numerical simulations using both realistic simulated and real X-ray CT data are presented.

  10. A unified approach to sparse signal processing

    NASA Astrophysics Data System (ADS)

    Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Jonathon

    2012-12-01

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally
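
    To make the Prony/annihilating-filter connection concrete, the sketch below estimates the frequencies of a signal that is sparse in the frequency domain by fitting a linear-prediction (annihilating) filter and reading frequencies off its roots; this is a textbook version, not code from the tutorial.

      import numpy as np

      def prony_freqs(x, p, fs):
          # A p-sparse spectrum makes x obey a length-p linear recursion;
          # the roots of its characteristic polynomial give the frequencies.
          N = len(x)
          X = np.column_stack([x[p - 1 - k: N - 1 - k] for k in range(p)])
          a, *_ = np.linalg.lstsq(X, -x[p:], rcond=None)
          roots = np.roots(np.concatenate(([1.0], a)))
          return np.angle(roots) * fs / (2 * np.pi)

      fs = 100.0
      t = np.arange(64) / fs
      x = np.cos(2 * np.pi * 12 * t) + 0.5 * np.cos(2 * np.pi * 31 * t)
      print(np.sort(np.abs(prony_freqs(x, p=4, fs=fs))))   # ~[12, 12, 31, 31]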

  11. Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging

    PubMed Central

    Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan

    2016-01-01

    Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l1-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging. PMID:27783040

  12. Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging.

    PubMed

    Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan

    2016-10-24

    Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l₁-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging.
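
    The deblurring step described above (l1-regularized reconstruction with a known point spread function) can be illustrated in one dimension with a basic iterative shrinkage-thresholding (ISTA) loop. ISTA is our illustrative solver; the paper does not specify its optimization algorithm, and the kernel below is a stand-in PSF.

      import numpy as np

      def ista_deconv(y, h, lam, step, n_iter):
          # ISTA for min_x 0.5*||h*x - y||^2 + lam*||x||_1.
          # For an odd-length kernel, the adjoint of "convolve with h"
          # is convolution with the reversed kernel.
          x = np.zeros_like(y)
          for _ in range(n_iter):
              r = np.convolve(x, h, mode="same") - y
              x = x - step * np.convolve(r, h[::-1], mode="same")
              x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
          return x

      rng = np.random.default_rng(1)
      h = np.exp(-0.5 * np.linspace(-3, 3, 21) ** 2); h /= h.sum()   # stand-in PSF
      x_true = np.zeros(200); x_true[[40, 90, 150]] = [1.0, -0.7, 0.5]
      y = np.convolve(x_true, h, mode="same") + 0.01 * rng.standard_normal(200)
      x_hat = ista_deconv(y, h, lam=0.01, step=0.5, n_iter=2000)
      print(np.flatnonzero(np.abs(x_hat) > 0.1))    # defect locations ~[40, 90, 150]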

  13. Compressed Sensing Doppler Ultrasound Reconstruction Using Block Sparse Bayesian Learning.

    PubMed

    Lorintiu, Oana; Liebgott, Herve; Friboulet, Denis

    2016-04-01

    In this paper we propose a framework for duplex Doppler ultrasound systems, which must interleave the acquisition and display of a B-mode image and of the pulsed Doppler spectrogram. In a recent study (Richy, 2013), we showed that compressed sensing-based reconstruction of the Doppler signal allows the number of Doppler emissions to be reduced, yielding better results than traditional interpolation, and results at least equivalent to, or depending on the configuration better than, the method estimating the signal from sparse data sets given in (Jensen, 2006). Here we improve on that study by using a novel framework for randomly interleaving Doppler and US emissions. The proposed method reconstructs the Doppler signal segment by segment using a CS reconstruction based on a block sparse Bayesian learning (BSBL) algorithm. The interest of such a framework in the context of duplex Doppler lies in the unique ability of BSBL to exploit block-correlated signals and to recover non-sparse signals. The performance of the technique is evaluated on simulated data as well as experimental in vivo data and compared with the recent results in (Richy, 2013).

  14. Surface reconstruction from sparse fringe contours

    SciTech Connect

    Cong, G.; Parvin, B.

    1998-08-10

    A new approach for the reconstruction of 3D surfaces from 2D cross-sectional contours is presented. Using the so-called "Equal Importance Criterion," we reconstruct the surface based on the assumption that every point in the region contributes equally to the surface reconstruction process. In this context, the problem is formulated in terms of a partial differential equation (PDE), and we show that the solution for dense contours can be efficiently derived from the distance transform. In the case of sparse contours, we add a regularization term to ensure smoothness in surface recovery. The proposed technique allows for surface recovery at any desired resolution. The main advantage of the proposed method is that inherent problems due to correspondence, tiling, and branching are avoided. Furthermore, the computed high-resolution surface is better represented for subsequent geometric analysis. We present results on both synthetic and real data.

  15. Image reconstruction from photon sparse data

    PubMed Central

    Mertens, Lena; Sonnleitner, Matthias; Leach, Jonathan; Agnew, Megan; Padgett, Miles J.

    2017-01-01

    We report an algorithm for reconstructing images when the average number of photons recorded per pixel is of order unity, i.e. photon-sparse data. The image optimisation algorithm minimises a cost function incorporating both a Poissonian log-likelihood term based on the deviation of the reconstructed image from the measured data and a regularization term based upon the sum of the moduli of the second spatial derivatives of the reconstructed image pixel intensities. The balance between these two terms is set by a bootstrapping technique where the target value of the log-likelihood term is deduced from a smoothed version of the original data. When compared to the original data, the processed images exhibit lower residuals with respect to the true object. We use photon-sparse data from two different experimental systems, one system based on a single-photon, avalanche photo-diode array and the other system on a time-gated, intensified camera. However, this same processing technique could most likely be applied to any low photon-number image irrespective of how the data is collected. PMID:28169363

  16. Image reconstruction from photon sparse data

    NASA Astrophysics Data System (ADS)

    Mertens, Lena; Sonnleitner, Matthias; Leach, Jonathan; Agnew, Megan; Padgett, Miles J.

    2017-02-01

    We report an algorithm for reconstructing images when the average number of photons recorded per pixel is of order unity, i.e. photon-sparse data. The image optimisation algorithm minimises a cost function incorporating both a Poissonian log-likelihood term based on the deviation of the reconstructed image from the measured data and a regularization term based upon the sum of the moduli of the second spatial derivatives of the reconstructed image pixel intensities. The balance between these two terms is set by a bootstrapping technique where the target value of the log-likelihood term is deduced from a smoothed version of the original data. When compared to the original data, the processed images exhibit lower residuals with respect to the true object. We use photon-sparse data from two different experimental systems, one system based on a single-photon, avalanche photo-diode array and the other system on a time-gated, intensified camera. However, this same processing technique could most likely be applied to any low photon-number image irrespective of how the data is collected.

  17. Image reconstruction from photon sparse data.

    PubMed

    Mertens, Lena; Sonnleitner, Matthias; Leach, Jonathan; Agnew, Megan; Padgett, Miles J

    2017-02-07

    We report an algorithm for reconstructing images when the average number of photons recorded per pixel is of order unity, i.e. photon-sparse data. The image optimisation algorithm minimises a cost function incorporating both a Poissonian log-likelihood term based on the deviation of the reconstructed image from the measured data and a regularization term based upon the sum of the moduli of the second spatial derivatives of the reconstructed image pixel intensities. The balance between these two terms is set by a bootstrapping technique where the target value of the log-likelihood term is deduced from a smoothed version of the original data. When compared to the original data, the processed images exhibit lower residuals with respect to the true object. We use photon-sparse data from two different experimental systems, one system based on a single-photon, avalanche photo-diode array and the other system on a time-gated, intensified camera. However, this same processing technique could most likely be applied to any low photon-number image irrespective of how the data is collected.
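
    The cost function described in this record (a Poissonian log-likelihood plus the sum of the moduli of second spatial derivatives) is easy to write down directly. The sketch below evaluates and minimizes it on a toy photon-sparse image with a general-purpose optimizer; the bootstrapped balancing of the two terms is not reproduced, and the fixed weight lam is an assumption.

      import numpy as np
      from scipy.optimize import minimize

      def cost(img_flat, data, lam, shape, eps=1e-9):
          # Poisson negative log-likelihood + sum of |second derivatives|.
          img = img_flat.reshape(shape)
          nll = np.sum(img - data * np.log(img + eps))
          d2x = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
          d2y = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
          return nll + lam * (np.abs(d2x).sum() + np.abs(d2y).sum())

      rng = np.random.default_rng(2)
      truth = np.ones((16, 16)); truth[4:12, 4:12] = 3.0
      data = rng.poisson(truth).astype(float)     # ~order-unity photons per pixel
      res = minimize(cost, data.clip(0.1).ravel(), args=(data, 0.1, truth.shape),
                     method="L-BFGS-B", bounds=[(1e-6, None)] * data.size)
      recon = res.x.reshape(truth.shape)
      print(np.abs(recon - truth).mean(), np.abs(data - truth).mean())  # regularized vs raw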

  18. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided waves obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition with LDVs is generally a slow operation because the wave propagation to be recorded must be repeated for each point measurement and the initial conditions must be re-established between measurements. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude, by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed with a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave

  19. Sparse reconstruction of correlated multichannel activity.

    PubMed

    Peelman, Sem; Van der Herten, Joachim; De Vos, Maarten; Lee, Wen-Shin; Van Huffel, Sabine; Cuyt, Annie

    2013-01-01

    Parametric methods for modeling sinusoidal signals with line spectra have been studied for decades. In general, these methods start by representing each sinusoidal component by means of two complex exponential functions, thereby doubling the number of unknown parameters. Recently, a Hankel-plus-Toeplitz matrix pencil method was proposed which directly models sinusoidal signals with discrete spectral content. Compared to its counterpart, which uses a Hankel matrix pencil, it halves the required number of time-domain samples and reduces the size of the involved linear systems. The aim of this paper is twofold. Firstly, to show that this Hankel-plus-Toeplitz matrix pencil also applies to continuous spectra. Secondly, to explore its use in the reconstruction of real-life signals. Promising preliminary results in the reconstruction of correlated multichannel electroencephalographic (EEG) activity are presented. A principal component analysis preprocessing step is carried out to exploit the redundancy in the channel domain. Then the reduced signal representation is successfully reconstructed from fewer samples using the Hankel-plus-Toeplitz matrix pencil. The obtained results encourage the future development of this matrix pencil method along the lines of well-established spectral analysis methods.
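
    The Hankel-plus-Toeplitz pencil of this record is not reproduced here, but the underlying idea can be shown with the classical Hankel matrix pencil: for a sum of complex exponentials, the dominant eigenvalues of pinv(Y0) @ Y1 are the signal poles. A minimal sketch, with an assumed pencil size L = N/2:

      import numpy as np

      def pencil_freqs(x, p, fs):
          # Shifted Hankel matrices built from the samples; the p dominant
          # eigenvalues of pinv(Y0) @ Y1 equal the poles exp(j*2*pi*f/fs).
          N = len(x); L = N // 2
          Y = np.array([x[i:i + L + 1] for i in range(N - L)])
          Y0, Y1 = Y[:, :-1], Y[:, 1:]
          vals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
          poles = vals[np.argsort(-np.abs(vals))][:p]
          return np.angle(poles) * fs / (2 * np.pi)

      fs = 200.0
      t = np.arange(60) / fs
      x = np.exp(2j * np.pi * 40 * t) + 0.7 * np.exp(2j * np.pi * 70 * t)
      print(np.sort(pencil_freqs(x, p=2, fs=fs)))   # ~[40, 70]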

  20. Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction

    NASA Astrophysics Data System (ADS)

    Ding, Xiaoxi; He, Qingbo

    2016-12-01

    In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. This method introduces image sparse reconstruction into the TFM analysis framework. Owing to the excellent denoising performance of the TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). Then, the TF distribution (TFD) of the raw signal in a reconstructed phase space is re-expressed as the sum of the learned TF atoms multiplied by the corresponding coefficients. Finally, the one-dimensional signal is recovered by inverting the TF analysis (TFA), and the amplitude information of the raw signal is well reconstructed. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction. Moreover, the combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis for bearing fault feature extraction.

  1. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.

  2. Time-frequency signature sparse reconstruction using chirp dictionary

    NASA Astrophysics Data System (ADS)

    Nguyen, Yen T. H.; Amin, Moeness G.; Ghogho, Mounir; McLernon, Des

    2015-05-01

    This paper considers local sparse reconstruction of the time-frequency signatures of windowed non-stationary radar returns. These signals can be considered instantaneously narrowband, so the local time-frequency behavior can be recovered accurately from incomplete observations. The typically employed sinusoidal dictionary places competing demands on the window length: it confronts conflicting requirements on the number of measurements for exact recovery and on sparsity. In this paper, we use a chirp dictionary at each window position to determine the signal's instantaneous frequency laws. This approach considerably mitigates the problems of the sinusoidal dictionary and enables the use of longer windows for accurate time-frequency representations. It also reduces the picket-fence effect by introducing a new parameter, the chirp rate α. Simulation examples are provided, demonstrating the superior performance of the local chirp dictionary over its sinusoidal counterpart.
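
    As a rough sketch of the chirp-dictionary idea, the code below builds windowed chirp atoms over grids of start frequency and chirp rate α, then picks the atom best matching a windowed segment. All grids and signal parameters are illustrative assumptions.

      import numpy as np

      def chirp_atom(n, f0, alpha, fs):
          # Unit-norm windowed chirp with start frequency f0 and rate alpha (Hz/s).
          t = np.arange(n) / fs
          a = np.hanning(n) * np.cos(2 * np.pi * (f0 * t + 0.5 * alpha * t ** 2))
          return a / np.linalg.norm(a)

      fs, n = 1000.0, 128
      freqs = np.arange(50, 450, 10.0)
      rates = np.arange(-2000, 2001, 500.0)        # the extra chirp-rate axis
      D = np.column_stack([chirp_atom(n, f0, al, fs)
                           for f0 in freqs for al in rates])

      t = np.arange(n) / fs                        # a windowed chirp segment
      seg = np.hanning(n) * np.cos(2 * np.pi * (200 * t + 0.5 * 1000 * t ** 2))
      k = int(np.argmax(np.abs(D.T @ seg)))
      print(freqs[k // len(rates)], rates[k % len(rates)])   # 200.0 1000.0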

  3. Pruning-Based Sparse Recovery for Electrocardiogram Reconstruction from Compressed Measurements

    PubMed Central

    Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2017-01-01

    Due to the necessity of the low-power implementation of newly-developed electrocardiogram (ECG) sensors, exact ECG data reconstruction from the compressed measurements has received much attention in recent years. Our interest lies in improving the compression ratio (CR), as well as the ECG reconstruction performance of the sparse signal recovery. To this end, we propose a sparse signal reconstruction method by pruning-based tree search, which attempts to choose the globally-optimal solution by minimizing the cost function. In order to achieve low complexity for the real-time implementation, we employ a novel pruning strategy to avoid exhaustive tree search. Through the restricted isometry property (RIP)-based analysis, we show that the exact recovery condition of our approach is more relaxed than any of the existing methods. Through the simulations, we demonstrate that the proposed approach outperforms the existing sparse recovery methods for ECG reconstruction. PMID:28067856

  4. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging

    PubMed Central

    Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.

    2017-01-01

    Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to the ground truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before the frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862

  5. Beam hardening correction for sparse-view CT reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2015-03-01

    Beam hardening, which is caused by spectrum polychromatism of the X-ray beam, may result in various artifacts in the reconstructed image and degrade image quality. The artifacts would be further aggravated for the sparse-view reconstruction due to insufficient sampling data. Considering the advantages of the total-variation (TV) minimization in CT reconstruction with sparse-view data, in this paper, we propose a beam hardening correction method for sparse-view CT reconstruction based on Brabant's modeling. In this correction model for beam hardening, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and can be applied in an iterative framework, such as simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), the TV minimization can recover images when only a limited number of projections are available. The proposed method does not need prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method can provide better reconstructed images from sparse-view projection data, with effective suppression of artifacts caused by beam hardening. With appropriate modeling of other degrading effects such as photon scattering, the proposed framework may provide a new way for low-dose CT imaging.
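
    The abstract integrates its correction model into an iterative projector such as SART. For reference, a bare SART update (without the beam-hardening correction or the TV step) looks like the sketch below; the toy system matrix is a stand-in for a real projector.

      import numpy as np

      def sart(A, b, n_iter, lam=1.0):
          # Additive update normalized by the row and column sums of A;
          # a corrected forward projector would replace A @ x here.
          row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1
          col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x + lam * (A.T @ ((b - A @ x) / row_sum)) / col_sum
              x = np.clip(x, 0, None)              # attenuation is nonnegative
          return x

      rng = np.random.default_rng(3)
      A = rng.random((120, 64)) * (rng.random((120, 64)) < 0.2)   # toy projector
      x_true = rng.random(64)
      x_hat = sart(A, A @ x_true, n_iter=500)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # decays toward 0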

  6. An evaluation of GPU acceleration for sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Braun, Thomas R.

    2010-04-01

    Image processing applications typically parallelize well. This gives a developer interested in data throughput several different implementation options, including multiprocessor machines, general-purpose computation on the graphics processor, and custom gate-array designs. Herein, we investigate the first two options for dictionary learning and sparse reconstruction, specifically focusing on the K-SVD algorithm for dictionary learning and Batch Orthogonal Matching Pursuit for sparse reconstruction. These methods have been shown to provide state-of-the-art results for image denoising, classification, and object recognition. We explore the GPU implementation and show that GPUs are not significantly better or worse than CPUs for this application.

  7. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  8. Multi-shell diffusion signal recovery from sparse measurements

    PubMed Central

    Rathi, Y.; Michailovich, O.; Laun, F.; Setsompop, K.; Grant, P. E.; Westin, C-F

    2014-01-01

    For accurate estimation of the ensemble average diffusion propagator (EAP), traditional multi-shell diffusion imaging (MSDI) approaches require acquisition of diffusion signals for a range of b-values. However, this makes the acquisition time too long for several types of patients, making it difficult to use in a clinical setting. In this work, we propose a new method for the reconstruction of diffusion signals in the entire q-space from highly under-sampled sets of MSDI data, thus reducing the scan time significantly. In particular, to sparsely represent the diffusion signal over multiple q-shells, we propose a novel extension to the framework of spherical ridgelets by accurately modeling the monotonically decreasing radial component of the diffusion signal. Further, we enforce the reconstructed signal to have smooth spatial regularity in the brain, by minimizing the total variation (TV) norm. We combine these requirements into a novel cost function and derive an optimal solution using the Alternating Direction Method of Multipliers (ADMM) algorithm. We use a physical phantom data set with a known fiber crossing angle of 45° to determine the optimal number of measurements (gradient directions and b-values) needed for accurate signal recovery. We compare our technique with a state-of-the-art sparse reconstruction method (i.e., the SHORE method of Cheng et al. (2010)) in terms of angular error in estimating the crossing angle, incorrect number of peaks detected, normalized mean squared error in signal recovery, as well as error in estimating the return-to-origin probability (RTOP). Finally, we also demonstrate the behavior of the proposed technique on human in-vivo data sets. Based on these experiments, we conclude that using the proposed algorithm, at least 60 measurements (spread over three b-value shells) are needed for proper recovery of MSDI data in the entire q-space. PMID:25047866

  9. Smoothed l0 Norm Regularization for Sparse-View X-Ray CT Reconstruction

    PubMed Central

    Li, Ming; Peng, Chengtao; Guan, Yihui; Xu, Pin

    2016-01-01

    Low-dose computed tomography (CT) reconstruction is a challenging problem in medical imaging. To complement the standard filtered back-projection (FBP) reconstruction, sparse regularization reconstruction gains more and more research attention, as it promises to reduce radiation dose, suppress artifacts, and improve noise properties. In this work, we present an iterative reconstruction approach using improved smoothed l0 (SL0) norm regularization, which approximates the l0 norm by a family of continuous functions to fully exploit the sparseness of the image gradient. Due to the excellent sparse representation of the reconstruction signal, the desired tissue details are preserved in the resulting images. To evaluate the performance of the proposed SL0 regularization method, we reconstruct the simulated dataset acquired from the Shepp-Logan phantom and a clinical head slice image. Additional experimental verification is performed with two real datasets from scanned animal experiments. Compared to the reference FBP reconstruction and the total variation (TV) regularization reconstruction, the results clearly reveal that the presented method has characteristic strengths. In particular, it improves reconstruction quality by reducing noise while preserving anatomical features. PMID:27725935
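
    The original SL0 iteration (which this paper improves upon) is compact enough to sketch: approximate the l0 norm with Gaussian functions, take gradient steps on that smooth proxy, and project back onto the measurement constraint while annealing sigma. Parameters below are common textbook choices, not the paper's.

      import numpy as np

      def sl0(A, y, sigma_min=1e-3, decay=0.7, mu=2.0, inner=3):
          A_pinv = np.linalg.pinv(A)
          x = A_pinv @ y                         # minimum-l2 starting point
          sigma = 2.0 * np.max(np.abs(x))
          while sigma > sigma_min:
              for _ in range(inner):
                  x = x - mu * x * np.exp(-x ** 2 / (2 * sigma ** 2))
                  x = x - A_pinv @ (A @ x - y)   # project onto {x : A x = y}
              sigma *= decay                     # anneal the smoothing width
          return x

      rng = np.random.default_rng(4)
      A = rng.standard_normal((40, 120))
      x_true = np.zeros(120); x_true[[7, 33, 90]] = [1.0, -1.2, 0.6]
      print(np.flatnonzero(np.abs(sl0(A, A @ x_true)) > 0.1))   # [7, 33, 90]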

  10. Robust compressive sensing of sparse signals: a review

    NASA Astrophysics Data System (ADS)

    Carrillo, Rafael E.; Ramirez, Ana B.; Arce, Gonzalo R.; Barner, Kenneth E.; Sadler, Brian M.

    2016-12-01

    Compressive sensing generally relies on the ℓ2 norm for data fidelity, whereas in many applications, robust estimators are needed. Among the scenarios in which robust performance is required, applications where the sampling process is performed in the presence of impulsive noise, i.e., measurements are corrupted by outliers, are of particular importance. This article overviews robust nonlinear reconstruction strategies for sparse signals based on replacing the commonly used ℓ2 norm by M-estimators as data fidelity functions. The derived methods outperform existing compressed sensing techniques in impulsive environments, while achieving good performance in light-tailed environments, thus offering a robust framework for CS.
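
    To illustrate replacing the ℓ2 data fidelity with an M-estimator, the sketch below runs a proximal-gradient loop with a Huber loss (one common M-estimator) plus an l1 penalty; the specific estimators and algorithms surveyed in the article are not reproduced.

      import numpy as np

      def huber_grad(r, delta):
          # Huber loss gradient: linear in the tails, so outliers have bounded influence.
          return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

      def robust_lasso(A, y, lam=0.05, delta=0.5, n_iter=2000):
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x - step * (A.T @ huber_grad(A @ x - y, delta))
              x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
          return x

      rng = np.random.default_rng(5)
      A = rng.standard_normal((60, 200)) / np.sqrt(60)
      x_true = np.zeros(200); x_true[[15, 80, 140]] = [1.2, -0.9, 0.7]
      y = A @ x_true
      y[::15] += 5 * rng.standard_normal(4)        # impulsive noise: a few gross outliers
      print(np.flatnonzero(np.abs(robust_lasso(A, y)) > 0.2))   # [15, 80, 140]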

  11. Moving target detection for frequency agility radar by sparse reconstruction.

    PubMed

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

    Frequency agility radar, with a carrier frequency that varies randomly from pulse to pulse, exhibits superior performance against electromagnetic interference compared to conventional fixed-carrier-frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed for estimating the target's velocity in frequency agility radar, based on the pulses within a coherent processing interval and using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments is performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.

  12. Moving target detection for frequency agility radar by sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

    Frequency agility radar, with a carrier frequency that varies randomly from pulse to pulse, exhibits superior performance against electromagnetic interference compared to conventional fixed-carrier-frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed for estimating the target's velocity in frequency agility radar, based on the pulses within a coherent processing interval and using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments is performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.

  13. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

    We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
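
    A bare moving least squares evaluation, the starting point the second method modifies, is sketched below: at each query point, fit a linear polynomial by weighted least squares with a distance-based weight and evaluate it there. The Gaussian weight and bandwidth h are illustrative; the paper's learned weighting replaces exactly this choice.

      import numpy as np

      def mls_eval(pts, vals, q, h=0.3):
          # Weighted linear fit around q with basis [1, dx, dy]; the constant
          # term of the centered fit is the estimate at q itself.
          w = np.sqrt(np.exp(-np.sum((pts - q) ** 2, axis=1) / h ** 2))
          B = np.column_stack([np.ones(len(pts)), pts - q])
          coef, *_ = np.linalg.lstsq(w[:, None] * B, w * vals, rcond=None)
          return coef[0]

      rng = np.random.default_rng(6)
      pts = rng.random((300, 2))                  # scattered sample sites
      vals = np.sin(3 * pts[:, 0]) + pts[:, 1]    # toy scalar property field
      q = np.array([0.5, 0.5])
      print(mls_eval(pts, vals, q), np.sin(1.5) + 0.5)   # estimate vs ground truth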

  14. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  15. Point-source reconstruction with a sparse light-sensor array for optical TPC readout

    NASA Astrophysics Data System (ADS)

    Rutter, G.; Richards, M.; Bennieston, A. J.; Ramachers, Y. A.

    2011-07-01

    A reconstruction technique for sparse-array optical signal readout is introduced and applied to the generic challenge of large-area readout of a large number of point light sources. This challenge finds a prominent example in future large-volume neutrino detector studies based on liquid argon. It is concluded that the sparse array option may be ruled out on the grounds of the required number of channels when compared to a benchmark derived from charge readout on wire planes. Smaller-scale detectors, however, could benefit from this technology.

  16. A Sparse Reconstruction Algorithm for Ultrasonic Images in Nondestructive Testing

    PubMed Central

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Junior, Flávio Neves; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested on the reconstruction of an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement when compared with B-scan—about 91% using real data. The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  17. Accelerated reconstruction of electrical impedance tomography images via patch based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Lian, Zhijie; Wang, Jianming; Chen, Qingliang; Sun, Yukuan; Li, Xiuyan; Duan, Xiaojie; Cui, Ziqiang; Wang, Huaxiang

    2016-11-01

    Electrical impedance tomography (EIT) reconstruction is a nonlinear and ill-posed problem. Exact reconstruction of an EIT image requires inverting a high-dimensional mathematical model to calculate the conductivity field, and this computational complexity reduces the achievable frame rate, which is considered a major advantage of EIT imaging. The single-step method, state estimation method, and projection method have been used to accelerate the reconstruction process; their basic principle is to reduce computational complexity. However, maintaining high spatial resolution at modest computational cost is still challenging, especially for complex conductivity distributions. This study proposes an approach to accelerate EIT image reconstruction based on compressive sensing (CS) theory, namely the CSEIT method. The novel CSEIT method reduces the sampling rate by minimizing redundancy in the measurements, so that detailed information is not lost in reconstruction. In order to obtain a sparse solution, which is the prior condition for signal recovery required by CS theory, a novel image reconstruction algorithm based on patch-based sparse representation is proposed. By applying the new CSEIT framework, the data acquisition time, or sampling rate, is reduced by more than a factor of two, while the accuracy of reconstruction is significantly improved.

  18. Tomographic image reconstruction via estimation of sparse unidirectional gradients.

    PubMed

    Polak, Adam G; Mroczka, Janusz; Wysoczański, Dariusz

    2017-02-01

    Since computed tomography (CT) was developed over 35 years ago, new mathematical ideas and computational algorithms have been continually elaborated to improve the quality of reconstructed images. In recent years, considerable effort has been devoted to applying the theory of sparse solutions of underdetermined systems to the reconstruction of CT images from undersampled data. Its significance stems from the possibility of obtaining good-quality CT images from low-dose projections. Among diverse approaches, total variation (TV), which minimizes the 2D gradients of an image, seems to be the most popular method. In this paper, a new method for CT image reconstruction via sparse gradient estimation (SGE) is proposed. It consists in estimating 1D gradients specified in four directions using an iterative reweighting algorithm. To investigate its properties and to compare it with TV and other related methods, numerical simulations were performed according to the Monte Carlo scheme, using the Shepp-Logan and more realistic brain phantoms scanned at 9-60 directions in the range from 0 to 179°, with measurement data disturbed by additive Gaussian noise at relative levels of 0.1%, 0.2%, 0.5%, 1%, 2% and 5%. The accuracy of image reconstruction was assessed in terms of the relative root-mean-square (RMS) error. The results show that the proposed SGE algorithm returns more accurate images than TV for cases satisfying the sparsity conditions. In particular, it preserves the sharp edges of regions representing different tissues or organs and yields images of much better quality when reconstructed from a small number of projections disturbed by relatively low measurement noise.

  19. DOA estimation based on multiple beamspace measurements sparse reconstruction for manoeuvring towed array

    NASA Astrophysics Data System (ADS)

    Yuan, J.; Xiao, H.; Cai, Z. M.; Xi, C.

    2017-01-01

    The port-starboard ambiguity of the conventional single towed linear array sonar is one of the most challenging obstacles to the development of spatial spectrum estimation. In order to improve target detection and Direction of Arrival (DOA) estimation performance, this paper proposes a novel spatial spectrum sparse reconstruction method based on multiple beamspace measurements (MBM-SR). An array sparse signal model for a manoeuvring towed array is established. The Mutual Incoherence Property (MIP) is then analyzed to ensure that the proposed algorithm possesses good spatial spectrum reconstruction properties. Simulation results demonstrate that, compared with the Conventional Beam Forming (CBF) algorithm, the proposed algorithm has a clear advantage in ambiguity suppression ratio (ASR) and estimation performance.

  20. Simultaneous Reconstruction and Segmentation of Dynamic PET via Low-Rank and Sparse Matrix Decomposition.

    PubMed

    Chen, Shuhang; Liu, Huafeng; Hu, Zhenghui; Zhang, Heye; Shi, Pengcheng; Chen, Yunmei

    2015-07-01

    Although of great clinical value, accurate and robust reconstruction and segmentation of dynamic positron emission tomography (PET) images remain great challenges due to low spatial resolution and high noise. In this paper, we propose a unified framework that exploits temporal correlations and variations within image sequences based on low-rank and sparse matrix decomposition. Thus, the two separate inverse problems, PET image reconstruction and segmentation, are accomplished in a simultaneous fashion. Considering the low signal-to-noise ratio and the piece-wise constant assumption of PET images, we also propose to regularize the low-rank and sparse matrices with a vectorial total variation norm. The resulting optimization problem is solved by the augmented Lagrangian multiplier method with variable splitting. The effectiveness of the proposed approach is validated on realistic Monte Carlo simulation datasets and real patient data.
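
    The low-rank-plus-sparse split at the heart of this framework can be illustrated with a simplified alternating-shrinkage heuristic for M ≈ L + S (singular-value thresholding for L, entrywise soft thresholding for S). This omits the paper's total-variation term and ADMM machinery; the thresholds follow common robust-PCA defaults.

      import numpy as np

      def low_rank_sparse(M, n_iter=200):
          lam = 1.0 / np.sqrt(max(M.shape))
          mu = 0.25 * np.abs(M).mean()
          L = np.zeros_like(M); S = np.zeros_like(M)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
              L = (U * np.maximum(s - mu, 0.0)) @ Vt                  # shrink singular values
              R = M - L
              S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0.0)  # shrink entries
          return L, S

      rng = np.random.default_rng(7)
      L0 = np.outer(rng.random(30), rng.random(20))           # rank-1 "background"
      S0 = np.zeros((30, 20)); S0[rng.random((30, 20)) < 0.05] = 2.0
      L_hat, S_hat = low_rank_sparse(L0 + S0)
      print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # small relative error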

  1. Model-based imaging of damage with Lamb waves via sparse reconstruction.

    PubMed

    Levine, Ross M; Michaels, Jennifer E

    2013-03-01

    Ultrasonic guided waves are gaining acceptance for structural health monitoring and nondestructive evaluation of plate-like structures. One configuration of interest is a spatially distributed array of fixed piezoelectric devices. Typical operation consists of recording signals from all transmit-receive pairs and subtracting pre-recorded baselines to detect changes, possibly due to damage or other effects. While techniques such as delay-and-sum imaging as applied to differential signals are both simple and capable of detecting flaws, their performance is limited, particularly when there are multiple damage sites. Here a very different approach to imaging is considered that exploits the expected sparsity of structural damage; i.e., the structure is mostly damage-free. Differential signals are decomposed into a sparse linear combination of location-based components, which are pre-computed from a simple propagation model. The sparse reconstruction techniques of basis pursuit denoising and orthogonal matching pursuit are applied to achieve this decomposition, and a hybrid reconstruction method is also proposed and evaluated. Noisy simulated data and experimental data recorded on an aluminum plate with artificial damage are considered. Results demonstrate the efficacy of all three methods by producing very sparse indications of damage at the correct locations even in the presence of model mismatch and significant noise.

  2. Recursive Recovery of Sparse Signal Sequences From Compressive Measurements: A Review

    NASA Astrophysics Data System (ADS)

    Vaswani, Namrata; Zhan, Jinchun

    2016-07-01

    In this article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography.

  3. Efficient Reconstruction of Block-Sparse Signals

    DTIC Science & Technology

    2011-01-26

    constraints can be used. … Let C_off represent a subset of the positive integers less than or equal to N such that k ∈ C_off implies x(k) = 0. … equality into the set C_on. Equation (10) still holds since u_k = z(k)/λ. Then λ can be reduced further using (10). The sets C_on and C_off are … modified in the following manner: 1. If reducing λ causes a block component u_k with k ∈ C_off to achieve unity norm, then enter the k-th

  4. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique for constructing volumetric cortical regions to introduce as source priors, by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension allows the reconstruction of brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models with more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrate the technique with ERP data and anatomical MR images from 12 subjects. Based on the segmented gray matter of each subject, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain models and extended 4-layered models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each reconstruction, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies.

  5. Stable Restoration and Separation of Approximately Sparse Signals

    DTIC Science & Technology

    2011-07-02

    dictionary. Particular applications covered by our framework include the restoration of signals impaired by impulse noise, narrowband interference, or ... representation in a second general dictionary. ... applications (see [1–17] and references therein), including: • Impulse noise: the recovery of approximately sparse signals corrupted by impulse noise [13] ...

  6. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire set of measurements provide a "smooth" reconstructed MFD with decreased unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of the BC, the parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.

  7. Sparse Reconstruction Techniques in Magnetic Resonance Imaging: Methods, Applications, and Challenges to Clinical Adoption.

    PubMed

    Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-06-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed.

  8. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
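
    The core of this scheme can be sketched in a few lines: packet loss is modeled as a random erasure of samples, and the receiver solves an l1-regularized problem over an assumed sparsifying basis. A minimal sketch follows (the DCT basis, test signal, loss rate and solver below are illustrative assumptions, not the authors' setup):

      import numpy as np
      from scipy.fftpack import idct

      rng = np.random.default_rng(0)
      n = 256
      t = np.arange(n)
      # test signal that is compressible (approximately sparse) in the DCT domain
      x = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

      keep = rng.random(n) > 0.4      # ~40% packet loss modeled as random erasures
      y = x[keep]                     # samples that survive the lossy link

      Psi = idct(np.eye(n), axis=0, norm='ortho')  # DCT synthesis matrix (atoms in columns)
      A = Psi[keep, :]                             # effective CS sensing matrix

      # ISTA for min_c 0.5*||A c - y||^2 + lam*||c||_1, then synthesize x_hat = Psi c
      lam = 0.01
      L = np.linalg.norm(A, 2) ** 2
      c = np.zeros(n)
      for _ in range(500):
          c = c + A.T @ (y - A @ c) / L
          c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)

      x_hat = Psi @ c
      print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))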

  9. A Novel 2-D Coherent DOA Estimation Method Based on Dimension Reduction Sparse Reconstruction for Orthogonal Arrays

    PubMed Central

    Wang, Xiuhong; Mao, Xingpeng; Wang, Yiming; Zhang, Naitong; Li, Bo

    2016-01-01

    Based on sparse representations, the problem of two-dimensional (2-D) direction of arrival (DOA) estimation is addressed in this paper. A novel sparse 2-D DOA estimation method, called Dimension Reduction Sparse Reconstruction (DRSR), is proposed with pairing by Spatial Spectrum Reconstruction of Sub-Dictionary (SSRSD). By utilizing the angle decoupling method, which transforms a 2-D estimation into two independent one-dimensional (1-D) estimations, the high computational complexity induced by a large 2-D redundant dictionary is greatly reduced. Furthermore, a new angle matching scheme, SSRSD, which is less sensitive to sparse reconstruction error and has a higher pair-matching probability, is introduced. The proposed method can be applied to any type of orthogonal array without requiring a large number of snapshots or a priori knowledge of the number of signals. The theoretical analyses and simulation results show that the DRSR-SSRSD method performs well for coherent signals, with performance approaching the Cramér-Rao bound (CRB) even under single-snapshot and low signal-to-noise ratio (SNR) conditions. PMID:27649191

  10. An enhanced sparse representation strategy for signal classification

    NASA Astrophysics Data System (ADS)

    Zhou, Yin; Gao, Jinglun; Barner, Kenneth E.

    2012-06-01

    Sparse representation based classification (SRC) has achieved state-of-the-art results on face recognition. It is hence desirable to extend its power to a broader range of classification tasks in pattern recognition. SRC first encodes a query sample as a linear combination of a few atoms from a predefined dictionary. It then identifies the label by evaluating which class results in the minimum reconstruction error. The effectiveness of SRC is limited by an important assumption: that data points from different classes are not distributed along the same radial direction. Otherwise, the approach loses its discrimination ability, even though data from different classes are actually well separated in terms of Euclidean distance. This assumption is reasonable for face recognition, as images of the same subject under different intensity levels are still considered to be of the same class. However, the assumption is not always satisfied when dealing with many other real-world data, e.g., the Iris dataset, where classes are stratified along the radial direction. In this paper, we propose a new coding strategy, called Nearest-Farthest Neighbors based SRC (NF-SRC), to effectively overcome this limitation within SRC. The dictionary is composed of both the Nearest Neighbors and the Farthest Neighbors. While the Nearest Neighbors are used to narrow the selection of candidate samples, the Farthest Neighbors are employed to make the dictionary more redundant. NF-SRC encodes each query signal in a greedy way similar to orthogonal matching pursuit (OMP). The proposed approach is evaluated over extensive experiments. The encouraging results demonstrate the feasibility of the proposed method.
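
    A minimal sketch of the NF-SRC idea under our own reading (the neighbor counts, the OMP-style coding and the residual-based labeling below are illustrative, not the authors' code):

      import numpy as np

      def nf_src_classify(X_train, y_train, q, k_near=10, k_far=10, n_nonzero=5):
          d = np.linalg.norm(X_train - q, axis=1)
          idx = np.concatenate([np.argsort(d)[:k_near],    # nearest neighbors
                                np.argsort(d)[-k_far:]])   # farthest neighbors
          D = X_train[idx].T                               # dictionary, atoms in columns
          D = D / np.linalg.norm(D, axis=0)
          # greedy OMP-style encoding of the query
          r, support = q.copy(), []
          for _ in range(n_nonzero):
              support.append(int(np.argmax(np.abs(D.T @ r))))
              coef, *_ = np.linalg.lstsq(D[:, support], q, rcond=None)
              r = q - D[:, support] @ coef
          labels = y_train[idx][support]
          # label = class whose selected atoms give the smallest residual
          best, best_res = None, np.inf
          for c in np.unique(labels):
              cols = [s for s, m in zip(support, labels == c) if m]
              coef_c, *_ = np.linalg.lstsq(D[:, cols], q, rcond=None)
              res = np.linalg.norm(q - D[:, cols] @ coef_c)
              if res < best_res:
                  best, best_res = c, res
          return best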

  11. An efficient iterative CBCT reconstruction approach using gradient projection sparse reconstruction algorithm

    PubMed Central

    Lee, Heui Chang; Song, Bongyong; Kim, Jin Sung; Jung, James J.; Li, H. Harold; Mutic, Sasa; Park, Justin C.

    2016-01-01

    The purpose of this study is to develop a fast, provably convergent CBCT reconstruction framework based on compressed sensing theory which not only lowers the imaging dose but is also computationally practicable in a busy clinic. We simplified the original mathematical formulation of gradient projection for sparse reconstruction (GPSR) to minimize the number of forward and backward projections for the line search process at each iteration. GPSR-based algorithms generally showed improved image quality over the FDK algorithm, especially when only a small number of projections were available. When there were only 40 projections from a 360-degree fan-beam geometry, the quality of the GPSR-based algorithms surpassed the FDK algorithm within 10 iterations in terms of mean squared relative error. Our proposed GPSR algorithm converged as fast as the conventional GPSR with reasonably low computational complexity. The outcomes demonstrate that the proposed GPSR algorithm is attractive for real-time applications such as on-line IGRT. PMID:27894103

  12. Filtered gradient compressive sensing reconstruction algorithm for sparse and structured measurement matrices

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri H.; Arguello, Henry

    2016-05-01

    Compressive sensing state of the art proposes random Gaussian and Bernoulli measurement matrices. Nevertheless, the design of the measurement matrix is often subject to physical constraints, and therefore it is frequently not possible for the matrix to follow a Gaussian or Bernoulli distribution. Examples of these limitations are the structured and sparse matrices of compressive X-ray and compressive spectral imaging systems. A standard algorithm for recovering sparse signals consists in minimizing an objective function that includes a quadratic error term combined with a sparsity-inducing regularization term. This problem can be solved using iterative algorithms for linear inverse problems. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity. However, current algorithms are slow to reach high-quality image reconstructions because they do not exploit the structure and sparsity characteristics of the compressive measurement matrices. This paper proposes a gradient-based algorithm for compressive sensing reconstruction that includes a filtering step, yielding improved quality in fewer iterations. The algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Aᵀy, where y is the measurement vector and A is the compressive measurement matrix. We show that the algorithm including the filtering step converges faster than the unfiltered version. We design various filters that are motivated by the structure of Aᵀy. Extensive simulation results using various sparse and structured matrices highlight the relative performance gain over the existing iterative process.
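
    A toy sketch of the filtering idea, under our assumptions (the uniform filter and 1-D setting are stand-ins; the paper designs filters matched to the structure of Aᵀy):

      import numpy as np
      from scipy.ndimage import uniform_filter1d

      def filtered_ista(A, y, lam=0.05, n_iter=200, filt_size=3):
          L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = A.T @ (y - A @ x)               # back-projected residual
              g = uniform_filter1d(g, filt_size)  # the extra filtering step
              x = x + g / L
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
          return x

    Setting filt_size=1 recovers plain ISTA, so the effect of the filtering step can be compared directly on the same problem.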

  13. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    By Patrick Debroux and Berenice Verdin, Army Research Laboratory. Approved for public release.

  14. Flutter signal extracting technique based on FOG and self-adaptive sparse representation algorithm

    NASA Astrophysics Data System (ADS)

    Lei, Jian; Meng, Xiangtao; Xiang, Zheng

    2016-10-01

    Due to the various moving parts inside, the structure of a spacecraft running in orbit can undergo minor angular vibration, which blurs the images formed by its space camera. Thus, image compensation techniques are required to eliminate or alleviate the effect of this movement on image formation, and precise measurement of the flutter angle is necessary. Owing to advantages such as high sensitivity, broad bandwidth, simple structure and the absence of internal mechanical moving parts, a fiber optic gyro (FOG) is adopted in this study to measure minor angular vibration; the movement that degrades image formation is then obtained by calculation. The idea of the movement-information extraction algorithm based on self-adaptive sparse representation is to use an arctangent approximation of the L0 norm to construct an unconstrained sparse reconstruction model for noisy signals, and then to solve the model with a method based on the steepest descent and BFGS algorithms to estimate the sparse signal. Then, exploiting the principle that random noise cannot be represented by a linear combination of dictionary elements, the useful signal and random noise are separated effectively. Because the main interference of minor angular vibration with image formation is random noise, the sparse representation algorithm can extract useful information to a large extent and serves as a fitting pre-processing step for image restoration. The self-adaptive sparse representation algorithm presented in this paper is used to process minor angular vibration signals measured by the FOG of a certain spacecraft. Component analysis of the processing results shows that the algorithm extracts the micro angular vibration signal of the FOG precisely and effectively, achieving a precision of 0.1".
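
    The model described above can be sketched as follows, with a plain BFGS solver standing in for the paper's steepest descent/BFGS combination; the sizes, the smoothing parameter sigma and the regularization weight lam are assumptions of ours:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      m, n, sigma, lam = 64, 256, 0.05, 0.1
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      def f(x):
          # arctangent penalty: a smooth surrogate for the L0 norm
          return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.arctan(x ** 2 / sigma))

      def grad(x):
          # d/dx arctan(x^2/sigma) = 2*x*sigma / (sigma^2 + x^4)
          return -A.T @ (y - A @ x) + lam * (2 * x * sigma) / (sigma ** 2 + x ** 4)

      res = minimize(f, np.zeros(n), jac=grad, method='BFGS')
      x_hat = res.x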

  15. Accuracy of femur reconstruction from sparse geometric data using a statistical shape model.

    PubMed

    Zhang, Ju; Besier, Thor F

    2017-04-01

    Sparse geometric information from limited field-of-view medical images is often used to reconstruct the femur in biomechanical models of the hip and knee. However, the full femur geometry is needed to establish boundary conditions such as muscle attachment sites and joint axes, which define the orientation of joint loads. Statistical shape models have been used to estimate the geometry of the full femur from varying amounts of sparse geometric information, but the effect that different amounts of sparse data have on reconstruction accuracy has not been systematically assessed. In this study, we compared shape-model and linear-scaling reconstruction of the full femur surface from varying proportions of proximal and distal partial femur geometry in combination with morphometric and landmark data. We quantified reconstruction error in terms of surface-to-surface error as well as deviations in the reconstructed femur's anatomical coordinate system, which is important for biomechanical models. Using a partial proximal femur surface, the mean surface error of shape model-based reconstruction was 1.8 mm with an anatomical axis error of 0.15° or less, compared with 19.1 mm and 2.7-5.6° for linear scaling. Similar results were found when using a partial distal surface. However, varying amounts of proximal or distal partial surface data had a negligible effect on reconstruction accuracy. Our results show that, given an appropriate set of sparse geometric data, a shape model can reconstruct full femur geometry with far greater accuracy than simple scaling.

  16. Universal Collaboration Strategies for Signal Detection: A Sparse Learning Approach

    NASA Astrophysics Data System (ADS)

    Khanduri, Prashant; Kailkhura, Bhavya; Thiagarajan, Jayaraman J.; Varshney, Pramod K.

    2016-10-01

    This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration). We assume that only a small subset of nodes communicate with the Fusion Center (FC). We design optimal collaboration strategies which are universal for a class of deterministic signals. By establishing the equivalence between the collaboration strategy design problem and sparse PCA, we solve the problem efficiently and evaluate the impact of collaboration on detection performance.

  17. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Turek, Javier S.; Elad, Michael; Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
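
    A toy illustration of the MCA separation principle (not the paper's learned-dictionary implementation): a mix of a smooth component, sparse under the DCT, and a spiky component, sparse under the identity, is split by alternating soft-thresholding with a decreasing threshold.

      import numpy as np
      from scipy.fftpack import dct, idct

      rng = np.random.default_rng(2)
      n = 256
      smooth = np.cos(2 * np.pi * 4 * np.arange(n) / n)         # "tissue": sparse in DCT
      spikes = np.zeros(n)
      spikes[rng.choice(n, 5, replace=False)] = 2.0             # "clutter": sparse spikes
      y = smooth + spikes

      xs, xp = np.zeros(n), np.zeros(n)
      for lam in np.linspace(1.0, 0.05, 50):                    # decreasing threshold
          c = dct(y - xp, norm='ortho')                         # update the DCT component
          c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
          xs = idct(c, norm='ortho')
          r = y - xs                                            # update the spike component
          xp = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)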

  18. Optimization-based reconstruction of sparse images from few-view projections

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Bian, Junguo; Ritman, Erik L.; Sidky, Emil Y.; Pan, Xiaochuan

    2012-08-01

    In this work, we investigate optimization-based image reconstruction from few-view (i.e. fewer than ten views) projections of sparse objects such as coronary-artery specimens. Using optimization programs as a guide, we formulate constraint programs as reconstruction programs and develop algorithms to reconstruct images by solving the reconstruction programs. Characterization studies are carried out to elucidate the algorithm properties of ‘convergence’ (relative to designed solutions) and ‘utility’ (relative to desired solutions), using simulated few-view data calculated from a discrete FORBILD coronary-artery phantom and real few-view data acquired from a human coronary-artery specimen. Study results suggest that carefully designed reconstruction programs and algorithms can yield accurate reconstructions of sparse images from few-view projections.

  19. Sparse signal representation and its applications in ultrasonic NDE.

    PubMed

    Zhang, Guang-Ming; Zhang, Cheng-Zhong; Harvey, David M

    2012-03-01

    Many sparse signal representation (SSR) algorithms have been developed in the past decade. The advantages of SSR, such as compact representations and super resolution, lead to state-of-the-art performance of SSR for processing ultrasonic non-destructive evaluation (NDE) signals. Choosing a suitable SSR algorithm and designing an appropriate overcomplete dictionary are key to success. After a brief review of sparse signal representation methods and the design of overcomplete dictionaries, this paper addresses the recent accomplishments of SSR for processing ultrasonic NDE signals. The advantages and limitations of SSR algorithms and of the various overcomplete dictionaries widely used in ultrasonic NDE applications are explored in depth. Their performance improvement compared to conventional signal processing methods in applications such as ultrasonic flaw detection and noise suppression, echo separation and echo estimation, and ultrasonic imaging is investigated. Challenging issues met in practical ultrasonic NDE applications, for example the design of a good dictionary, are discussed. Representative experimental results are presented for demonstration.

  20. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
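
    The sparse deconvolution model above can be written as min ||Hf - y||_2^2 + lam*||f||_1 over the force f. A minimal sketch of that model follows, solved with plain ISTA for brevity rather than the paper's PDIPM; the impulse response h and all sizes are assumptions:

      import numpy as np
      from scipy.linalg import toeplitz

      def l1_deconv(h, y, lam=0.05, n_iter=500):
          # build the causal (Toeplitz) convolution operator H from the impulse response h
          n = len(y)
          col = np.r_[h, np.zeros(n - len(h))]
          H = toeplitz(col, np.r_[h[0], np.zeros(n - 1)])
          L = np.linalg.norm(H, 2) ** 2
          f = np.zeros(n)
          for _ in range(n_iter):
              f = f + H.T @ (y - H @ f) / L               # gradient step
              f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)  # soft threshold
          return f

    An interior point solver such as PDIPM reaches higher accuracy per iteration on this same objective; ISTA is used here only to keep the sketch short.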

  1. Reconstructing cortical current density by exploring sparseness in the transform domain.

    PubMed

    Ding, Lei

    2009-05-07

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of the L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising for providing high-accuracy estimation of cortical source extents.

  2. Variational approach to reconstruct surface from sparse and nonparallel contours in freehand 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Deng, Shuangcheng; Jiang, Lipei; Cao, Yingyu; Zhang, Junwen; Zheng, Haiyang

    2012-01-01

    The 3D reconstruction for freehand 3D ultrasound is a challenging issue because the recorded B-scans are not only sparse but also non-parallel (they may even intersect each other). Conventional volume reconstruction methods cannot reconstruct sparse data efficiently without introducing geometrical artifacts, and conventional surface reconstruction methods cannot reconstruct surfaces from contours that are arbitrarily oriented in 3D space. We developed a new surface reconstruction method for freehand 3D ultrasound. It is based on the variational implicit function presented by Greg Turk for shape transformation. In the new method, we first constructed on- and off-surface constraints from the segmented contours of all recorded B-scans, then used a variational interpolation technique to obtain a single implicit function in 3D. Finally, the implicit function was evaluated to extract the zero-valued surface as the reconstruction result. Two experiments were conducted to assess our variational surface reconstruction method, and the results show that the new method is capable of smoothly reconstructing surfaces from sparse contours which can be arbitrarily oriented in 3D space.

  3. Sparse angular CT reconstruction using non-local means based iterative-correction POCS.

    PubMed

    Huang, Jing; Ma, Jianhua; Liu, Nan; Zhang, Hua; Bian, Zhaoying; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2011-04-01

    In divergent-beam computed tomography (CT), sparse angular sampling frequently leads to conspicuous streak artifacts. In this paper, we propose a novel non-local means (NL-means) based iterative-correction projection onto convex sets (POCS) algorithm, named NLMIC-POCS, for effective and robust sparse angular CT reconstruction. The motivation for NLMIC-POCS is that the NL-means filtered image can provide an acceptable prior solution for the sequential POCS iterative reconstruction. The NLMIC-POCS algorithm has been tested on simulated and real phantom data. The experimental results show that the presented NLMIC-POCS algorithm can significantly improve the image quality of sparse angular CT reconstruction, suppressing streak artifacts while preserving the edges of the image.

  4. Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns

    DTIC Science & Technology

    2015-03-01

    Transform (DCT). The algorithm was evaluated by using 3 antennas modeled with the high-frequency structural simulator (HFSS): a half-wave dipole, a Vivaldi ... and a pyramidal horn. The 2-D radiation pattern was reconstructed for each antenna using less than 44% of the total number of measurements with low ... The 3-D radiation patterns of a pyramidal horn antenna were reconstructed by using only 13% of the total number of measurements. ...

  5. Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.

    PubMed

    Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M

    2014-02-10

    Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.

  6. Impact-force sparse reconstruction from highly incomplete and inaccurate measurements

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Chen, Xuefeng

    2016-08-01

    The classical l2-norm-based regularization methods applied to the force reconstruction inverse problem require that the number of measurements should not be less than the number of unknown sources. Taking into account the sparse nature of impact-force in the time domain, we develop a general sparse methodology based on minimizing the l1-norm for solving the highly underdetermined model of impact-force reconstruction. A monotonic two-step iterative shrinkage/thresholding (MTWIST) algorithm is proposed to find the sparse solution to such an underdetermined model from highly incomplete and inaccurate measurements, which can be problematic with Tikhonov regularization. MTWIST is highly efficient for large-scale ill-posed problems since it mainly involves matrix-vector multiplies without matrix factorization. In the sparsity framework, the proposed sparse regularization method can not only determine the actual impact location from many candidate sources but also simultaneously reconstruct the time history of the impact-force. Simulation and experiment, including single-source and two-source impact-force reconstruction, are conducted on a simply supported rectangular plate and a shell structure to illustrate the effectiveness and applicability of MTWIST, respectively. Both the locations and force time histories of the single-source and two-source cases are accurately reconstructed from a single accelerometer, where a high noise level is considered in simulation and the primary noise in experiment is supposed to be colored noise. Meanwhile, the consecutive impact-force reconstruction in a large-scale (greater than 10^4) sparse frame illustrates that MTWIST has advantages of computational efficiency and identification accuracy over Tikhonov regularization.

  7. Sparse Reconstruction of Electric Fields from Radial Magnetic Data

    NASA Astrophysics Data System (ADS)

    Yeates, Anthony R.

    2017-02-01

    Accurate estimates of the horizontal electric field on the Sun’s visible surface are important not only for estimating the Poynting flux of magnetic energy into the corona but also for driving time-dependent magnetohydrodynamic models of the corona. In this paper, a method is developed for estimating the horizontal electric field from a sequence of radial-component magnetic field maps. This problem of inverting Faraday’s law has no unique solution. Unfortunately, the simplest solution (a divergence-free electric field) is not realistically localized in regions of nonzero magnetic field, as would be expected from Ohm’s law. Our new method generates instead a localized solution, using a basis pursuit algorithm to find a sparse solution for the electric field. The method is shown to perform well on test cases where the input magnetic maps are flux balanced in both Cartesian and spherical geometries. However, we show that if the input maps have a significant imbalance of flux—usually arising from data assimilation—then it is not possible to find a localized, realistic, electric field solution. This is the main obstacle to driving coronal models from time sequences of solar surface magnetic maps.

  8. Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization.

    PubMed

    Duarte-Carvajalino, Julio Martin; Sapiro, Guillermo

    2009-07-01

    Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than required by the classical Shannon-Nyquist theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. However, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
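
    A much-simplified sketch of sensing-matrix optimization for a fixed dictionary (our illustration, not the paper's algorithm): iteratively shrink the large off-diagonal entries of the Gram matrix of Phi@D and project back to rank m, which reduces the mutual coherence that governs CS recovery.

      import numpy as np

      def optimize_phi(D, m, n_iter=50, t=0.8):
          n_feat, n_atoms = D.shape
          rng = np.random.default_rng(3)
          Phi = rng.standard_normal((m, n_feat))
          for _ in range(n_iter):
              E = Phi @ D
              E = E / np.linalg.norm(E, axis=0)   # normalize effective atoms
              G = E.T @ E
              G = np.clip(G, -t, t)               # shrink large off-diagonal inner products
              np.fill_diagonal(G, 1.0)
              # factor the shrunk Gram as E_new^T E_new with rank m
              w, V = np.linalg.eigh(G)
              E_new = (V[:, -m:] * np.sqrt(np.maximum(w[-m:], 0.0))).T
              # back out Phi by least squares: Phi @ D ~ E_new
              Phi = E_new @ np.linalg.pinv(D)
          return Phi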

  9. Reconstruction of spatio-temporal temperature from sparse historical records using robust probabilistic principal component regression

    NASA Astrophysics Data System (ADS)

    Tipton, John; Hooten, Mevin; Goring, Simon

    2017-01-01

    Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules, including a computationally efficient approximation to leave-one-out cross-validation using the log score, to validate model performance. The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature.

  10. Novel regularized sparse model for fluorescence molecular tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Yuhao; Liu, Jie; An, Yu; Jiang, Shixin

    2017-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in small animals. FMT has been used in surgical navigation for tumor resection and has many potential applications at the physiological, metabolic, and molecular levels in tissues. Hybrid systems combining FMT and x-ray computed tomography (XCT) have been pursued for accurate detection; however, the result is usually over-smoothed and over-shrunk. In this paper, we propose a region reconstruction method for FMT in which elastic net (E-net) regularization is used to combine the L1-norm and the L2-norm. The E-net penalty corresponds to adding an L1-norm penalty and an L2-norm penalty, thereby combining the advantages of L1-norm and L2-norm regularization: it achieves a balance between sparsity and smoothness by employing both norms simultaneously. To solve the problem effectively, a proximal gradient algorithm is used to accelerate the computation. To evaluate the performance of the proposed E-net method, numerical phantom experiments are conducted. The simulation study shows that the proposed method is accurate and reconstructs images effectively.
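
    The E-net-regularized step can be sketched with a proximal gradient iteration, since the proximal operator of lam1*||x||_1 + (lam2/2)*||x||_2^2 is a soft threshold followed by a uniform shrinkage. A minimal sketch with assumed parameter names, not the authors' FMT solver:

      import numpy as np

      def enet_recon(A, y, lam1=0.05, lam2=0.05, n_iter=300):
          L = np.linalg.norm(A, 2) ** 2 + lam2    # step bound including the l2 term
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = A.T @ (A @ x - y)               # gradient of the data-fidelity term
              z = x - g / L
              z = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # l1 prox
              x = z / (1.0 + lam2 / L)            # l2 prox is a pure shrinkage
          return x

    Setting lam2=0 recovers a plain lasso iteration; increasing lam2 trades some sparsity for smoother, less shrunken reconstructions, which is the balance the abstract describes.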

  11. Spatial Compressive Sensing for Strain Data Reconstruction from Sparse Sensors

    DTIC Science & Technology

    2014-10-01

    the novel theory of compressive sensing and principles of continuum mechanics. Compressive sensing, also known as compressed sensing, refers to the ... asserts that certain signals or images can be recovered from what was previously believed to be a highly incomplete measurement. ... The matrix completion problem is quite similar to compressive sensing, as a similar heuristic approach, convex relaxation, is used to recover ...

  12. Direct reconstruction of enhanced signal in computed tomography perfusion

    NASA Astrophysics Data System (ADS)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua; Wang, Jing

    2016-04-01

    High imaging dose has been a concern in computed tomography perfusion (CTP), as repeated scans are performed at the same location of a patient. On the other hand, signal changes occur only in limited regions of CT images acquired at different time points. In this work, we propose a new reconstruction strategy that effectively utilizes the initial-phase high-quality CT to reconstruct the later-phase CT acquired with a low-dose protocol. In the proposed strategy, the initial high-quality CT is treated as a base image and the enhanced signal (ES) is reconstructed directly by minimizing the penalized weighted least-squares (PWLS) criterion. The proposed PWLS-ES strategy converts conventional CT reconstruction into a sparse signal reconstruction problem. Digital and anthropomorphic phantom studies were performed to evaluate the performance of the proposed PWLS-ES strategy. Both phantom studies show that the proposed PWLS-ES method outperforms the standard iterative CT reconstruction algorithm based on the same PWLS criterion according to various quantitative metrics, including root mean squared error (RMSE) and the universal quality index (UQI).

  13. Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; Painer, Sven; Grigat, Rolf-Rainer

    2015-03-01

    Mini-laparoscopy is a technique used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, so far no quantitative measures based on mini-laparoscopic sequences are possible. This paper presents a Structure from Motion (SfM) based methodology for 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included in the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.

  14. Precise RFID localization in impaired environment through sparse signal recovery

    NASA Astrophysics Data System (ADS)

    Subedi, Saurav; Zhang, Yimin D.; Amin, Moeness G.

    2013-05-01

    Radio frequency identification (RFID) is a rapidly developing wireless communication technology for electronically identifying, locating, and tracking products, assets, and personnel. RFID has become one of the most important means to construct real-time locating systems (RTLS) that track and identify the location of objects in real time using simple, inexpensive tags and readers. The applicability and usefulness of RTLS techniques depend on their achievable accuracy. In particular, when multilateration-based localization techniques are exploited, the achievable accuracy primarily relies on the precision of the range estimates between a reader and the tags. Such range information can be obtained by using the received signal strength indicator (RSSI) and/or the phase difference of arrival (PDOA). In both cases, however, the accuracy is significantly compromised when the operating environment is impaired. In particular, multipath propagation significantly affects the measurement accuracy of both RSSI and phase information. In addition, because RFID systems are typically operated over short distances, RSSI and phase measurements are also coupled with the reader and tag antenna patterns, making accurate RFID localization very complicated and challenging. In this paper, we develop new methods to localize RFID tags or readers by exploiting sparse signal recovery techniques. The proposed methods allow the channel environment and antenna patterns to be taken into account and properly compensated at a low computational cost. As such, the proposed technique yields superior performance in challenging operating environments with the above-mentioned impairments.

  15. Compressed sensing techniques for arbitrary frequency-sparse signals in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Duan, Zhongdong; Kang, Jie

    2014-03-01

    Structural health monitoring requires the collection of a large number of samples, and sometimes high-frequency vibration data, to detect damage in structures. The high cost of collecting these data is a big challenge. The recently proposed compressive sensing method enables a potentially large reduction in sampling, and is a way to meet this challenge. Compressed sensing theory requires sparse signals, meaning that the signals can be well approximated as a linear combination of just a few elements from a known discrete basis or dictionary. A structural vibration signal can be decomposed into a linear combination of a few sinusoids in the DFT domain. Unfortunately, in most cases the frequencies of the decomposed sinusoids are arbitrary in that domain and may not lie precisely on the discrete DFT basis or dictionary. In this case the signal loses its sparsity, and recovery performance degrades significantly. One way to improve the sparsity of the signal is to increase the size of the dictionary, but there exists a tradeoff: a closely spaced DFT dictionary increases the coherence between the elements of the dictionary, which in turn decreases recovery performance. In this work we introduce three approaches for recovering signals with arbitrary frequencies. The first approach is continuous basis pursuit (CBP), which reconstructs a continuous basis by introducing interpolation steps. The second approach is semidefinite programming (SDP), which searches for the sparsest signal on a continuous basis without establishing any dictionary, enabling very high recovery precision. The third approach is spectral iterative hard thresholding (SIHT), which is based on a redundant DFT dictionary and a restricted union-of-subspaces signal model that inhibits closely spaced sinusoids. The three approaches are studied by numerical simulation: a structural vibration signal is simulated by a finite element model, and compressed measurements of the signal are taken to perform the recovery.
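
    A toy sketch of the off-grid issue and of a thresholding-style recovery on a redundant cosine dictionary (illustrative only; the SIHT of the paper additionally enforces a union-of-subspaces model that forbids closely spaced atoms):

      import numpy as np

      rng = np.random.default_rng(4)
      n, os, k = 128, 4, 3
      t = np.arange(n)
      f_true = np.array([5.3, 12.7, 31.1]) / n             # off-grid frequencies
      x = sum(np.cos(2 * np.pi * f * t) for f in f_true)

      freqs = np.arange(n * os // 2) / (n * os)            # oversampled frequency grid
      Dic = np.cos(2 * np.pi * np.outer(t, freqs))         # redundant cosine dictionary
      Dic = Dic / np.linalg.norm(Dic, axis=0)

      c = np.zeros(Dic.shape[1])
      mu = 1.0 / np.linalg.norm(Dic, 2) ** 2
      for _ in range(200):                                 # iterative hard thresholding
          c = c + mu * Dic.T @ (x - Dic @ c)
          small = np.argsort(np.abs(c))[:-k]               # keep only the k largest atoms
          c[small] = 0.0

    With os=1 the grid mismatch spreads energy across many atoms and the k-sparse fit degrades, which is exactly the loss of sparsity the abstract describes; larger os improves the fit at the cost of higher dictionary coherence.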

  16. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

    PubMed Central

    Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin

    2016-01-01

    Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two datasets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods. PMID:27796013

  17. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography.

    PubMed

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-03-04

    Diffuse optical tomography (DOT) is a relatively low cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality image a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up.

  19. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  20. Advances in thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Frendberg Beemer, Maria

    2015-05-01

    Since its introduction in 2001, the Thermographic Signal Reconstruction (TSR) method has emerged as one of the most widely used methods for enhancement and analysis of thermographic sequences, with applications extending beyond industrial NDT into biomedical research, art restoration and botany. The basic TSR process, in which a noise-reduced replica of each pixel time history is created, yields improvement over unprocessed image data that is sufficient for many applications. However, examination of the resulting logarithmic time derivatives of each TSR pixel replica provides significant insight into the physical mechanisms underlying the active thermography process. The deterministic and invariant properties of the derivatives have enabled the successful implementation of automated defect recognition and measurement systems. Unlike most approaches to the analysis of thermography data, TSR does not depend on flaw-background contrast, so it can also be applied to characterization and measurement of thermal properties of flaw-free samples. We present a summary of recent advances in TSR, a review of the underlying theory and examples of its implementation.
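
    The basic TSR operation on a single pixel can be sketched generically (our reading of the published method, not a definitive implementation): fit a low-order polynomial to the pixel's cooling history in log-log coordinates, then differentiate the smooth replica to obtain the logarithmic derivatives the abstract refers to.

      import numpy as np

      def tsr_pixel(T, t, order=5):
          # T: temperature-rise history of one pixel (positive values); t: frame times
          lt = np.log(t)
          p = np.polyfit(lt, np.log(T), order)     # noise-reduced log-log replica
          replica = np.exp(np.polyval(p, lt))
          d1 = np.polyval(np.polyder(p), lt)       # first logarithmic derivative
          d2 = np.polyval(np.polyder(p, 2), lt)    # second logarithmic derivative
          return replica, d1, d2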

  1. A Sparse Reconstruction Approach for Identifying Gene Regulatory Networks Using Steady-State Experiment Data

    PubMed Central

    Zhang, Wanhong; Zhou, Tong

    2015-01-01

    Motivation: Identifying gene regulatory networks (GRNs), which consist of a large number of interacting units, has become a problem of paramount importance in systems biology. Situations exist extensively in which causal interacting relationships among these units must be reconstructed from measured expression data and other a priori information. Though numerous classical methods have been developed to unravel the interactions of GRNs, these methods either have high computational complexity or low estimation accuracy. Note that great similarities exist between identifying the genes that directly regulate a specific gene and reconstructing a sparse vector, which often amounts to determining the number, locations and magnitudes of the nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates and to reduce their computational complexity. Results: In this paper, a sparse reconstruction framework is proposed on the basis of steady-state experiment data to identify GRN structure. Different from traditional methods, this approach is well suited to the large-scale underdetermined problem of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data and a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results of the DREAM project. Actual results show that, with a lower computational cost, the proposed method can identify causal regulations more accurately.
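
    The core step of such a framework can be sketched per target gene: with Phi the measured steady-state expression of candidate regulators and y the target's response, a sparse solver for y = Phi x flags the nonzero entries of x as putative regulators. Below, a plain lasso stands in for the paper's reconstruction algorithm; the function name and the significance threshold are ours:

      import numpy as np
      from sklearn.linear_model import Lasso

      def infer_regulators(Phi, y, alpha=0.05):
          # sparse solution of the underdetermined system y = Phi @ x
          model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
          model.fit(Phi, y)
          x = model.coef_
          return np.flatnonzero(np.abs(x) > 1e-8), x  # indices of putative regulators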

  2. Incorporation of noise and prior images in penalized-likelihood reconstruction of sparse data

    NASA Astrophysics Data System (ADS)

    Ding, Yifu; Siewerdsen, Jeffrey H.; Stayman, J. Webster

    2012-03-01

    Many imaging scenarios involve a sequence of tomographic data acquisitions to monitor change over time - e.g., longitudinal studies of disease progression (tumor surveillance) and intraoperative imaging of tissue changes during intervention. The radiation dose imparted by these repeat acquisitions presents a concern. Because such image sequences share a great deal of information between acquisitions, using prior image information from baseline scans in the reconstruction of subsequent scans can relax the data fidelity requirements of follow-up acquisitions. For example, sparse data acquisitions, including angular undersampling and limited-angle tomography, limit exposure by reducing the number of acquired projections. Various approaches such as prior-image constrained compressed sensing (PICCS) have successfully incorporated prior images in the reconstruction of such sparse data. Another technique to limit radiation dose is to reduce the x-ray fluence per projection. However, many methods for reconstruction of sparse data do not include a noise model accounting for stochastic fluctuations in such low-dose measurements and cannot balance the differing information content of various measurements. In this paper, we present a prior-image penalized-likelihood estimator (PI-PLE) that utilizes prior image information, compressed-sensing penalties, and a Poisson noise model for measurements. The approach is applied to a lung nodule surveillance scenario with sparse data acquired at low exposures to illustrate performance under cases of extremely limited data fidelity. The results show that PI-PLE is able to greatly reduce the streak artifacts that otherwise arise from photon starvation and to maintain high-resolution anatomical features, whereas traditional approaches are subject to streak artifacts or lower-resolution images.

  3. Some Factors Affecting Time Reversal Signal Reconstruction

    NASA Astrophysics Data System (ADS)

    Prevorovsky, Z.; Kober, J.

    Time reversal (TR) ultrasonic signal processing is now broadly used in a variety of applications, including the NDE/NDT field. TR processing is used e.g. for S/N ratio enhancement, reciprocal transducer calibration, and the location, identification, and reconstruction of unknown sources. The TR procedure in conjunction with nonlinear elastic wave spectroscopy (NEWS) is also useful for sensitive detection of defects (the presence of nonlinearity). To enlarge the possibilities of the acoustic emission (AE) method, we proposed using the TR reconstruction ability to transfer detected AE signals from a structure with an AE source onto a similar remote model of the structure (real or numerical), which allows easier source analysis under laboratory conditions. Though TR signal reconstruction is robust to system variations, some small differences and changes influence the space-time TR focus and reconstruction quality. Experiments were performed on metallic parts of both simple and complicated geometry to examine the effects of small changes of temperature or configuration (body shape, dimensions, transducer placement, etc.) on TR reconstruction quality. Results of the experiments are discussed in this paper. Considering the mathematical similarity between TR and Coda Wave Interferometry (CWI), prediction of signal reconstruction quality was possible using only the direct propagation. The results show how factors like temperature or stress changes may deteriorate TR reconstruction quality. It is also shown that the reconstruction quality is sometimes not enhanced by using a longer TR signal (the S/N ratio may decrease).

  4. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies

    PubMed Central

    Mahrous, Hesham; Ward, Rabab

    2016-01-01

    This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channel signal data as a vector that is constructed in a specific way, so that it has a better block sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and MMV methods. It also shows significantly lower compression errors, even at high compression ratios such as 10:1, on three different datasets. PMID:26861335

  5. Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples

    NASA Astrophysics Data System (ADS)

    Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.

    2016-06-01

    In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthesis of the affected areas via sparse reconstruction. For this purpose, a cloud and cloud shadow mask must be given. Compared with previous works, the process has a higher degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values below a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision, and yields results with low overall radiometric and spectral distortions.
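    A minimal sketch of the per-pixel dictionary selection rule described above: each candidate dictionary is fitted on the non-corrupted acquisitions of a pixel's time series, and the dictionary with the smallest reconstruction error is used to synthesize the cloudy acquisitions. The function name and the plain least-squares fit are assumptions for illustration.

```python
import numpy as np

def restore_pixel(ts, corrupt_mask, dictionaries):
    """Estimate corrupted time steps of one pixel's time series `ts`
    (length T) by fitting on the clean time steps, trying several
    candidate dictionaries and keeping the one with the smallest
    residual (the selection rule described in the abstract)."""
    clean = ~corrupt_mask
    best_err, best_rec = np.inf, None
    for D in dictionaries:                 # D: (T, n_atoms) cloud-free profiles
        coef, *_ = np.linalg.lstsq(D[clean], ts[clean], rcond=None)
        err = np.linalg.norm(D[clean] @ coef - ts[clean])
        if err < best_err:
            best_err, best_rec = err, D @ coef
    out = ts.copy()
    out[corrupt_mask] = best_rec[corrupt_mask]
    return out

# toy usage: 6 acquisitions, 4 candidate dictionaries of 5 atoms each
rng = np.random.default_rng(1)
T = 6
dicts = [rng.random((T, 5)) for _ in range(4)]
ts = dicts[2] @ np.array([0.5, 0.1, 0.0, 0.2, 0.0])
mask = np.zeros(T, bool)
mask[3] = True                             # acquisition 3 is cloudy
print(restore_pixel(ts, mask, dicts))
```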

  6. A new look at signal sparsity paradigm for low-dose computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Zhang, Hao; Moore, William; Liang, Zhengrong

    2016-03-01

    Signal sparsity in the computed tomography (CT) image reconstruction field is routinely interpreted as sparse angular sampling around the patient body whose image is to be reconstructed. In clinical CT applications, normal tissues may be known and treated as sparse signals, but the abnormalities inside the body are usually unknown signals and may not be treated as sparse. Furthermore, the locations and structures of abnormalities are also usually unknown, and this uncertainty adds further challenges to interpreting signal sparsity for clinical applications. In this exploratory experimental study, we assume that once the projection data around the continuous body are discretized, regardless of the sampling rate, the image reconstruction of the continuous body from the discretized data becomes a sparse signal problem. We hypothesize that a dense prior model describing the continuous body is a desirable choice for achieving an optimal solution for a given clinical task. We tested this hypothesis by adapting the total variation-Stokes (TVS) model to describe the continuous body signals and showing the gain over the classic filtered backprojection (FBP) at a wide range of angular sampling rates. For the given clinical task of detecting lung nodules of size 5 mm and larger, a consistent improvement of TVS over FBP on nodule detection was observed by an experienced radiologist from low to high sampling rates. This experimental outcome concurs with the expectation of the TVS model. Further investigation for theoretical insights and task-dependent evaluations is needed.

  7. Initial experience in primal-dual optimization reconstruction from sparse-PET patient data

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-03-01

    There is interest in designing a PET system with fewer detectors due to cost concerns, while not significantly compromising the PET utility. Recently developed optimization-based algorithms, which have demonstrated potential clinical utility in image reconstruction from sparse CT data, may be used to enable the design of such innovative PET systems. In this work, we investigate a PET configuration with a reduced number of detectors, and carry out preliminary studies on patient data collected with such a sparse-PET configuration. We consider an optimization problem combining Kullback-Leibler (KL) data fidelity with an image TV constraint, and solve it by using a primal-dual optimization algorithm developed by Chambolle and Pock. Results show that advanced algorithms may enable the design of innovative PET configurations with a reduced number of detectors, while yielding potential practical PET utility.

  8. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two.

  9. Multimodal exploitation and sparse reconstruction for guided-wave structural health monitoring

    NASA Astrophysics Data System (ADS)

    Golato, Andrew; Santhanam, Sridhar; Ahmad, Fauzia; Amin, Moeness G.

    2015-05-01

    The presence of multiple modes in guided-wave structural health monitoring has usually been considered a nuisance, and a variety of methods have been devised to ensure the presence of a single mode. However, valuable information regarding the nature of defects can be gleaned by including multiple modes in image recovery. In this paper, we propose an effective approach for localizing defects in thin plates, which involves inversion of a multimodal Lamb-wave-based model by means of sparse reconstruction. We consider not only the direct symmetric and anti-symmetric fundamental propagating Lamb modes, but also the defect-spawned mixed modes arising due to the asymmetry of defects. Model-based dictionaries for the direct and spawned modes are created, which take into account the associated dispersion and attenuation through the medium. Reconstruction of the region of interest is performed jointly across the multiple modes by employing a group sparse reconstruction approach. Performance validation of the proposed defect localization scheme is provided using simulated data for an aluminum plate.

  10. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  11. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion, or of hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  12. Total Variation-Stokes Strategy for Sparse-View X-ray CT Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Lu, Hongbing; Wang, Ke; Zhang, Hao; Moore, William

    2014-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and/or other constraints, a piecewise-smooth X-ray computed tomography image can be reconstructed from sparse-view projection data. However, due to the piecewise-constant assumption of the TV model, the reconstructed images are frequently reported to suffer from blocky or patchy artifacts. To eliminate this drawback, we present a total variation-Stokes-projection onto convex sets (TVS-POCS) reconstruction method in this paper. The TVS model is derived by introducing isophote directions for the purpose of recovering possible missing information in the sparse-view data situation. Thus the desired consistencies along both the normal and the tangent directions are preserved in the resulting images. Compared to previous TV-based image reconstruction algorithms, the consistencies preserved by the TVS-POCS method are expected to generate noticeable gains in terms of eliminating the patchy artifacts and preserving subtle structures. To evaluate the presented TVS-POCS method, both qualitative and quantitative studies were performed using digital phantom, physical phantom and clinical data experiments. The results reveal that the presented method can yield images with several noticeable gains, measured by the universal quality index and the full-width-at-half-maximum merit, compared to the corresponding TV-based algorithms. In addition, the results further indicate that the TVS-POCS method approaches the gold standard result of filtered back-projection reconstruction in the full-view data case, as theoretically expected, while most previous iterative methods may fail in the full-view case because of artificial textures in their results. PMID:24595347

  13. Bandlimited graph signal reconstruction by diffusion operator

    NASA Astrophysics Data System (ADS)

    Yang, Lishan; You, Kangyong; Guo, Wenbin

    2016-12-01

    Signal processing on graphs extends signal processing concepts and methodologies from classical signal processing theory to data indexed by general graphs. For a bandlimited graph signal, the unknown data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the spatial relationships of the graph signal. In this paper, we propose a generalized analytical framework for unsampled graph signals and introduce the concept of a diffusion operator, which consists of local-mean and global-bias diffusion operators. A diffusion-operator-based iterative algorithm is then proposed to reconstruct the bandlimited graph signal from the sampled data. In each iteration, the reconstruction residuals associated with the sampled vertices are diffused to all the unsampled vertices to accelerate convergence. We then prove that the proposed reconstruction strategy converges to the original graph signal. The simulation results demonstrate the effectiveness of the proposed reconstruction strategy under various downsampling patterns, fluctuations of the graph cut-off frequency, classic graph structures, and noisy scenarios.
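    The iteration described above can be sketched as follows: in each pass, the residual on the sampled vertices is injected and then diffused to neighboring vertices. A row-normalized adjacency matrix is used here as a simplified stand-in for the paper's local-mean/global-bias operator, so this is only a structural illustration of the idea.

```python
import numpy as np

def reconstruct_graph_signal(W, sampled_idx, y_sampled, n_iter=200):
    """Iteratively reconstruct a roughly bandlimited graph signal from
    samples: inject the residual on sampled vertices, then diffuse it
    through a local-mean operator P = D^{-1} W."""
    P = W / W.sum(axis=1, keepdims=True)     # row-normalized adjacency
    x = np.zeros(W.shape[0])
    for _ in range(n_iter):
        r = np.zeros_like(x)
        r[sampled_idx] = y_sampled - x[sampled_idx]  # residual on samples
        x = P @ (x + r)                      # diffuse toward unsampled vertices
    x[sampled_idx] = y_sampled               # enforce consistency with samples
    return x

# toy usage on a 6-vertex ring graph with a smooth (low-frequency) signal
n = 6
W = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
truth = np.cos(2 * np.pi * np.arange(n) / n)
idx = np.array([0, 2, 4])
print(reconstruct_graph_signal(W, idx, truth[idx]))
```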

  14. Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views

    NASA Astrophysics Data System (ADS)

    Xie, Huiqiao; Niu, Tianye; Yang, Yi; Ren, Yi; Tang, Xiangyang

    2016-03-01

    Optimization-based image reconstruction (OBIR) has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) by acquiring sparse projection views. However, OBIR usually generates images with a quite different noise texture compared to the reconstruction method widely used in the clinic (i.e., filtered back-projection, FBP). This may make radiologists/physicians less confident in making clinical decisions. Recognizing the fact that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam-forming devices (e.g., bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In the texture-enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized noise projection views. The TxE-OBIR image is then generated by adding the texture image to the OBIR reconstruction. As qualitatively confirmed by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold standard FBP images.
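    The texture-injection step itself is simple to sketch: FBP-reconstruct a noise-only sinogram (exploiting the roughly uniform noise statistics across detector cells) and add the result to the OBIR image. The sketch below assumes scikit-image's iradon for the FBP step; the noise level and geometry are illustrative.

```python
import numpy as np
from skimage.transform import iradon  # assumed available for the FBP step

def texture_enhanced_obir(obir_image, theta, noise_sigma, rng):
    """Add FBP-like noise texture to an optimization-based
    reconstruction: a zero-mean noise-only sinogram is
    FBP-reconstructed and added to the OBIR image."""
    n = obir_image.shape[0]
    noise_sino = rng.normal(0.0, noise_sigma, size=(n, len(theta)))
    texture = iradon(noise_sino, theta=theta, filter_name="ramp",
                     output_size=n)
    return obir_image + texture

rng = np.random.default_rng(0)
obir = np.zeros((64, 64))                    # stand-in for an OBIR result
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
out = texture_enhanced_obir(obir, theta, noise_sigma=0.01, rng=rng)
print(out.std())                             # texture now carries the noise
```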

  15. A hard-threshold based sparse inverse imaging algorithm for optical scanning holography reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Qu, Xiaochao; Zhang, Xing; Poon, Ting-Chung; Kim, Taegeun; Kim, You Seok; Liang, Jimin

    2012-03-01

    Optical imaging takes advantage of coherent optics and has promoted the development of visualization in biological applications. Based on temporal coherence, optical coherence tomography can deliver three-dimensional optical images with superior resolution, but the axial and lateral scanning is a time-consuming process. Optical scanning holography (OSH) is a spatial coherence technique which integrates a three-dimensional object into a two-dimensional hologram through a two-dimensional optical scanning raster. The advantages of high lateral resolution and fast image acquisition give it great potential for three-dimensional optical imaging, but the prerequisite is an accurate and practical reconstruction algorithm. A conventional method was first adopted to reconstruct sectional images and obtained fine results, but some drawbacks restricted its practicality. An optimization method based on the l2 norm obtained more accurate results than the conventional methods, but the intrinsic smoothing of the l2 norm blurs the reconstruction results. In this paper, a hard-threshold based sparse inverse imaging algorithm is proposed to improve sectional image reconstruction. The proposed method is characterized by hard-threshold based iteration with a shrinking threshold strategy, which only involves lightweight vector operations and matrix-vector multiplication. The performance of the proposed method has been validated by real experiments, which demonstrated a great improvement in reconstruction accuracy at an appropriate computational cost.
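    A generic form of such a hard-threshold iteration (iterative hard thresholding) alternates a gradient step on the data-fidelity term with retention of the largest-magnitude coefficients, using only the lightweight matrix-vector operations the abstract mentions. The sketch below is a textbook version under assumed names and sizes, not the paper's exact algorithm.

```python
import numpy as np

def iht(A, y, k, n_iter=100):
    """Iterative hard thresholding: gradient step on ||y - A x||^2,
    then keep only the k largest-magnitude coefficients."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe step size
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)         # gradient step
        keep = np.argsort(np.abs(x))[-k:]        # hard threshold: top-k support
        mask = np.zeros(n, bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 30, 77]] = [1.0, -2.0, 0.5]               # 3-sparse ground truth
print(np.flatnonzero(iht(A, A @ x0, k=3)))       # recovered support
```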

  16. Image super-resolution reconstruction via RBM-based joint dictionary learning and sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaohui; Liu, Anran; Lei, Qian

    2015-12-01

    In this paper, we propose a method for single image super-resolution (SR). Given a training set produced from a large number of high/low-resolution image patch pairs, an over-complete joint dictionary is first learned from the paired high/low-resolution image feature spaces based on Restricted Boltzmann Machines (RBM). Then, for each low-resolution image patch densely extracted from an up-scaled low-resolution input image, its high-resolution image patch can be reconstructed based on sparse representation. Finally, the reconstructed image patches are overlapped to form a large image, and a high-resolution image can be achieved by means of iterated residual image compensation. Experimental results verify the effectiveness of the proposed method.

  17. Noise reduction by sparse representation in learned dictionaries for application to blind tip reconstruction problem

    NASA Astrophysics Data System (ADS)

    Jóźwiak, Grzegorz

    2017-03-01

    Scanning probe microscopy (SPM) is a well-known tool used for the investigation of phenomena in objects in the nanometer size range. However, quantitative results are limited by the size and the shape of the nanoprobe used in experiments. Blind tip reconstruction (BTR) is a very popular method used to reconstruct an upper bound on the shape of the probe. This method is known to be very sensitive to all kinds of interference in the atomic force microscopy (AFM) image. Because BTR is based on mathematical morphology calculus, interference makes the BTR results biased rather than randomly disrupted. For this reason, the careful choice of methods used for image enhancement and denoising, as well as the shape of the calibration sample, is very important. In this paper, the results of thorough investigations on the shape of a calibration standard are shown. A novel shape is proposed and a tool for the simulation of AFM images of this calibration standard was designed. It is shown that careful choice of the initial tip allows us to use images of hole structures to blindly reconstruct the shape of a probe. The simulator was used to test the impact of modern filtration algorithms on the BTR process. These techniques are based on sparse approximation with function dictionaries learned on the basis of the image itself. Various learning algorithms and parameters were tested to determine the optimal combination for sparse representation. It was observed that a strong reduction of noise does not guarantee a strong reduction in reconstruction errors. It seems that further improvements will be possible by combining BTR with a noise reduction procedure.

  18. Progressive Magnetic Resonance Image Reconstruction Based on Iterative Solution of a Sparse Linear System

    PubMed Central

    Fahmy, Ahmed S.; Gabr, Refaat E.; Heberlein, Keith; Hu, Xiaoping P.

    2006-01-01

    Image reconstruction from nonuniformly sampled spatial frequency domain data is an important problem that arises in computed imaging. Current reconstruction techniques suffer from limitations in their model and implementation. In this paper, we present a new reconstruction method that is based on solving a system of linear equations using an efficient iterative approach. Image pixel intensities are related to the measured frequency domain data through a set of linear equations. Although the system matrix is too dense and large to solve by direct inversion in practice, a simple orthogonal transformation to the rows of this matrix is applied to convert the matrix into a sparse one up to a certain chosen level of energy preservation. The transformed system is subsequently solved using the conjugate gradient method. This method is applied to reconstruct images of a numerical phantom as well as magnetic resonance images from experimental spiral imaging data. The results support the theory and demonstrate that the computational load of this method is similar to that of standard gridding, illustrating its practical utility. PMID:23165034
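    The idea of sparsifying the system by an orthogonal transform and then solving with conjugate gradients can be sketched as follows; a DCT stands in for the transform (the paper's choice may differ), and the energy-preservation level controls how many entries are kept.

```python
import numpy as np
from scipy.fft import dct
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

def sparsify_and_solve(A, y, energy=0.99):
    """Orthogonally transform the system (DCT as a stand-in transform),
    zero out small entries up to a chosen energy-preservation level,
    then solve the sparsified normal equations by conjugate gradients."""
    B = dct(A, axis=0, norm="ortho")            # transform mixes the equations
    yt = dct(y, norm="ortho")                   # same transform on the data
    mag = np.sort(np.abs(B), axis=None)[::-1]
    cum = np.cumsum(mag ** 2)
    thresh = mag[np.searchsorted(cum, energy * cum[-1])]
    S = csr_matrix(np.where(np.abs(B) >= thresh, B, 0.0))
    x, _ = cg(S.T @ S, S.T @ yt, maxiter=500)   # CG on normal equations
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 40))
x0 = rng.normal(size=40)
x_hat = sparsify_and_solve(A, A @ x0)
print(np.linalg.norm(A @ x_hat - A @ x0) / np.linalg.norm(A @ x0))
```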

  19. Block-sparse reconstruction and imaging for Lamb wave structural health monitoring.

    PubMed

    Levine, Ross M; Michaels, Jennifer E

    2014-06-01

    A frequently investigated paradigm for monitoring the integrity of plate-like structures is a spatially distributed array of piezoelectric transducers, with each array element capable of both transmitting and receiving ultrasonic guided waves. This configuration is relatively inexpensive and allows interrogation of defects from multiple directions over a relatively large area. Typically, full sets of pairwise transducer signals are acquired by exciting one transducer at a time in a round-robin fashion. Many algorithms that operate on such data use differential signals that are created by subtracting prerecorded baseline signals, leaving only signal differences introduced by scatterers. Analysis methods such as delay-and-sum imaging operate on these signals to detect and locate point-like defects, but such algorithms have limited performance and suffer when potential scatterers have high directionality or unknown phase-shifting behavior. Signal envelopes are commonly used to mitigate the effects of unknown phase shifts, but this further reduces performance. The block-sparse technique presented here uses a different principle to locate damage: each pixel is assumed to have a corresponding multidimensional linear scattering model, allowing any possible amplitude and phase shift for each transducer pair should a scatterer be present. By assuming that the differential signals are linear combinations of a sparse subset of these models, it is possible to split such signals into location-based components. Results are presented here for three experiments using aluminum and composite plates, each with a different type of scatterer. The scatterers in these images have smaller spot sizes than in delay-and-sum imaging, and the images themselves have fewer artifacts. Although a propagation model is required, block-sparse imaging performs well even with a small number of transducers or without access to dispersion curves.
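    The group-sparse principle behind this imaging approach can be illustrated with a minimal group-lasso solver: each pixel owns a block of coefficients (one per transducer pair, with arbitrary amplitude and phase), and blocks are shrunk jointly so that only a few pixel locations stay active. This is a generic proximal-gradient sketch, not the authors' implementation.

```python
import numpy as np

def block_soft_threshold(x, blocks, t):
    """Proximal operator of the mixed l2/l1 (group-sparsity) penalty:
    shrink each pixel's block of coefficients toward zero by its norm."""
    out = x.copy()
    for b in blocks:                      # b: indices of one pixel's block
        nrm = np.linalg.norm(x[b])
        out[b] = 0.0 if nrm <= t else (1 - t / nrm) * x[b]
    return out

def group_ista(A, y, blocks, lam=0.1, n_iter=200):
    """Minimal group-lasso solver: differential signals y are explained
    by a sparse set of per-pixel scattering blocks."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = block_soft_threshold(x + step * A.T @ (y - A @ x),
                                 blocks, lam * step)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 40)) / np.sqrt(50)
blocks = [np.arange(i, i + 4) for i in range(0, 40, 4)]  # 10 pixels x 4 coeffs
x0 = np.zeros(40)
x0[8:12] = rng.normal(size=4)                            # one active pixel
print(np.flatnonzero(np.abs(group_ista(A, A @ x0, blocks)) > 1e-3))
```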

  20. Recovery of sparse translation-invariant signals with continuous basis pursuit.

    PubMed

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2011-10-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality.
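    The first-order Taylor variant of CBP can be illustrated compactly: each dictionary grid point contributes the feature and its derivative with respect to the shift, so a sub-grid translation appears as a coefficient on the derivative atom. The toy below (an assumed Gaussian atom, a single event, and plain least squares in place of the constrained L1 problem) recovers the shift from the coefficient ratio.

```python
import numpy as np

def gaussian(t, c, w=0.05):
    return np.exp(-0.5 * ((t - c) / w) ** 2)

def dgaussian(t, c, w=0.05):
    # derivative of the atom with respect to its center c
    return (t - c) / w ** 2 * gaussian(t, c, w)

# A feature shifted slightly off the grid point c is approximated by a
# first-order Taylor expansion in the shift tau:
#   f(t; c + tau) ~= f(t; c) + tau * df/dc(t; c)
# so the dictionary holds two columns per grid point and the sub-grid
# shift is recovered from the coefficient ratio.
t = np.linspace(0.0, 1.0, 400)
c, tau = 0.50, 0.004                     # grid point, true sub-grid shift
signal = gaussian(t, c + tau)
D = np.column_stack([gaussian(t, c), dgaussian(t, c)])
(a, b), *_ = np.linalg.lstsq(D, signal, rcond=None)
print("amplitude ~", round(a, 3), "estimated shift ~", round(b / a, 5))
```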

  1. MO-FG-204-08: Optimization-Based Image Reconstruction From Unevenly Distributed Sparse Projection Views

    SciTech Connect

    Xie, Huiqiao; Yang, Yi; Tang, Xiangyang; Niu, Tianye; Ren, Yi

    2015-06-15

    Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, such that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly distributed sparse views are available, e.g., image-guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and anthropomorphic head phantoms. In the study, 82 views, which are evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II. As such, a CT scan with sparse (164) views at a 1:6 ratio is formed. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) error between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method substantially outperforms FBP in the reconstruction from unevenly distributed views, which is quantitatively verified by the RMS gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom. Conclusion: Optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality.

  2. Sparse Reconstruction Challenge for diffusion MRI: Validation on a physical phantom to determine which acquisition scheme and analysis method to use?

    PubMed Central

    Ning, Lipeng; Laun, Frederik; Gur, Yaniv; DiBella, Edward V. R.; Deslauriers-Gauthier, Samuel; Megherbi, Thinhinane; Ghosh, Aurobrata; Zucchelli, Mauro; Menegaz, Gloria; Fick, Rutger; St-Jean, Samuel; Paquette, Michael; Aranda, Ramon; Descoteaux, Maxime; Deriche, Rachid; O’Donnell, Lauren; Rathi, Yogesh

    2015-01-01

    Diffusion magnetic resonance imaging (dMRI) is the modality of choice for investigating in-vivo white matter connectivity and neural tissue architecture of the brain. The diffusion-weighted signal in dMRI reflects the diffusivity of water molecules in brain tissue and can be utilized to produce image-based biomarkers for clinical research. Due to the constraints on scanning time, a limited number of measurements can be acquired within a clinically feasible scan time. In order to reconstruct the dMRI signal from a discrete set of measurements, a large number of algorithms have been proposed in recent years in conjunction with varying sampling schemes, i.e., with varying b-values and gradient directions. Thus, it is imperative to compare the performance of these reconstruction methods on a single data set to provide appropriate guidelines to neuroscientists on making an informed decision while designing their acquisition protocols. For this purpose, the SParse Reconstruction Challenge (SPARC) was held along with the workshop on Computational Diffusion MRI (at MICCAI 2014) to validate the performance of multiple reconstruction methods using data acquired from a physical phantom. A total of 16 reconstruction algorithms (9 teams) participated in this community challenge. The goal was to reconstruct single b-value and/or multiple b-value data from a sparse set of measurements. In particular, the aim was to determine an appropriate acquisition protocol (in terms of the number of measurements, b-values) and the analysis method to use for a neuroimaging study. The challenge did not dwell on the accuracy of these methods in estimating model-specific measures such as fractional anisotropy (FA) or mean diffusivity, but on the accuracy of these methods in fitting the data. This paper presents several quantitative results pertaining to each reconstruction algorithm. The conclusions in this paper provide a valuable guideline for choosing a suitable algorithm and the corresponding acquisition protocol for a neuroimaging study.

  3. Sparse Bayesian framework applied to 3D super-resolution reconstruction in fetal brain MRI

    NASA Astrophysics Data System (ADS)

    Becerra, Laura C.; Velasco Toledo, Nelson; Romero Castro, Eduardo

    2015-01-01

    Fetal Magnetic Resonance (FMR) is an imaging technique that is becoming increasingly important, as it allows assessing brain development and thus making an early diagnosis of congenital abnormalities. Spatial resolution is limited by the short acquisition time and the unpredictable fetal movements; in consequence, the resulting images are characterized by non-parallel projection planes composed of anisotropic voxels. The sparse Bayesian representation is a flexible strategy which is able to model complex relationships. Super-resolution is approached as a regression problem, whose main advantage is the capability to learn data relations from observations. Quantitative performance evaluation was carried out using synthetic images, and the proposed method demonstrates better reconstruction quality compared with a standard interpolation approach. The presented method is a promising approach to improving the quality of the information related to the 3-D fetal brain structure. This is important because it allows assessing brain development and thus making an early diagnosis of congenital abnormalities.

  4. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  5. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    SciTech Connect

    Gu, Renliang E-mail: ald@iastate.edu; Dogandžić, Aleksandar E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  6. Reconstructing Boolean Models of Signaling

    PubMed Central

    Karp, Richard M.

    2013-01-01

    Since the first emergence of protein–protein interaction networks more than a decade ago, they have been viewed as static scaffolds of the signaling–regulatory events taking place in cells, and their analysis has been mainly confined to topological aspects. Recently, functional models of these networks have been suggested, ranging from Boolean to constraint-based methods. However, learning such models from large-scale data remains a formidable task, and most modeling approaches rely on extensive human curation. Here we provide a generic approach to learning Boolean models automatically from data. We apply our approach to growth and inflammatory signaling systems in humans and show how the learning phase can improve the fit of the model to experimental data, remove spurious interactions, and lead to better understanding of the system at hand. PMID:23286509

  7. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment

    PubMed Central

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, by which the data are sparsely represented on the basis of a fixed or learned dictionary and classified based on reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain-computer interface (BCI), and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods have deficiencies in real-time performance and generalization ability, and depend on labeled samples in the analysis of EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376

  8. MORESANE: MOdel REconstruction by Synthesis-ANalysis Estimators. A sparse deconvolution algorithm for radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Dabbech, A.; Ferrari, C.; Mary, D.; Slezak, E.; Smirnov, O.; Kenyon, J. S.

    2015-04-01

    Context. Recent years have seen huge developments in radio telescopes and a tremendous increase in their capabilities (sensitivity, angular and spectral resolution, field of view, etc.). Such systems make designing more sophisticated techniques mandatory not only for transporting, storing, and processing this new generation of radio interferometric data, but also for restoring the astrophysical information contained in such data. Aims. In this paper we present a new radio deconvolution algorithm named MORESANE and its application to fully realistic simulated data of MeerKAT, one of the SKA precursors. This method has been designed for the difficult case of restoring diffuse astronomical sources that are faint in brightness, complex in morphology, and possibly buried in the dirty beam's side lobes of bright radio sources in the field. Methods. MORESANE is a greedy algorithm that combines complementary types of sparse recovery methods in order to reconstruct the most appropriate sky model from observed radio visibilities. A synthesis approach is used for reconstructing images, in which the synthesis atoms representing the unknown sources are learned using analysis priors. We applied this new deconvolution method to fully realistic simulations of the radio observations of a galaxy cluster and of an HII region in M 31. Results. We show that MORESANE is able to efficiently reconstruct images composed of a wide variety of sources (compact point-like objects, extended tailed radio galaxies, low-surface-brightness emission) from radio interferometric data. Comparisons with state-of-the-art algorithms indicate that MORESANE provides competitive results in terms of both the total flux/surface brightness conservation and the fidelity of the reconstructed model. MORESANE seems particularly well suited to recovering diffuse and extended sources, as well as bright and compact radio sources known to be hosted in galaxy clusters.

  9. An add-on video compression codec based on content-adaptive sparse super-resolution reconstructions

    NASA Astrophysics Data System (ADS)

    Yang, Shu; Jiang, Jianmin

    2017-02-01

    In this paper, we introduce an idea of content-adaptive sparse reconstruction to achieve optimized magnification quality for down-sampled video frames, in which two stages of pruning are applied to select the most closely correlated images for construction of an over-complete dictionary, which then drives the sparse representation of the enlarged frame. In this way, not only is the sampling and dictionary training process accelerated and optimized in accordance with the input frame content, but an add-on video compression codec can also be developed by applying the scheme as a preprocessor to any standard video compression algorithm. Our extensive experiments illustrate that (i) the proposed content-adaptive sparse reconstruction outperforms the existing benchmark in terms of super-resolution quality; and (ii) when applied to H.264, one of the international video compression standards, the proposed add-on video codec can achieve three times more compression while maintaining competitive decoding quality.

  10. Sparse approximation of long-term biomedical signals for classification via dynamic PCA.

    PubMed

    Xie, Shengkun; Jin, Feng; Krishnan, Sridhar

    2011-01-01

    Sparse approximation is a novel technique for event detection in long-term complex biomedical signals. It involves simplifying the extent of resources required to describe a large set of data sufficiently for classification. In this paper, we propose a multivariate statistical approach using dynamic principal component analysis along with a non-overlapping moving window technique to extract feature information from univariate long-term observational signals. Within the dynamic PCA framework, a few principal components plus an energy measure of the signal in the principal component subspace are highly promising for event detection in both stationary and non-stationary signals. The proposed method was first tested using synthetic databases which contain various representative signals. The effectiveness of the method was then verified with real EEG signals for the purpose of epilepsy diagnosis and epileptic seizure detection. This sparse method produces 100% classification accuracy for both synthetic data and real single-channel EEG data.
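    A minimal sketch of the feature extraction pipeline: non-overlapping windows are lag-embedded (the dynamic part of dynamic PCA), PCA is computed by SVD, and each window is summarized by its leading singular values plus the fraction of energy in the principal-component subspace. The window length, lag count, and feature choices here are assumptions for illustration.

```python
import numpy as np

def dynamic_pca_features(x, win=128, lags=4, n_pc=3):
    """Per-window features: lag-embed the window, run PCA via SVD, and
    report leading singular values plus PC-subspace energy fraction."""
    feats = []
    for s in range(0, len(x) - win + 1, win):       # non-overlapping windows
        w = x[s:s + win]
        # lag-embedded data matrix: rows = time, cols = lagged copies
        X = np.column_stack([w[i:len(w) - lags + i + 1] for i in range(lags)])
        X = X - X.mean(axis=0)
        sv = np.linalg.svd(X, compute_uv=False)
        energy = np.sum(sv[:n_pc] ** 2) / np.sum(sv ** 2)
        feats.append(np.r_[sv[:n_pc], energy])
    return np.array(feats)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60, 1024)) + 0.1 * rng.normal(size=1024)
print(dynamic_pca_features(x).shape)                # (n_windows, n_pc + 1)
```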

  11. Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.

    PubMed

    Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai

    2016-02-01

    The study of biology and medicine in a noisy environment is an evolving direction in biological data analysis. Among these studies, the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to its periodic characteristic, the ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. Then, by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved for explicitly by corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method has better performance on the sparse ECG signal recovery problem.

  12. A sparse digital signal model for ultrasonic nondestructive evaluation of layered materials.

    PubMed

    Bochud, N; Gomez, A M; Rus, G; Peinado, A M

    2015-09-01

    Signal modeling has been proven to be a useful tool for characterizing damaged materials under ultrasonic nondestructive evaluation (NDE). In this paper, we introduce a novel digital signal model for ultrasonic NDE of multilayered materials. This model borrows concepts from lattice filter theory and bridges them to the physics involved in the wave-material interactions. In particular, the proposed theoretical framework shows that any multilayered material can be characterized by a transfer function with sparse coefficients. The filter coefficients are linked to the physical properties of the material and are analytically obtained from them, whereas the sparse distribution arises naturally and does not rely on heuristic approaches. The developed model is first validated with experimental measurements obtained from multilayered media consisting of homogeneous solids. Then, the sparse structure of the obtained digital filter is exploited through a model-based inverse problem for damage identification in a carbon fiber-reinforced polymer (CFRP) plate.

  13. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-07

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k-t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI, it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k-t SPARKS incorporates Kalman-smoother self-calibration in k-t space and sparse signal recovery in x-f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k-t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was used in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k-t SPARKS yields a higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  14. Subspace weighted ℓ2,1 minimization for sparse signal recovery

    NASA Astrophysics Data System (ADS)

    Zheng, Chundi; Li, Gang; Liu, Yimin; Wang, Xiqin

    2012-12-01

    In this article, we propose a weighted ℓ2,1 minimization algorithm for the jointly sparse signal recovery problem. The proposed algorithm exploits the relationship between the noise subspace and the overcomplete basis matrix for designing weights: large weights are assigned to the entries whose indices are more likely to be outside of the row support of the jointly sparse signals, so that their indices are expelled from the row support in the solution, and small weights are assigned to the entries whose indices correspond to the row support of the jointly sparse signals, so that the solution prefers to retain their indices. Compared with regular ℓ2,1 minimization, the proposed algorithm can not only further enhance the sparseness of the solution but also reduce the requirements on both the number of snapshots and the signal-to-noise ratio (SNR) for stable recovery. Both simulations and experiments on real data demonstrate that the proposed algorithm outperforms the ℓ1-SVD algorithm, which exploits ℓ2,1 minimization directly, for both deterministic and random basis matrices.
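    The weight design can be sketched directly from the description above: estimate the noise subspace from the snapshot matrix and weight each basis column by the norm of its projection onto that subspace, so columns aligned with true source directions (nearly orthogonal to the noise subspace) receive small weights. The uniform-linear-array setup below is an illustrative assumption.

```python
import numpy as np

def subspace_weights(Y, A, n_sources):
    """Weights for weighted l_{2,1} minimization: project each
    overcomplete-basis column onto the estimated noise subspace of the
    snapshot matrix Y; columns matching true source directions get
    small weights."""
    U, _, _ = np.linalg.svd(Y, full_matrices=True)
    Un = U[:, n_sources:]                      # estimated noise subspace
    w = np.linalg.norm(Un.conj().T @ A, axis=0)
    return w / w.max()                         # normalize to [0, 1]

rng = np.random.default_rng(0)
m, T, n_grid = 8, 50, 90
angles = np.deg2rad(np.arange(n_grid))
A = np.exp(1j * np.outer(np.arange(m), np.pi * np.sin(angles)))  # ULA steering
S = rng.normal(size=(2, T))                    # two sources at grid 20 and 60
Y = A[:, [20, 60]] @ S + 0.05 * rng.normal(size=(m, T))
w = subspace_weights(Y, A, n_sources=2)
print(np.argsort(w)[:2])                       # smallest weights -> sources
```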

  15. Signal denoising and ultrasonic flaw detection via overcomplete and sparse representations.

    PubMed

    Zhang, Guang-Ming; Harvey, David M; Braden, Derek R

    2008-11-01

    Sparse signal representations from overcomplete dictionaries are a recent technique in the signal processing community, with applications extending into many fields. In this paper, this technique is utilized to cope with the ultrasonic flaw detection and noise suppression problem. In particular, a noisy ultrasonic signal is decomposed into sparse representations using a sparse Bayesian learning algorithm and an overcomplete dictionary customized from a Gabor dictionary by incorporating some a priori information about the transducer used. Nonlinear postprocessing including thresholding and pruning is then applied to the decomposed coefficients to reduce the noise contribution and extract the flaw information. Because of the highly compact nature of sparse representations, flaw echoes are packed into a few significant coefficients, while noise energy is likely scattered across the dictionary atoms, generating insignificant coefficients. This property greatly increases the efficiency of the pruning and thresholding operations and is extremely useful for detecting flaw echoes embedded in background noise. The performance of the proposed approach is verified experimentally and compared with a wavelet transform signal processor. Experimental results on detecting ultrasonic flaw echoes contaminated by additive white Gaussian noise or correlated noise are presented in the paper.

  16. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    NASA Astrophysics Data System (ADS)

    Shoupeng, Song; Zhou, Jiang

    2017-03-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, an oscillator, a low-pass filter (LPF), and a root-of-sum-of-squares module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
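    The signal-processing chain implemented by the circuit (mixer, oscillator, LPF, root of the sum of squares) is easy to prototype digitally; the sketch below mirrors those blocks in software, with the sampling rate, center frequency, and filter order chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def qd_envelope(x, fs, fc, cutoff=1e6):
    """Quadrature demodulation mirroring the circuit blocks: mix with
    cos/sin at the center frequency, low-pass filter both branches,
    then take the root of the sum of squares."""
    t = np.arange(len(x)) / fs
    i = x * np.cos(2 * np.pi * fc * t)        # in-phase branch
    q = x * np.sin(2 * np.pi * fc * t)        # quadrature branch
    b, a = butter(4, cutoff / (fs / 2))       # LPF removes the 2*fc terms
    return 2.0 * np.sqrt(filtfilt(b, a, i) ** 2 + filtfilt(b, a, q) ** 2)

fs, fc = 50e6, 5e6                            # assumed sampling/center freqs
t = np.arange(0, 20e-6, 1 / fs)
env_true = np.exp(-(((t - 8e-6) / 1e-6) ** 2))  # Gaussian pulse envelope
x = env_true * np.cos(2 * np.pi * fc * t)
print(np.max(np.abs(qd_envelope(x, fs, fc) - env_true)))  # small error
```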

  17. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    PubMed

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.

  18. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov-regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.

  19. Signal processing using sparse derivatives with applications to chromatograms and ECG

    NASA Astrophysics Data System (ADS)

    Ning, Xiaoran

    In this thesis, we investigate the sparsity existing in the derivative domain. In particular, we focus on the type of signals which possess up to Mth (M > 0) order sparse derivatives. Efforts are put into formulating proper penalty functions and optimization problems to capture properties related to sparse derivatives, and into searching for fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated in two real-world applications. In the first application, we provide an algorithm which jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks are modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data, with promising results. In the second application, a novel electrocardiography (ECG) enhancement algorithm is designed, also based on sparse derivatives. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second and third-order derivatives (differences) are sparse.
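    The kind of convex problem studied here can be written down compactly with a modeling tool; the sketch below (assuming cvxpy is available) penalizes the l1 norms of the first and second differences against a quadratic data-fidelity term, a simplified relative of the BEADS formulation without the asymmetric peak penalty or low-pass baseline model.

```python
import cvxpy as cp
import numpy as np

def sparse_derivative_denoise(y, lam1=1.0, lam2=1.0):
    """Estimate a signal whose first and second differences are sparse
    by penalizing their l1 norms against quadratic data fidelity."""
    x = cp.Variable(len(y))
    cost = (0.5 * cp.sum_squares(y - x)
            + lam1 * cp.norm1(cp.diff(x, 1))     # sparse first difference
            + lam2 * cp.norm1(cp.diff(x, 2)))    # sparse second difference
    cp.Problem(cp.Minimize(cost)).solve()
    return x.value

# toy usage: piecewise-linear signal in noise
rng = np.random.default_rng(0)
truth = np.r_[np.zeros(30), np.linspace(0.0, 3.0, 40), 3.0 * np.ones(30)]
y = truth + 0.2 * rng.normal(size=truth.size)
print(round(float(np.abs(sparse_derivative_denoise(y) - truth).mean()), 3))
```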

  20. A CT reconstruction approach from sparse projection with adaptive-weighted diagonal total-variation in biomedical application.

    PubMed

    Deng, Luzhen; Mi, Deling; He, Peng; Feng, Peng; Yu, Pengwei; Chen, Mianyi; Li, Zhichao; Wang, Jian; Wei, Biao

    2015-01-01

    Total Variation (TV) lacks directivity, since it uses only the x- and y-direction gradient transforms as its sparse representation during the iteration process. This paper therefore introduces Adaptive-weighted Diagonal Total Variation (AwDTV), which constrains the reconstructed image using diagonal-direction gradients and adds associated weights, expressed as an exponential function, that can be adaptively adjusted by the local image-intensity diagonal gradient for the purpose of preserving edge details; the steepest descent method is then used to solve the optimization problem. Finally, we performed two sets of numerical simulations, and the results show that the proposed algorithm can reconstruct high-quality CT images from few-view projections, with lower Root Mean Square Error (RMSE) and higher Universal Quality Index (UQI) than the Algebraic Reconstruction Technique (ART) and the TV-based reconstruction method.
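
    The adaptive weighting described above can be sketched as follows: diagonal image gradients enter an exponential weight so that strong edges are penalized less. This only illustrates the penalty term, assuming a simple two-diagonal discretization and a free scale parameter delta; the abstract's steepest-descent solver is not reproduced here.

```python
# Minimal sketch of the AwDTV weighting: exponential weights driven by the
# local diagonal gradients, so strong edges contribute less to the penalty.
# delta is a free scale parameter; the full iterative solver is omitted.
import numpy as np

def awdtv(img, delta=0.05):
    g45  = img[1:, 1:] - img[:-1, :-1]    # 45-degree diagonal difference
    g135 = img[1:, :-1] - img[:-1, 1:]    # 135-degree diagonal difference
    w45  = np.exp(-((g45 / delta) ** 2))  # weight -> 0 on strong edges
    w135 = np.exp(-((g135 / delta) ** 2))
    return np.sum(w45 * np.abs(g45)) + np.sum(w135 * np.abs(g135))
```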

  1. Adaptive Sparse Signal Processing for Discrimination of Satellite-based Radiofrequency (RF) Recordings of Lightning Events

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Smith, D. A.; Heavner, M.; Hamlin, T.

    2014-12-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite, launched in 1997, provided a rich RF lightning database. Application of modern pattern recognition techniques to this dataset may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We extend sparse signal processing techniques to radiofrequency (RF) transient signals, and specifically focus on improved signature extraction using sparse representations in data-adaptive dictionaries. We present various processing options and classification results for on-board discharges, and discuss robustness and potential for capability development.

  2. SU-E-I-45: Reconstruction of CT Images From Sparsely-Sampled Data Using the Logarithmic Barrier Method

    SciTech Connect

    Xu, H

    2014-06-01

    Purpose: To develop the logarithmic barrier (LB) method and investigate whether it can produce high-quality reconstructed CT images from sparsely-sampled noisy projection data. Methods: The objective function is typically formulated as the sum of the total variation (TV) and a data fidelity (DF) term with a parameter λ that governs the relative weight between them. Finding the optimal value of λ is a critical step for this approach to give satisfactory results. The proposed LB method avoids using λ by constructing the objective function as the sum of the TV and a log function whose argument is the DF term. Newton's method was used to solve the optimization problem. The algorithm was coded in MATLAB 2013b. Both the Shepp-Logan phantom and a patient lung CT image were used for demonstration of the algorithm. Measured data were simulated by calculating the projection data using the Radon transform. A Poisson noise model was used to account for the simulated detector noise. The iteration stopped when the difference between the current TV and the previous one was less than 1%. Results: The Shepp-Logan phantom reconstruction study shows that filtered back-projection (FBP) gives strong streak artifacts for 30 and 40 projections. Although the streak artifacts are visually less pronounced for 64 and 90 projections in FBP, the 1D pixel profiles indicate that FBP gives noisier reconstructed pixel values than LB does. A lung image reconstruction is presented. It shows that use of 64 projections gives satisfactory reconstructed image quality with regard to noise suppression and sharp edge preservation. Conclusion: This study demonstrates that the logarithmic barrier method can be used to reconstruct CT images from sparsely-sampled data. A number of projections around 64 gives a balance between over-smoothing of sharp demarcations and noise suppression. Future studies may extend to CBCT reconstruction and improvement of computation speed.
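
    A minimal sketch of the barrier idea follows: instead of a hand-tuned λ weighting the data-fidelity (DF) term, the DF term becomes the argument of a log function that enforces a noise allowance. The system matrix A, the allowance eps2, and the objective-only formulation are illustrative assumptions; the abstract's Newton solver is not reproduced.

```python
# Minimal sketch of the log-barrier objective: TV plus a -log whose argument
# is the remaining noise allowance, removing the need for a weight lambda.
# A (system matrix), b (measured data), and eps2 (allowance) are assumed.
import numpy as np

def tv(x2d):
    return (np.sum(np.abs(np.diff(x2d, axis=0))) +
            np.sum(np.abs(np.diff(x2d, axis=1))))

def barrier_objective(x2d, A, b, eps2):
    r = A @ x2d.ravel() - b
    slack = eps2 - r @ r          # must stay positive for a feasible image
    return np.inf if slack <= 0 else tv(x2d) - np.log(slack)
```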

  3. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  4. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-07

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels

  5. On recovery of block-sparse signals via mixed l2/lq (0 < q ≤ 1) norm minimization

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Wang, Jianjun; Xu, Zongben

    2013-12-01

    Compressed sensing (CS) states that a sparse signal can be exactly recovered from very few linear measurements. In many applications, however, real-world signals also exhibit additional structure beyond standard sparsity. A typical example is the so-called block-sparse signal, whose non-zero coefficients occur in a few blocks. In this article, we investigate the mixed l2/lq (0 < q ≤ 1) norm minimization method for the exact and robust recovery of such block-sparse signals. We mainly show that the non-convex l2/lq (0 < q < 1) minimization method has stronger sparsity-promoting ability than the commonly used l2/l1 minimization method, both practically and theoretically. In terms of a block variant of the restricted isometry property of the measurement matrix, we present weaker sufficient conditions for exact and robust block-sparse signal recovery than those known for l2/l1 minimization. We also propose an efficient Iteratively Reweighted Least-Squares (IRLS) algorithm for the induced non-convex optimization problem. The obtained weaker conditions and the proposed IRLS algorithm are tested and compared with the mixed l2/l1 minimization method and the standard lq minimization method on a series of noiseless and noisy block-sparse signals. All the comparisons demonstrate the superior performance of the mixed l2/lq (0 < q < 1) method for block-sparse signal recovery applications, and its value for the development of new CS techniques.
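
    The IRLS iteration for the noiseless mixed l2/lq problem can be sketched as below: each pass solves a weighted least-norm problem whose block weights shrink small blocks toward zero. The block layout (contiguous blocks of equal length d), the smoothing eps, and the iteration count are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: IRLS for noiseless mixed l2/lq recovery of a block-sparse
# x from y = A x, with contiguous blocks of equal length d.
import numpy as np

def block_irls(A, y, d, q=0.5, n_iter=50, eps=1e-6):
    x = A.T @ np.linalg.solve(A @ A.T, y)           # least-norm start
    for _ in range(n_iter):
        norms2 = np.sum(x.reshape(-1, d) ** 2, axis=1)
        # Per-block weights; q < 1 promotes block sparsity more strongly.
        W = np.repeat((norms2 + eps ** 2) ** (1 - q / 2), d)
        AW = A * W                                   # A @ diag(W)
        x = W * (A.T @ np.linalg.solve(AW @ A.T, y))
    return x
```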

  6. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimal edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single-view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better

  7. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.; Hamlin, Timothy D.; Light, Tess E.; Suszcynsky, David M.

    2013-05-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database, comprising five years of data recorded from its two RF payloads. While some classification work has been done previously on the FORTE RF database, application of modern pattern recognition techniques may advance lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification scenarios and future development.
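
    Feature extraction by matching pursuit over a dictionary, as described above, can be sketched in a few lines: the residual is repeatedly correlated against all atoms and deflated by the best match, and the resulting coefficient vector serves as a classification feature. Unit-norm atoms and the atom budget are assumptions; the dictionary learning step itself is not shown.

```python
# Minimal sketch: matching-pursuit feature extraction over a dictionary D
# whose columns are unit-norm atoms (learned or analytical). The coefficient
# vector is the sparse classification feature described above.
import numpy as np

def matching_pursuit(signal, D, n_atoms=10):
    r = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r                  # correlate residual with every atom
        k = np.argmax(np.abs(corr))     # best-matching atom
        coeffs[k] += corr[k]
        r -= corr[k] * D[:, k]          # deflate the residual
    return coeffs, r
```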

  8. Multiscale Transient Signal Detection: Localizing Transients in Geodetic Data Through Wavelet Transforms and Sparse Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Riel, B.; Simons, M.; Agram, P.

    2012-12-01

    Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic accumulation of strain in the crust. Over seismically and volcanically active regions, these signals are often challenging to detect due to noise and other modes of deformation. Geodetic datasets that provide precise measurements of surface displacement over wide areas are ideal for exploiting both the spatial and temporal coherence of transient signals. We present an extension to the Multiscale InSAR Time Series (MInTS) approach for analyzing geodetic data by combining the localization benefits of wavelet transforms (localizing signals in space) with sparse optimization techniques (localizing signals in time). Our time parameterization approach allows us to reduce geodetic time series to sparse, compressible signals with very few non-zero coefficients corresponding to transient events. We first demonstrate the temporal transient detection by analyzing GPS data over the Long Valley caldera in California and along the San Andreas fault near Parkfield, CA. For Long Valley, we are able to resolve the documented 2002-2003 uplift event with greater temporal precision. Similarly for Parkfield, we model the postseismic deformation by specific integrated basis splines characterized by timescales that are largely consistent with postseismic relaxation times. We then apply our method to ERS and Envisat InSAR datasets consisting of over 200 interferograms for Long Valley and over 100 interferograms for Parkfield. The wavelet transforms reduce the impact of spatially correlated atmospheric noise common in InSAR data since the wavelet coefficients themselves are essentially uncorrelated. The spatial density and extended temporal coverage of the InSAR data allows us to effectively localize ground deformation events in both space and time with greater precision than has been previously accomplished.

  9. Multilayer material characterization using thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Beemer, Maria Frendberg

    2016-02-01

    Active thermography has become a well-established Nondestructive Testing (NDT) method for the detection of subsurface flaws. In its simplest form, flaw detection is based on visual identification of contrast between a flaw and local intact regions in an IR image sequence of the surface temperature as the sample responds to thermal stimulation. However, additional information and insight can be obtained from the sequence, even in the absence of a flaw, through analysis of the logarithmic derivatives of individual pixel time histories using the Thermographic Signal Reconstruction (TSR) method. For example, the response of a flaw-free multilayer sample to thermal stimulation can be viewed as a simple transition between the responses of infinitely thick samples of the individual constituent layers over the lifetime of the thermal diffusion process. The transition is represented compactly and uniquely by the logarithmic derivatives, based on the ratio of thermal effusivities of the layers. A spectrum of derivative responses relative to thermal effusivity ratios allows prediction of the time scale and detectability of the interface, and measurement of the thermophysical properties of one layer if the properties of the other are known. A similar transition between steady diffusion states occurs for flat-bottom holes, based on the hole aspect ratio.

  10. Subthreshold membrane responses underlying sparse spiking to natural vocal signals in auditory cortex

    PubMed Central

    Perks, Krista Eva; Gentner, Timothy Q.

    2015-01-01

    Natural acoustic communication signals, such as speech, are typically high-dimensional, with a wide range of co-varying spectro-temporal features at multiple timescales. The synaptic and network mechanisms for encoding these complex signals are largely unknown. We are investigating these mechanisms in high-level sensory regions of the songbird auditory forebrain, where single neurons show sparse, object-selective spiking responses to conspecific songs. Using whole-cell in vivo patch-clamp techniques in the caudal mesopallium and the caudal nidopallium of starlings, we examine song-driven subthreshold and spiking activity. We find that both the subthreshold and the spiking activity are reliable (i.e., the same song drives a similar response each time it is presented) and specific (i.e., responses to different songs are distinct). Surprisingly, however, the reliability and specificity of the subthreshold response was uniformly high regardless of when the cell spiked, even for song stimuli that drove no spikes. We conclude that despite a selective and sparse spiking response, high-level auditory cortical neurons are under continuous, non-selective, stimulus-specific synaptic control. To investigate the role of local network inhibition in this synaptic control, we then recorded extracellularly while pharmacologically blocking local GABA-ergic transmission. This manipulation modulated the strength and the reliability of stimulus-driven spiking, consistent with a role for local inhibition in regulating the reliability of network activity and the stimulus specificity of the subthreshold response in single cells. We discuss these results in the context of underlying computations that could generate sparse, stimulus-selective spiking responses, and models for hierarchical pooling. PMID:25728189

  11. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Hamlin, T.; Light, T. E.; Loveland, R. C.; Smith, D. A.; Suszcynsky, D. M.

    2012-12-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory (LANL) to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Arguably the richest satellite lightning database ever recorded is that from the Fast On-orbit Recording of Transient Events (FORTE) satellite, which returned at least five years of data from its two RF payloads after launch in 1997. While some classification work has been done previously on the LANL FORTE RF database, application of modern pattern recognition techniques may further lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Extracting classification features from RF signals typically relies on knowledge of the application domain in order to find feature vectors unique to a signal class and robust against background noise. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification performance

  12. Learning distance function for regression-based 4D pulmonary trunk model reconstruction estimated from sparse MRI data

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Tsymbal, Alexey; Ionasec, Razvan; Georgescu, Bogdan; Zhou, Shaohua K.; Hornegger, Joachim; Comaniciu, Dorin

    2011-03-01

    Congenital heart defect (CHD) is the most common birth defect and a frequent cause of death for children. Tetralogy of Fallot (ToF) is the most frequently occurring CHD, which affects in particular the pulmonary valve and trunk. Emerging interventional methods enable percutaneous pulmonary valve implantation, which constitutes an alternative to open heart surgery. While minimally invasive methods become common practice, imaging and non-invasive assessment tools become crucial components in the clinical setting. Cardiac computed tomography (CT) and cardiac magnetic resonance imaging (cMRI) are techniques with complementary properties and the ability to acquire multiple non-invasive and accurate scans required for advanced evaluation and therapy planning. In contrast to CT, which covers the full 4D information over the cardiac cycle, cMRI often acquires partial information, for example only one 3D scan of the whole heart in the end-diastolic phase and two 2D planes (long and short axes) over the whole cardiac cycle. The data acquired in this way is called sparse cMRI. In this paper, we propose a regression-based approach for the reconstruction of the full 4D pulmonary trunk model from sparse MRI. The reconstruction approach is based on learning a distance function between the sparse MRI, which needs to be completed, and the 4D CT data with the full information used as the training set. The distance is based on the intrinsic Random Forest similarity, which is learnt for the corresponding regression problem of predicting coordinates of unseen mesh points. Extensive experiments performed on 80 cardiac CT and MR sequences demonstrated an average runtime of 10 seconds and an accuracy of 0.1053 mm mean absolute error for the proposed approach. Using the case retrieval workflow and local nearest neighbour regression with the learnt distance function appears to be competitive with respect to "black box" regression with immediate prediction of coordinates, while providing transparency to the

  13. On the estimation of brain signal entropy from sparse neuroimaging data.

    PubMed

    Grandy, Thomas H; Garrett, Douglas D; Schmiedek, Florian; Werkle-Bergner, Markus

    2016-03-29

    Multi-scale entropy (MSE) has been recently established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of the MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs.
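
    The pooling idea can be sketched as follows for a single time scale: template/match counts are accumulated over all discontinuous segments before the log ratio is taken, rather than averaging per-segment entropies. This is a minimal sample-entropy sketch, not the authors' code; the tolerance convention (r as a fraction of the pooled standard deviation) is an assumption.

```python
# Minimal sketch: sample entropy pooled across discontinuous segments.
# Match counts are accumulated over segments before the log ratio, instead
# of averaging per-segment entropies. Assumes each segment length > m + 1.
import numpy as np

def _match_counts(x, m, r):
    n = len(x)
    tm  = np.array([x[i:i + m] for i in range(n - m)])
    tm1 = np.array([x[i:i + m + 1] for i in range(n - m)])
    c_m = c_m1 = 0
    for i in range(n - m):
        c_m  += np.sum(np.max(np.abs(tm - tm[i]), axis=1) <= r) - 1
        c_m1 += np.sum(np.max(np.abs(tm1 - tm1[i]), axis=1) <= r) - 1
    return c_m, c_m1

def pooled_sample_entropy(segments, m=2, r_factor=0.2):
    r = r_factor * np.std(np.concatenate(segments))
    total_m = total_m1 = 0
    for seg in segments:
        c_m, c_m1 = _match_counts(np.asarray(seg, float), m, r)
        total_m, total_m1 = total_m + c_m, total_m1 + c_m1
    return -np.log(total_m1 / total_m)   # pooled estimate for this scale
```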

  14. Robust detection of premature ventricular contractions using sparse signal decomposition and temporal features

    PubMed Central

    Ramkumar, Barathram; Deshpande, Pranav S.; Choudhary, Tilendra

    2015-01-01

    An automated noise-robust premature ventricular contraction (PVC) detection method is proposed based on sparse signal decomposition, temporal features, and decision rules. In this Letter, the authors exploit the sparse expansion of electrocardiogram (ECG) signals on mixed dictionaries to simultaneously enhance the QRS complex and reduce the influence of tall P and T waves, baseline wander, and muscle artefacts. They further investigate a set of ten generalised temporal features combined with a decision-rule-based detection algorithm for discriminating PVC beats from non-PVC beats. The accuracy and robustness of the proposed method is evaluated using 47 ECG recordings from the MIT/BIH arrhythmia database. Evaluation results show that the proposed method achieves an average sensitivity of 89.69% and a specificity of 99.63%. Results further show that the proposed decision-rule-based algorithm with ten generalised features can accurately detect different patterns of PVC beats (uniform and multiform, couplets, triplets, and ventricular tachycardia) in the presence of other normal and abnormal heartbeats. PMID:26713158

  15. On the estimation of brain signal entropy from sparse neuroimaging data

    PubMed Central

    Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus

    2016-01-01

    Multi-scale entropy (MSE) has been recently established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of the MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961

  16. Removal of Nuisance Signals from Limited and Sparse 1H MRSI Data Using a Union-of-Subspaces Model

    PubMed Central

    Ma, Chao; Lam, Fan; Johnson, Curtis L.; Liang, Zhi-Pei

    2015-01-01

    Purpose To remove nuisance signals (e.g., water and lipid signals) for 1H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. Methods A union-of-subspace model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. Results The proposed method has been evaluated using in vivo MRSI data. For conventional CSI data with limited k-space coverage, the proposed method produced “lipid-free” spectra without lipid suppression during data acquisition at 130 ms echo time. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Conclusions Nuisance signals in 1H MRSI data reside in low-dimensional subspaces. This property can be utilized for estimation and removal of nuisance signals from 1H MRSI data even when they have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution 1H MRSI of the brain. PMID:25762370

  17. An algorithm for extraction of periodic signals from sparse, irregularly sampled data

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1994-01-01

    Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-to-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-to-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
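
    The 'clean'-style loop described above can be sketched as follows: sinusoids are fit by least squares on a trial frequency grid directly at the irregular sample times, the dominant component is identified, and its fitted waveform is subtracted from the residual before the next pass. The grid, the component count, and the direct least-squares fit are illustrative simplifications of the FFT-based algorithm.

```python
# Minimal sketch of a 'clean'-style extraction loop for irregular sampling:
# fit sinusoids on a trial frequency grid by least squares at the actual
# sample times, keep the dominant component, subtract, and repeat.
import numpy as np

def clean_periodic(t, y, freqs, n_components=3):
    resid = np.asarray(y, float).copy()
    found = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            G = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            ab, *_ = np.linalg.lstsq(G, resid, rcond=None)
            power = ab @ ab                # squared amplitude of the fit
            if best is None or power > best[0]:
                best = (power, f, G @ ab)
        power, f, fit = best
        found.append((f, np.sqrt(power)))  # (frequency, amplitude)
        resid = resid - fit                # remove the matched component
    return found, resid
```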

  18. Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhao, Yuxin; Wang, Wenwu; Hao, Yanling

    2015-12-01

    The cosparse analysis model has been introduced as an interesting alternative to the standard sparse synthesis model. Given a set of corrupted measurements, finding a signal belonging to this model is known as analysis pursuit, which is an important problem in analysis-model-based sparse representation. Several pursuit methods have already been proposed, such as methods based on l1-relaxation and greedy approaches based on the cosparsity of the signal. This paper presents a novel greedy-like algorithm, called Cosparsity-based Stagewise Matching Pursuit (CSMP), where the cosparsity of the target signal is estimated adaptively with a stagewise approach composed of forward and backward processes. In the forward process, the cosparsity is estimated and the signal is approximated, followed by the refinement of the cosparsity and the signal in the backward process. As a result, the target signal can be reconstructed without prior information on the cosparsity level. Experiments show that the performance of the proposed algorithm is comparable to those of l1-relaxation and Analysis Subspace Pursuit (ASP)/Analysis Compressive Sampling Matching Pursuit (ACoSaMP) in the noiseless case, and better than that of Greedy Analysis Pursuit (GAP) in the noisy case.

  19. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Chai, Yi; Su, Chunxiao

    2013-08-01

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitudes, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulations and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
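
    The reweighted ℓ1 component of the method can be sketched independently of the dictionary-update step: an inner proximal-gradient (ISTA) loop solves a weighted ℓ1 problem, and an outer loop re-derives the weights so that large coefficients are penalized less on the next pass. The step-size rule, loop counts, and smoothing eps are illustrative assumptions.

```python
# Minimal sketch: iterative reweighted l1 via proximal gradient (ISTA).
# The inner loop solves a weighted-l1 problem; the outer loop re-derives
# the weights so strong coefficients are penalized less on the next pass.
import numpy as np

def soft(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def reweighted_l1(A, y, lam=0.1, outer=5, inner=200, eps=1e-2):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    w = np.ones_like(x)
    for _ in range(outer):
        for _ in range(inner):
            x = soft(x - step * (A.T @ (A @ x - y)), step * lam * w)
        w = 1.0 / (np.abs(x) + eps)          # reweight for the next pass
    return x
```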

  20. The sparse matrix transform for covariance estimation and analysis of high dimensional signals.

    PubMed

    Cao, Guangzhi; Bachega, Leonardo R; Bouman, Charles A

    2011-03-01

    Covariance estimation for high dimensional signals is a classically difficult problem in statistical signal analysis and machine learning. In this paper, we propose a maximum likelihood (ML) approach to covariance estimation, which employs a novel non-linear sparsity constraint. More specifically, the covariance is constrained to have an eigen decomposition which can be represented as a sparse matrix transform (SMT). The SMT is formed by a product of pairwise coordinate rotations known as Givens rotations. Using this framework, the covariance can be efficiently estimated using greedy optimization of the log-likelihood function, and the number of Givens rotations can be efficiently computed using a cross-validation procedure. The resulting estimator is generally positive definite and well-conditioned, even when the sample size is limited. Experiments on a combination of simulated data, standard hyperspectral data, and face image sets show that the SMT-based covariance estimates are consistently more accurate than both traditional shrinkage estimates and recently proposed graphical lasso estimates for a variety of different classes and sample sizes. An important property of the new covariance estimate is that it naturally yields a fast implementation of the estimated eigen-transformation using the SMT representation. In fact, the SMT can be viewed as a generalization of the classical fast Fourier transform (FFT) in that it uses "butterflies" to represent an orthonormal transform. However, unlike the FFT, the SMT can be used for fast eigen-signal analysis of general non-stationary signals.
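
    The greedy construction of the SMT can be sketched as follows, with one simplification: the coordinate pair is chosen by the largest off-diagonal covariance magnitude, whereas the paper's selection is driven by the likelihood criterion. Each step applies a Givens rotation that zeroes the selected pair, and the accumulated rotations form the sparse eigen-transform.

```python
# Minimal sketch of the greedy SMT construction: repeatedly zero the largest
# off-diagonal covariance entry with a Givens rotation. (Simplified pair
# selection; the paper's criterion is likelihood-based.)
import numpy as np

def smt_rotations(S, n_rotations):
    S = S.copy()
    p = S.shape[0]
    rotations = []
    for _ in range(n_rotations):
        off = np.abs(S - np.diag(np.diag(S)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        theta = 0.5 * np.arctan2(2 * S[i, j], S[i, i] - S[j, j])
        G = np.eye(p)
        c, s = np.cos(theta), np.sin(theta)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s
        S = G.T @ S @ G                    # decorrelates the pair (i, j)
        rotations.append((i, j, theta))
    return rotations, S                    # S is driven toward diagonal
```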

  1. Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring

    PubMed Central

    Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos

    2016-01-01

    This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to difficulties in reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040

  2. Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring.

    PubMed

    Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos

    2016-09-07

    This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to difficulties in reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure.

  3. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.

  4. Weak signal detection in hyperspectral imagery using sparse matrix transform (SMT) covariance estimation

    SciTech Connect

    Theiler, James P; Cao, Guangzhi; Bouman, Charles A

    2009-01-01

    Many detection algorithms in hyperspectral image analysis, from well-characterized gaseous and solid targets to deliberately uncharacterized anomalies and anomalous changes, depend on accurately estimating the covariance matrix of the background. In practice, the background covariance is estimated from samples in the image, and imprecision in this estimate can lead to a loss of detection power. In this paper, we describe the sparse matrix transform (SMT) and investigate its utility for estimating the covariance matrix from a limited number of samples. The SMT is formed by a product of pairwise coordinate (Givens) rotations, which can be efficiently estimated using greedy optimization. Experiments on hyperspectral data show that the estimate accurately reproduces even small eigenvalues and eigenvectors. In particular, we find that using the SMT to estimate the covariance matrix used in the adaptive matched filter leads to consistently higher signal-to-noise ratios.
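
    For reference, the adaptive matched filter that consumes the covariance estimate can be written in a few lines; a better-conditioned Σ, such as the SMT estimate, is precisely what raises the reported signal-to-noise ratios. This is the standard AMF statistic, not code from the paper, and the variable names are illustrative.

```python
# The standard adaptive matched filter statistic; Sigma comes from the SMT
# (or any other) covariance estimate. Names are illustrative.
import numpy as np

def amf_statistic(x, s, Sigma):
    """x: pixel spectrum, s: target signature, Sigma: background covariance."""
    Si = np.linalg.inv(Sigma)
    return (s @ Si @ x) ** 2 / (s @ Si @ s)
```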

  5. Sparse matrix beamforming and image reconstruction for real-time 2D HIFU monitoring using Harmonic Motion Imaging for Focused Ultrasound (HMIFU) with in vitro validation

    PubMed Central

    Hou, Gary Y.; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E.

    2015-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed High-Intensity Focused Ultrasound (HIFU) treatment monitoring method. HMIFU utilizes an Amplitude-Modulated (fAM = 25 Hz) HIFU beam to induce a localized focal oscillatory motion, which is simultaneously estimated and imaged by a confocally-aligned imaging transducer. HMIFU feasibility has been previously shown in silico, in vitro, and in vivo in 1-D or 2-D monitoring of HIFU treatment. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented on a fully integrated, clinically relevant HMIFU system composed of a 93-element HIFU transducer (fcenter = 4.5 MHz) and a coaxially-aligned 64-element phased array (fcenter = 2.5 MHz) for displacement excitation and motion estimation, respectively. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphic user interface. The present work implemented sparse-matrix beamforming on a fully-integrated, clinically relevant system, which can stream displacement images up to 15 Hz using GPU-based processing, a 100-fold increase in the rate of streaming displacement images compared to conventional CPU-based beamforming and reconstruction processing. The achieved feedback rate is also currently the fastest among the acoustic radiation force based HIFU imaging techniques, and the only approach that does not require interrupting the HIFU treatment. Results in phantom experiments showed reproducible displacement imaging, and monitoring of twenty-two in vitro HIFU treatments using the new 2D system showed a
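
    The core idea of sparse-matrix beamforming can be sketched as follows: since delay-and-sum gathers exactly one delayed sample per channel for each image point, the whole beamformer is one sparse matrix applied to the flattened RF frame, which is what makes a GPU (or CPU) sparse product fast. Integer delays, uniform apodization, and all names here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: delay-and-sum beamforming as one sparse matrix product.
# Each image point sums one (integer-)delayed sample per channel, so the
# operator has n_chan nonzeros per row and can be precomputed once.
import numpy as np
from scipy.sparse import csr_matrix

def build_das_matrix(delays, n_time, n_chan):
    """delays: (n_pix, n_chan) integer sample delays per image point."""
    rows, cols, vals = [], [], []
    for p in range(delays.shape[0]):
        for c in range(n_chan):
            d = delays[p, c]
            if 0 <= d < n_time:
                rows.append(p)
                cols.append(c * n_time + d)  # index into flattened RF frame
                vals.append(1.0 / n_chan)    # uniform apodization (assumed)
    return csr_matrix((vals, (rows, cols)),
                      shape=(delays.shape[0], n_chan * n_time))

# Per frame: beamformed = B @ rf.ravel(), with rf shaped (n_chan, n_time).
```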

  6. Adaptive-weighted Total Variation Minimization for Sparse Data toward Low-dose X-ray Computed Tomography Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621

  7. Real time reconstruction of quasiperiodic multi parameter physiological signals

    NASA Astrophysics Data System (ADS)

    Ganeshapillai, Gartheeban; Guttag, John

    2012-12-01

    A modern intensive care unit (ICU) has automated analysis systems that depend on continuous uninterrupted real time monitoring of physiological signals such as electrocardiogram (ECG), arterial blood pressure (ABP), and photo-plethysmogram (PPG). These signals are often corrupted by noise, artifacts, and missing data. We present an automated learning framework for real time reconstruction of corrupted multi-parameter nonstationary quasiperiodic physiological signals. The key idea is to learn a patient-specific model of the relationships between signals, and then reconstruct corrupted segments using the information available in correlated signals. We evaluated our method on MIT-BIH arrhythmia data, a two-channel ECG dataset with many clinically significant arrhythmias, and on the CinC challenge 2010 data, a multi-parameter dataset containing ECG, ABP, and PPG. For each, we evaluated both the residual distance between the original signals and the reconstructed signals, and the performance of a heartbeat classifier on a reconstructed ECG signal. At an SNR of 0 dB, the average residual distance on the CinC data was roughly 3% of the energy in the signal, and on the arrhythmia database it was roughly 16%. The difference is attributable to the large amount of diversity in the arrhythmia database. Remarkably, despite the relatively high residual difference, the classification accuracy on the arrhythmia database was still 98%, indicating that our method restored the physiologically important aspects of the signal.

  8. Sparse signal decomposition method based on multi-scale chirplet and its application to the fault diagnosis of gearboxes

    NASA Astrophysics Data System (ADS)

    Peng, Fuqiang; Yu, Dejie; Luo, Jiesi

    2011-02-01

    Based on the chirplet path pursuit and the sparse signal decomposition method, a new sparse signal decomposition method based on multi-scale chirplet is proposed and applied to the decomposition of vibration signals from gearboxes in fault diagnosis. An over-complete dictionary with multi-scale chirplets as its atoms is constructed using the method. Because of the multi-scale character, this method is superior to the traditional sparse signal decomposition method wherein only a single scale is adopted, and is more applicable to the decomposition of non-stationary signals with multi-components whose frequencies are time-varying. When there are faults in a gearbox, the vibration signals collected are usually AM-FM signals with multiple components whose frequencies vary with the rotational speed of the shaft. The meshing frequency and modulating frequency, which vary with time, can be derived by the proposed method and can be used in gearbox fault diagnosis under time-varying shaft-rotation speed conditions, where the traditional signal processing methods are always blocked. Both simulations and experiments validate the effectiveness of the proposed method.
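
    A multi-scale chirplet atom of the kind used to build the over-complete dictionary can be sketched directly from its definition: a Gaussian window of scale s centered at t0 modulating a linear frequency sweep. The parameterization below is a generic chirplet form assumed for illustration, not the paper's exact dictionary construction.

```python
# Minimal sketch of a multi-scale chirplet atom: a Gaussian window of scale
# s at time t0 modulating a linear frequency sweep from f0 at rate c.
import numpy as np

def chirplet(t, t0, s, f0, c):
    window = np.exp(-((t - t0) ** 2) / (2 * s ** 2))
    phase = 2 * np.pi * (f0 * (t - t0) + 0.5 * c * (t - t0) ** 2)
    atom = window * np.cos(phase)
    return atom / np.linalg.norm(atom)   # unit norm, ready for pursuit
```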

  9. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR).

    PubMed

    Wang, Tonghe; Zhu, Lei

    2016-09-21

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on a second CT image since DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered image by the similarity matrix under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained-compressed-sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves over TVR on the reconstruction accuracy of a 10-view scan by decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR on spatial resolution at 50- and 20-view scans, achieving a higher frequency at the 10% modulation transfer function value by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an

  10. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR)

    NASA Astrophysics Data System (ADS)

    Wang, Tonghe; Zhu, Lei

    2016-09-01

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on a second CT image since DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered image by the similarity matrix under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained-compressed-sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves over TVR on the reconstruction accuracy of a 10-view scan by decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR on spatial resolution at 50- and 20-view scans, achieving a higher frequency at the 10% modulation transfer function value by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an

  11. Sparse matrix beamforming and image reconstruction for 2-D HIFU monitoring using harmonic motion imaging for focused ultrasound (HMIFU) with in vitro validation.

    PubMed

    Hou, Gary Y; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E

    2014-11-01

    Harmonic motion imaging for focused ultrasound (HMIFU) utilizes an amplitude-modulated HIFU beam to induce a localized focal oscillatory motion, which is simultaneously estimated and imaged. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented on a fully integrated, clinically relevant HMIFU system. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphic user interface with frame rates up to 15 Hz, a 100-fold increase compared to conventional CPU-based processing. The real-time feedback rate does not require interrupting the HIFU treatment. Results in phantom experiments showed reproducible HMI images, and monitoring of 22 in vitro HIFU treatments using the new 2-D system showed a consistent average focal displacement decrease of 46.7 ± 14.6% during lesion formation. Complementary focal temperature monitoring also indicated average rates of displacement increase and decrease with focal temperature of 0.84 ± 1.15%/°C and 2.03 ± 0.93%/°C, respectively. These results reinforce the HMIFU capability of estimating and monitoring stiffness-related changes in real time. Current ongoing studies include clinical translation of the presented system for monitoring of HIFU treatment for breast and pancreatic tumor applications.

  12. Quantification of ¹H-MRS signals based on sparse metabolite profiles in the time-frequency domain.

    PubMed

    Parto Dezfouli, Mohammad Ali; Parto Dezfouli, Mohsen; Ahmadian, Alireza; Frangi, Alejandro F; Esmaeili Rad, Melika; Saligheh Rad, Hamidreza

    2017-02-01

    MRS is an analytical approach used for both quantitative and qualitative analysis of human body metabolites. The accurate and robust quantification capability of proton MRS (¹H-MRS) enables accurate estimation of living tissue metabolite concentrations. However, such methods can be efficiently employed for quantification of metabolite concentrations only if the overlapping nature of metabolites, the existing static field inhomogeneity, and the low signal-to-noise ratio (SNR) are taken into consideration. Representation of ¹H-MRS signals in the time-frequency domain enables us to handle the baseline and noise better. This is possible because the MRS signal of each metabolite is sparsely represented, with only a few peaks, in the frequency domain, yet retains specific time-domain features such as a distinct decay constant associated with the T2 relaxation rate. The baseline, by contrast, has a smooth behavior in the frequency domain. In this study, we propose a quantification method using continuous wavelet transformation of ¹H-MRS signals in combination with sparse representation of features in the time-frequency domain. Estimation of the sparse representations of MR spectra is performed according to dictionaries constructed from metabolite profiles. Results on simulated and phantom data show that the proposed method is able to quantify the concentration of metabolites in ¹H-MRS signals with high accuracy and robustness. This is achieved for both low-SNR (5 dB) and low signal-to-baseline ratio (-5 dB) regimes.

  13. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    DOEpatents

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.

  14. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise remains a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  15. Adaptive sparse signal processing of satellite-based radio frequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2014-05-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events are dispersed through the ionosphere and appear as broadband nonlinear chirps at a receiver on-orbit. They occur in the presence of additive noise and structured clutter, making their classification challenging. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. Conventional feature extraction techniques using analytical dictionaries, such as a short-time Fourier basis or wavelets, are not well suited to analyzing the broadband RF pulses under consideration here. We explore an alternative approach based on non-analytical dictionaries learned directly from data, and extend two dictionary learning algorithms, K-SVD and Hebbian, for use with satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features, and discuss their performance in terms of event classification. We also use principal component analysis to analyze and compare the respective learned dictionary spaces to the real data space.
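
    The learn-then-pursue pipeline the abstract describes can be mocked up in a few lines with scikit-learn, substituting its online dictionary learner for the K-SVD/Hebbian algorithms named in the paper (neither of which ships with scikit-learn) and random vectors for the RF snippets; everything below is therefore a stand-in, not the authors' code.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 128))    # stand-in for windowed RF chirp snippets

    # learn a non-analytical dictionary directly from the data
    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
    D = dico.fit(X).components_            # 64 learned atoms of length 128

    # pursuit search over the learned dictionary -> sparse features
    codes = sparse_encode(X, D, algorithm='omp', n_nonzero_coefs=5)
    features = np.abs(codes)               # feed these to a classifier
    ```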

  16. Signal enhanced holographic fluorescence microscopy with guide-star reconstruction

    PubMed Central

    Jang, Changwon; Clark, David C.; Kim, Jonghyun; Lee, Byoungho; Kim, Myung K.

    2016-01-01

    We propose a signal-enhanced guide-star reconstruction method for holographic fluorescence microscopy. In the late 00's, incoherent digital holography started to be vigorously studied by several groups to overcome the limitations of conventional digital holography. The basic concept of incoherent digital holography is to acquire the complex hologram from incoherent light by utilizing the temporal coherency of a spatially incoherent light source. The advent of incoherent digital holography opened the new possibility of holographic fluorescence microscopy (HFM), which was difficult to achieve with conventional digital holography. However, an important issue in HFM has been the low and noisy signal, which slows down the system and degrades imaging quality. When guide-star reconstruction is adopted, image reconstruction gives an improved result compared to the conventional propagation reconstruction method. The guide-star reconstruction method yields a higher imaging signal-to-noise ratio since the acquired complex point spread function provides optimal system-adaptive information and can restore a signal buried in noise more efficiently. We present a theoretical explanation and simulations as well as experimental results. PMID:27446653

  17. Sparse multidimensional iterative lineshape-enhanced (SMILE) reconstruction of both non-uniformly sampled and conventional NMR data.

    PubMed

    Ying, Jinfa; Delaglio, Frank; Torchia, Dennis A; Bax, Ad

    2016-11-19

    Implementation of a new algorithm, SMILE, is described for reconstruction of non-uniformly sampled two-, three- and four-dimensional NMR data, which takes advantage of the known phases of the NMR spectrum and the exponential decay of the underlying time domain signals. The method is very robust with respect to the chosen sampling protocol and, in its default mode, also extends the truncated time domain signals by a modest number of non-sampled zeros. SMILE can likewise be used to extend conventional uniformly sampled data, as an effective multidimensional alternative to linear prediction. The program is provided as a plug-in to the widely used NMRPipe software suite, and can be used with default parameters for mainstream applications, or with user control over the iterative process to possibly further improve reconstruction quality and to lower the demand on computational resources. For large data sets, the method is robust and demonstrated for sparsities down to ca. 1%, and final all-real spectral sizes as large as 300 GB. Comparison between fully sampled, conventionally processed spectra and randomly selected NUS subsets of these data shows that the reconstruction quality approaches the theoretical limit in terms of peak position fidelity and intensity. SMILE essentially removes the noise-like appearance associated with the point-spread function of signals that are, by default, five-fold above the noise level, but impacts the actual thermal noise in the NMR spectra only minimally. Therefore, the appearance and interpretation of SMILE-reconstructed spectra are very similar to those of fully sampled spectra generated by Fourier transformation.

  18. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were assembled into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI and rsfMRI data across multiple subjects were assembled into another big data matrix, which was further sparsely represented by a cross-subject common dictionary and a weight matrix. This framework was applied to the recently released Human Connectome Project (HCP) fMRI data, and experimental results revealed that there are distinctive and descriptive atoms in the cross-subject common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted; e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big data of tfMRI and rsfMRI signals across all subjects in the HCP Q1 release.

  19. Gearbox fault diagnosis using adaptive zero phase time-varying filter based on multi-scale chirplet sparse signal decomposition

    NASA Astrophysics Data System (ADS)

    Wu, Chunyan; Liu, Jian; Peng, Fuqiang; Yu, Dejie; Li, Rong

    2013-07-01

    When used for separating multi-component non-stationary signals, the adaptive time-varying filter (ATF) based on multi-scale chirplet sparse signal decomposition (MCSSD) generates phase shift and signal distortion. To overcome this drawback, the zero phase filter is introduced into the aforementioned filter, and a fault diagnosis method for speed-changing gearboxes is proposed. Firstly, the gear meshing frequency of each gearbox is estimated by chirplet path pursuit. Then, according to the estimated gear meshing frequencies, an adaptive zero phase time-varying filter (AZPTF) is designed to filter the original signal. Finally, the basis for fault diagnosis is acquired by envelope order analysis of the filtered signal. A signal consisting of two time-varying amplitude-modulation and frequency-modulation (AM-FM) components is analyzed by both ATF and AZPTF based on MCSSD. The simulation results show that the variances between the original signals and the filtered signals yielded by AZPTF based on MCSSD are 13.67 and 41.14, far less than the variances (323.45 and 482.86) between the original signals and the filtered signals obtained by ATF based on MCSSD. The experimental results on the vibration signals of gearboxes indicate that the vibration signals of two speed-changing gearboxes installed on one foundation bed can be separated effectively by AZPTF. Based on the demodulation information of the vibration signal of each gearbox, fault diagnosis can be implemented. Both simulation and experimental examples prove that the proposed filter can extract a mono-component time-varying AM-FM signal from a multi-component time-varying AM-FM signal without distortion.
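
    The zero-phase idea itself is easy to demonstrate: filtering a signal forward and then backward cancels the phase lag that a causal pass introduces. The SciPy sketch below shows this with a fixed low-pass filter; the paper's filter is additionally time-varying, tracking the estimated meshing frequencies, which this toy deliberately omits.

    ```python
    import numpy as np
    from scipy import signal

    fs = 2000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

    b, a = signal.butter(4, 100.0 / (fs / 2.0), btype='low')
    y_causal = signal.lfilter(b, a, x)    # one pass: amplitude ok, phase lag
    y_zero = signal.filtfilt(b, a, x)     # forward-backward: zero phase shift
    ```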

  20. Two-dimensional signal reconstruction: The correlation sampling method

    SciTech Connect

    Roman, H. E.

    2007-12-15

    An accurate approach for reconstructing a time-dependent two-dimensional signal from non-synchronized time series recorded at points located on a grid is discussed. The method, denoted as correlation sampling, improves the standard conditional sampling approach commonly employed in the study of turbulence in magnetoplasma devices. Its implementation is illustrated in the case of an artificial time-dependent signal constructed using a fractal algorithm that simulates a fluctuating surface. A statistical method is also discussed for distinguishing coherent (i.e., collective) from purely random (noisy) behavior for such two-dimensional fluctuating phenomena.

  1. Reconstruction of Thermographic Signals to Map Perforator Vessels in Humans.

    PubMed

    Liu, Wei-Min; Maivelett, Jordan; Kato, Gregory J; Taylor, James G; Yang, Wen-Chin; Liu, Yun-Chung; Yang, You-Gang; Gorbach, Alexander M

    2012-01-01

    Thermal representations on the surface of a human forearm of underlying perforator vessels have previously been mapped via recovery-enhanced infrared imaging, which is performed as skin blood flow recovers to baseline levels following cooling of the forearm. We noted that the same vessels could also be observed during reactive hyperaemia tests after complete 5-min occlusion of the forearm by an inflatable cuff. However, not all subjects showed vessels with acceptable contrast. Therefore, we applied a thermographic signal reconstruction algorithm to reactive hyperaemia testing, which substantially enhanced signal-to-noise ratios between perforator vessels and their surroundings, thereby enabling their mapping with higher accuracy and a shorter occlusion period.

  2. Network component analysis: reconstruction of regulatory signals in biological systems.

    PubMed

    Liao, James C; Boscolo, Riccardo; Yang, Young-Lyeol; Tran, Linh My; Sabatti, Chiara; Roychowdhury, Vwani P

    2003-12-23

    High-dimensional data sets generated by high-throughput technologies, such as DNA microarrays, are often the outputs of complex networked systems driven by hidden regulatory signals. Traditional statistical methods for computing low-dimensional or hidden representations of these data sets, such as principal component analysis and independent component analysis, ignore the underlying network structures and provide decompositions based purely on a priori statistical constraints on the computed component signals. The resulting decomposition thus provides a phenomenological model for the observed data and does not necessarily contain physically or biologically meaningful signals. Here, we develop a method, called network component analysis, for uncovering hidden regulatory signals from the outputs of networked systems when only partial knowledge of the underlying network topology is available. The a priori network structure information is first tested for compliance with a set of identifiability criteria. For networks that satisfy the criteria, the signals from the regulatory nodes and their strengths of influence on each output node can be faithfully reconstructed. This method is first validated experimentally by using the absorbance spectra of a network of various hemoglobin species. The method is then applied to microarray data generated from the yeast Saccharomyces cerevisiae, and the activities of various transcription factors during the cell cycle are reconstructed by using recently discovered connectivity information for the underlying transcriptional regulatory networks.

  3. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol

    PubMed Central

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol, and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed “ASR-TV-POCS.” To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation. PMID:24977611
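
    A TV-POCS iteration of the kind named here is commonly implemented as alternating projections: a step toward agreement with the measured sinogram, a projection onto simple constraint sets such as non-negativity, and a few steepest-descent steps on the total variation. The NumPy sketch below follows that generic pattern; the `forward`/`backward` projector pair and all step sizes are placeholders, and the paper's ASR pre-step is not shown.

    ```python
    import numpy as np

    def tv_grad(u, eps=1e-8):
        """Gradient of a smoothed isotropic total variation of image u."""
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        dx, dy = ux / mag, uy / mag
        div = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
        return -div

    def tv_pocs(x0, forward, backward, sino, n_outer=20, n_tv=10,
                step_data=0.2, step_tv=0.02):
        """forward/backward: hypothetical projector and its adjoint."""
        x = x0.copy()
        for _ in range(n_outer):
            x -= step_data * backward(forward(x) - sino)  # data constraint
            np.clip(x, 0.0, None, out=x)                  # non-negativity set
            for _ in range(n_tv):
                x -= step_tv * tv_grad(x)                 # TV descent steps
        return x
    ```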

  4. Adaptive multimode signal reconstruction from time–frequency representations

    PubMed Central

    Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick

    2016-01-01

    This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM–FM signals from their time–frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM–FM component, or mode, spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the locations where the direction of the reassignment vector sharply changes, with the technique used to determine the basin of attraction directly derived from that used for ridge extraction. The second uses the fact that the STFT of a signal is fully characterized by its zeros (and the particular distribution of these zeros for Gaussian noise) to deduce an algorithm that computes the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184
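
    The final step, integrating the STFT over a mode's basin and inverting, is easy to picture with SciPy. In the sketch below the ridge of a chirp is assumed known so that a band around it can serve as the basin; the paper's contribution is precisely the estimation of such regions (from reassignment vectors or STFT zeros), which this toy skips.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    fs = 1000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    x = (np.cos(2 * np.pi * (50 * t + 10 * t ** 2))   # chirp, IF = 50 + 20 t
         + np.cos(2 * np.pi * 200 * t))               # plus a pure tone

    f, tt, Z = stft(x, fs=fs, nperseg=256)

    ridge = 50.0 + 20.0 * tt                          # assumed-known ridge
    basin = np.abs(f[:, None] - ridge[None, :]) < 15.0  # crude TF "basin"

    _, x_mode = istft(Z * basin, fs=fs, nperseg=256)  # reconstructed mode 1
    ```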

  5. Sparse Sensing of Aerodynamic Loads on Insect Wings

    NASA Astrophysics Data System (ADS)

    Manohar, Krithika; Brunton, Steven; Kutz, J. Nathan

    2015-11-01

    We investigate how insects use sparse sensors on their wings to detect aerodynamic loading and wing deformation, using a coupled fluid-structure model given a periodically flapping input motion. Recent observations suggest that insects collect sensor information about their wing deformation to inform control actions for maneuvering and rejecting gust disturbances. Given a small number of point measurements of the chordwise aerodynamic loads from the sparse sensors, we reconstruct the entire chordwise loading using sparse sensing - a signal processing technique that reconstructs a signal from a small number of measurements using l1-norm minimization of sparse modal coefficients in some basis. We compare reconstructions from sensors randomly sampled from probability distributions biased toward different regions along the wing chord. In this manner, we determine the preferred regions along the chord for sensor placement and for estimating chordwise loads to inform control decisions in flight.
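
    The reconstruction step is classic compressed sensing: if the chordwise load is sparse in some basis, a handful of point sensors suffices to recover it by l1-regularized fitting. Below is a self-contained toy with a DCT-type basis, a synthetic 3-mode load, and ISTA for the l1 solve; the basis, sensor count, and sparsity pattern are all invented for illustration.

    ```python
    import numpy as np

    def ista(A, y, lam=0.01, n_iter=500):
        """Iterative soft-thresholding for min_s 0.5||As - y||^2 + lam||s||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of A^T A
        s = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = s - A.T @ (A @ s - y) / L        # gradient step
            s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
        return s

    rng = np.random.default_rng(1)
    n, m = 128, 32                               # chord grid points, sensors
    k = np.arange(n)
    Psi = np.cos(np.pi * np.outer(k + 0.5, k) / n)   # DCT-II-style basis

    s_true = np.zeros(n)
    s_true[[3, 10, 25]] = [1.0, -0.6, 0.3]       # three active modes
    x_true = Psi @ s_true                        # full chordwise loading

    idx = rng.choice(n, m, replace=False)        # random sensor locations
    s_hat = ista(Psi[idx, :], x_true[idx])
    x_rec = Psi @ s_hat                          # reconstructed loading
    ```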

  6. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    SciTech Connect

    Wang, T; Zhu, L

    2015-06-15

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan, utilizing the redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with the other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain unchanged in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from the reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can

  7. Complexity-reduced digital predistortion for subcarrier multiplexed radio over fiber systems transmitting sparse multi-band RF signals.

    PubMed

    Pei, Yinqing; Xu, Kun; Li, Jianqiang; Zhang, Anxu; Dai, Yitang; Ji, Yuefeng; Lin, Jintong

    2013-02-11

    A novel multi-band digital predistortion (DPD) technique is proposed to linearize subcarrier multiplexed radio-over-fiber (SCM-RoF) systems transmitting a sparse multi-band RF signal with large blank spectra between the constituent RF bands. DPD is performed on the baseband signal of each individual RF band before up-conversion and RF combination. By disregarding the blank spectra, the processing bandwidth of the proposed DPD technique is greatly reduced: it is determined only by the baseband signal bandwidth of each individual RF band, rather than by the entire bandwidth of the combined multi-band RF signal. An experimental demonstration is performed in a directly modulated SCM-RoF system transmitting two 64QAM-modulated OFDM signals on the 2.4 GHz and 3.6 GHz bands. Results show that the adjacent channel power (ACP) is suppressed by 15 dB, leading to significant improvement of the EVM performance of the signals on both bands.

  8. Reconstruction of movement-related intracortical activity from micro-electrocorticogram array signals in monkey primary motor cortex

    NASA Astrophysics Data System (ADS)

    Watanabe, Hidenori; Sato, Masa-aki; Suzuki, Takafumi; Nambu, Atsushi; Nishimura, Yukio; Kawato, Mitsuo; Isa, Tadashi

    2012-06-01

    Subdural electrode arrays provide more stable and less invasive electrocorticogram (ECoG) recordings of neural signals than multichannel needle electrodes. Accurate reconstruction of intracortical local field potentials (LFPs) from ECoG signals would provide a critical step toward the development of a less invasive, high-performance brain-machine interface; however, neural signals from individual ECoG channels are generally coarse and have limitations in estimating deep-layer LFPs. Here, we developed a high-density, 32-channel micro-ECoG array and applied a sparse linear regression algorithm to reconstruct the LFPs at various depths of primary motor cortex (M1) in a monkey performing a reach-and-grasp task. At 0.2 mm beneath the cortical surface, the real and estimated LFPs were significantly correlated (correlation coefficient r = 0.66 ± 0.11), and the r at 3.2 mm was still as high as 0.55 ± 0.04. A time-frequency analysis of the reconstructed LFPs showed clear transitions between rest and movement by the monkey. These methods could be a powerful tool with wide-ranging applicability in neuroscience studies.
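
    The mapping the study learns, from many surface channels to an LFP trace at depth, is a sparse multivariate regression. A scikit-learn Lasso is shown below as a stand-in for the sparse linear regression named in the abstract (the exact algorithm is not specified there); the data are synthetic, with only a few channels truly informative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_t = 5000
    ecog = rng.standard_normal((n_t, 32))          # 32 micro-ECoG channels

    w_true = np.zeros(32)
    w_true[[2, 9, 21]] = [0.8, -0.5, 0.3]          # few informative channels
    lfp = ecog @ w_true + 0.1 * rng.standard_normal(n_t)

    model = Lasso(alpha=0.01).fit(ecog[:4000], lfp[:4000])
    lfp_hat = model.predict(ecog[4000:])           # estimated deep-layer LFP
    r = np.corrcoef(lfp_hat, lfp[4000:])[0, 1]     # cf. reported r of 0.55-0.66
    ```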

  9. Comparison of pulse phase and thermographic signal reconstruction processing methods

    NASA Astrophysics Data System (ADS)

    Oswald-Tranta, Beata; Shepard, Steven M.

    2013-05-01

    Active thermography data for nondestructive testing have traditionally been evaluated by either visual or numerical identification of anomalous surface temperature contrast in the IR image sequence obtained as the target sample cools in response to thermal stimulation. In recent years, however, it has been demonstrated that considerably more information about the subsurface condition of a sample can be obtained by evaluating the time history of each pixel independently. In this paper, we evaluate the capabilities of two such analysis techniques, Pulse Phase Thermography (PPT) and Thermographic Signal Reconstruction (TSR), using induction and optical flash excitation. Data sequences from optical pulse and scanned induction heating are analyzed with both methods. Results are evaluated in terms of the signal-to-background ratio for a given subsurface feature. In addition to the experimental data, we present finite element simulation models with varying flaw diameter and depth, and discuss size measurement accuracy and the effect of noise on detection limits and sensitivity for both methods.
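
    The two per-pixel techniques compared here are simple to state: TSR fits a low-order polynomial to each pixel's cooling curve in log-log space (and differentiates it), while PPT takes the phase of the pixel's FFT. A hedged sketch of both, on an ideal one-dimensional cooling curve, follows; the polynomial order and sampling are illustrative choices.

    ```python
    import numpy as np

    def tsr_fit(t, T, order=5):
        """TSR for one pixel: low-order polynomial fit of the cooling
        curve in log-log space, plus its 1st and 2nd log derivatives."""
        lt, lT = np.log(t), np.log(T)
        c = np.polyfit(lt, lT, order)
        d1 = np.polyval(np.polyder(c), lt)        # 1st logarithmic derivative
        d2 = np.polyval(np.polyder(c, 2), lt)     # 2nd logarithmic derivative
        return np.exp(np.polyval(c, lt)), d1, d2

    def ppt_phase(T_sequence):
        """PPT for one pixel: phase spectrum of the temperature decay."""
        return np.angle(np.fft.rfft(T_sequence))

    t = np.arange(1, 501) / 100.0                 # time after flash (s)
    T = 1.0 / np.sqrt(t) + 0.02                   # ideal 1-D cooling + offset
    T_fit, d1, d2 = tsr_fit(t, T)
    phase = ppt_phase(T)
    ```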

  10. Joint surface reconstruction and 4D deformation estimation from sparse data and prior knowledge for marker-less respiratory motion tracking

    SciTech Connect

    Berkels, Benjamin; Rumpf, Martin; Bauer, Sebastian; Ettl, Svenja; Arold, Oliver; Hornegger, Joachim

    2013-09-15

    Purpose: The intraprocedural tracking of respiratory motion has the potential to substantially improve image-guided diagnosis and interventions. The authors have developed a sparse-to-dense registration approach that is capable of recovering the patient's external 3D body surface and estimating a 4D (3D + time) surface motion field from sparse sampling data and patient-specific prior shape knowledge. Methods: The system utilizes an emerging marker-less, laser-based active triangulation (AT) sensor that delivers sparse but highly accurate 3D measurements in real time. These sparse position measurements are registered with a dense reference surface extracted from planning data. Thereby a dense displacement field is recovered, which describes the spatio-temporal 4D deformation of the complete patient body surface, depending on the type and state of respiration. It yields both a reconstruction of the instantaneous patient shape and a high-dimensional respiratory surrogate for respiratory motion tracking. The method is validated on a 4D CT respiration phantom and evaluated on both real data from an AT prototype and synthetic data sampled from dense surface scans acquired with a structured-light scanner. Results: In the experiments, the authors estimated surface motion fields with the proposed algorithm on 256 datasets from 16 subjects and in different respiration states, achieving a mean surface reconstruction accuracy of ±0.23 mm with respect to ground truth data, down from a mean initial surface mismatch of 5.66 mm. The 95th percentile of the local residual mesh-to-mesh distance after registration did not exceed 1.17 mm for any subject. On average, the total runtime of our proof-of-concept CPU implementation is 2.3 s per frame, outperforming related work substantially. Conclusions: In external beam radiation therapy, the approach holds potential for patient monitoring during treatment using the reconstructed surface, and for motion-compensated dose delivery using

  11. Adaptive sparse reconstruction with joint parametric estimation for high-speed uniformly moving targets in coincidence imaging radar

    NASA Astrophysics Data System (ADS)

    Zha, Guofeng; Wang, Hongqiang; Yang, Zhaocheng; Cheng, Yongqiang; Qin, Yuliang

    2016-04-01

    As a complementary imaging technology, coincidence imaging radar (CIR) achieves high resolution for stationary or low-speed targets under the assumption that the influence of original position mismatching can be ignored. For high-speed targets that move from the original imaging cell to other imaging cells during imaging, it is inaccurate to reconstruct the target using the previous imaging plane. We focus on the recovery problem for high-speed moving targets in a CIR system based on an intrapulse frequency random modulation signal within a single pulse. The effects induced by the motion on the imaging performance are analyzed. Because the basis matrix in the CIR imaging equation is determined by the unknown velocity parameter of the moving target, both the target image and the basis matrix must be estimated jointly. We propose an adaptive joint parametric estimation recovery algorithm based on the Tikhonov regularization method to update the target velocity and basis matrix adaptively and recover the target image synchronously. Finally, the target velocity and target image are obtained in an iterative manner. Simulation results are presented to demonstrate the efficiency of the proposed algorithm.

  12. Truncation Error Analysis on Reconstruction of Signal From Unsymmetrical Local Average Sampling.

    PubMed

    Pang, Yanwei; Song, Zhanjie; Li, Xuelong; Pan, Jing

    2015-10-01

    The classical Shannon sampling theorem is suitable for reconstructing a band-limited signal from its sampled values taken at regular instances with equal steps by using the well-known sinc function. However, due to the inertia of the measurement apparatus, it is impossible to measure the value of a signal precisely at such discrete times. In practice, only unsymmetrical local averages of the signal near the regular instances can be measured and used as the inputs for a signal reconstruction method. In addition, when implemented in hardware, the traditional sinc function cannot be used directly for signal reconstruction. We propose using the Taylor expansion of the sinc function to reconstruct a signal sampled from unsymmetrical local averages, and give an upper bound on the reconstruction error (i.e., the truncation error). The convergence of the reconstruction method is also presented.
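
    The hardware-friendly piece of this idea is replacing the transcendental sinc kernel with a truncated Taylor polynomial, which needs only multiplies and adds. A small sketch follows; the number of terms and the evaluation window are illustrative, and the comment at the end indicates the Shannon-type sum into which the approximate kernel is substituted.

    ```python
    import numpy as np

    def sinc_taylor(x, n_terms=10):
        """Truncated Taylor series of sinc(x) = sin(pi x) / (pi x);
        term_k = term_{k-1} * (-(pi x)^2) / ((2k)(2k + 1))."""
        x = np.asarray(x, dtype=float)
        px2 = (np.pi * x) ** 2
        s = np.ones_like(x)
        term = np.ones_like(x)
        for k in range(1, n_terms):
            term = term * (-px2) / ((2 * k) * (2 * k + 1))
            s = s + term
        return s

    x = np.linspace(-1.5, 1.5, 301)
    err = np.max(np.abs(sinc_taylor(x) - np.sinc(x)))   # small for 10 terms

    # reconstruction then follows the Shannon-type form,
    # f(t) ~ sum_n a_n * sinc_taylor(t - n), a_n = measured local averages
    ```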

  13. Reconstruction of signaling networks regulating fungal morphogenesis by transcriptomics.

    PubMed

    Meyer, Vera; Arentshorst, Mark; Flitter, Simon J; Nitsche, Benjamin M; Kwon, Min Jin; Reynaga-Peña, Cristina G; Bartnicki-Garcia, Salomon; van den Hondel, Cees A M J J; Ram, Arthur F J

    2009-11-01

    Coordinated control of hyphal elongation and branching is essential for sustaining mycelial growth of filamentous fungi. In order to study the molecular machinery ensuring polarity control in the industrial fungus Aspergillus niger, we took advantage of the temperature-sensitive (ts) apical-branching ramosa-1 mutant. We show here that this strain serves as an excellent model system to study critical steps of polar growth control during mycelial development, and report for the first time a transcriptomic fingerprint of apical branching for a filamentous fungus. This fingerprint indicates that several signal transduction pathways, including TORC2, phospholipid, calcium, and cell wall integrity signaling, act in concert to control apical branching. We furthermore identified the genetic locus affected in the ramosa-1 mutant by complementation of the ts phenotype. Sequence analyses demonstrated that a single amino acid exchange in the RmsA protein is responsible for the induced apical branching of the ramosa-1 mutant. Deletion experiments showed that the corresponding rmsA gene is essential for the growth of A. niger, and complementation analyses with Saccharomyces cerevisiae provided evidence that RmsA serves as a functional equivalent of the TORC2 component Avo1p. TORC2 signaling is required for actin polarization and cell wall integrity in S. cerevisiae. Congruently, our microscopic investigations showed that polarized actin organization and chitin deposition are disturbed in the ramosa-1 mutant. The integration of the transcriptomic, genetic, and phenotypic data obtained in this study allowed us to reconstruct a model of the cellular events involved in apical branching.

  14. Feature-Enhanced, Model-Based Sparse Aperture Imaging

    DTIC Science & Technology

    2008-03-01

    Keywords: imaging, anisotropy characterization, feature-enhanced imaging, inverse problems, superresolution, anisotropy, sparse signal representation, overcomplete … number of such activities ourselves, and we provide very brief information on some of them here. We have developed a superresolution technique for … enhanced, superresolution image reconstruction. This framework provides a number of desirable features including preservation of anisotropic scatterers

  15. Efficient Processing of Acoustic Signals for High Rate Information Transmission over Sparse Underwater Channels

    DTIC Science & Technology

    2016-09-02

    … real-time implementation. To reduce computational complexity of signal processing and improve performance of data detection, receiver structures that … the fractionally-spaced channel estimators and the short feedforward equalizer filters. The receiver algorithm is applied to real data transmitted at 10 … on minimization of the mean-squared error in data symbol estimation. This tap selection method is not optimal because the input signal to the

  16. On signals faint and sparse: The ACICA algorithm for blind de-trending of exoplanetary transits with low signal-to-noise

    SciTech Connect

    Waldmann, I. P.

    2014-01-01

    Independent component analysis (ICA) has recently been shown to be a promising new path in the data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrating ICA using sparse wavelet calibrators. Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low-S/N data. Such an approach gives a unique and unprecedented insight into the underlying morphology of a data set, making this method a powerful tool for exoplanetary data de-trending and signal diagnostics.

  17. Getting a decent (but sparse) signal to the brain for users of cochlear implants.

    PubMed

    Wilson, Blake S

    2015-04-01

    The challenge of getting a decent signal to the brain for users of cochlear implants (CIs) is described. A breakthrough occurred in 1989 that later enabled most users to understand conversational speech with their restored hearing alone. Subsequent developments included stimulation in addition to that provided by a unilateral CI, either with electrical stimulation on both sides or with acoustic stimulation in combination with a unilateral CI, the latter for persons with residual hearing at low frequencies in either or both ears. Both types of adjunctive stimulation produced further improvements in performance for substantial fractions of patients. Today, the CI and related hearing prostheses are the standard of care for profoundly deaf persons, and ever-increasing indications are now allowing persons with less severe losses to benefit from these marvelous technologies. The steps in achieving the present levels of performance are traced, and some possibilities for further improvements are mentioned.

  18. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    The most common and frequently occurring neurological disorder is epilepsy, and the main method useful for its diagnosis is electroencephalogram (EEG) signal analysis. Due to the length of EEG recordings, EEG signal analysis is quite time-consuming when processed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic encephalographic signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to classify epilepsy risk levels from EEG signals, and performance measures such as sensitivity, specificity, time delay, quality value, performance index and accuracy are analyzed.
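
    An SRC post-classifier works by coding each test vector sparsely over a dictionary whose columns are training vectors, then assigning the class whose atoms reconstruct it with the smallest residual. The sketch below uses scikit-learn's orthogonal matching pursuit for the sparse coding step; the data shapes and the choice of OMP (rather than an l1 solver) are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def src_predict(D, labels, x, n_nonzero=10):
        """D: (feature_dim, n_train) training dictionary, columns = samples;
        labels: (n_train,) class of each column; x: test feature vector."""
        Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
        coef = orthogonal_mp(Dn, x, n_nonzero_coefs=n_nonzero)
        best_class, best_res = None, np.inf
        for c in np.unique(labels):
            mask = labels == c
            res = np.linalg.norm(x - Dn[:, mask] @ coef[mask])  # class residual
            if res < best_res:
                best_class, best_res = c, res
        return best_class
    ```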

  19. Reconstruction of laser ultrasonic wavefield images from reduced sparse measurements using compressed sensing aided super-resolution

    NASA Astrophysics Data System (ADS)

    Park, Byeongjin; Sohn, Hoon

    2017-02-01

    Laser ultrasonic scanning is attractive for damage detection due to its noncontact nature, sensitivity to local damage, and high spatial resolution. However, its practicality is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Recently, compressed sensing (CS) and super-resolution (SR) have been gaining popularity in the image recovery field. CS estimates unmeasured ultrasonic responses from measured responses, and SR recovers high spatial frequency information from low resolution images. Inspired by these techniques, a laser ultrasonic wavefield reconstruction technique is developed to localize and visualize damage with a reduced number of ultrasonic measurements. First, a low spatial resolution ultrasonic wavefield image for a given inspection region is reconstructed from a reduced number of ultrasonic measurements using CS. Here, the ultrasonic waves are generated using a pulsed laser and measured at a fixed sensing point using a laser Doppler vibrometer (LDV). Then, a high spatial resolution ultrasonic wavefield image is recovered from the reconstructed low spatial resolution image using SR. The number of measurement points required for ultrasonic wavefield imaging is reduced by over 90%. The performance of the proposed technique is validated by an experiment performed on a cracked aluminum plate.

  20. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    SciTech Connect

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  1. Reconstruction and signal propagation analysis of the Syk signaling network in breast cancer cells.

    PubMed

    Naldi, Aurélien; Larive, Romain M; Czerwinska, Urszula; Urbach, Serge; Montcourrier, Philippe; Roy, Christian; Solassol, Jérôme; Freiss, Gilles; Coopman, Peter J; Radulescu, Ovidiu

    2017-03-01

    The ability to build in-depth cell signaling networks from vast experimental data is a key objective of computational biology. The spleen tyrosine kinase (Syk) protein, a well-characterized key player in immune cell signaling, was surprisingly first shown by our group to exhibit an onco-suppressive function in mammary epithelial cells, a finding corroborated by many other studies, but the molecular mechanisms of this function remain largely unsolved. Based on existing proteomic data, we report here the generation of an interaction-based network of signaling pathways controlled by Syk in breast cancer cells. Pathway enrichment of the Syk targets previously identified by quantitative phospho-proteomics indicated that Syk is engaged in cell adhesion, motility, growth and death. Using the components and interactions of these pathways, we bootstrapped the reconstruction of a comprehensive network covering Syk signaling in breast cancer cells. To generate in silico hypotheses on Syk signaling propagation, we developed a method allowing us to rank paths between Syk and its targets. We first annotated the network according to experimental datasets. We then combined shortest path computation with random walk processes to estimate the importance of individual interactions and selected biologically relevant pathways in the network. Molecular and cell biology experiments allowed us to distinguish candidate mechanisms that underlie the impact of Syk on the regulation of cortactin and ezrin, both involved in actin-mediated cell adhesion and motility. The Syk network was further completed with the results of our biological validation experiments. The resulting Syk signaling sub-networks can be explored via an online visualization platform.
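
    The path-ranking step combines two standard graph operations that are easy to reproduce with NetworkX: weighted shortest paths from Syk to each target, and a random-walk importance score over nodes. The toy below uses invented intermediate nodes and PageRank as a generic proxy for the paper's random-walk process; it illustrates the machinery, not the authors' scoring.

    ```python
    import networkx as nx

    # toy interaction network; intermediate nodes "A" and "B" are invented
    G = nx.Graph()
    G.add_weighted_edges_from([("Syk", "A", 1.0), ("A", "cortactin", 1.0),
                               ("Syk", "B", 2.0), ("B", "ezrin", 1.0),
                               ("A", "B", 1.5)])

    # weighted shortest paths from Syk to each annotated target
    paths = {t: nx.shortest_path(G, "Syk", t, weight="weight")
             for t in ("cortactin", "ezrin")}

    # random-walk importance of nodes (PageRank as a generic stand-in)
    walk_score = nx.pagerank(G, alpha=0.85)
    ranked = sorted(walk_score, key=walk_score.get, reverse=True)
    ```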

  2. Reconstruction and signal propagation analysis of the Syk signaling network in breast cancer cells

    PubMed Central

    Urbach, Serge; Montcourrier, Philippe; Roy, Christian; Solassol, Jérôme; Freiss, Gilles; Radulescu, Ovidiu

    2017-01-01

    The ability to build in-depth cell signaling networks from vast experimental data is a key objective of computational biology. The spleen tyrosine kinase (Syk) protein, a well-characterized key player in immune cell signaling, was surprisingly first shown by our group to exhibit an onco-suppressive function in mammary epithelial cells, a finding corroborated by many other studies, but the molecular mechanisms of this function remain largely unsolved. Based on existing proteomic data, we report here the generation of an interaction-based network of signaling pathways controlled by Syk in breast cancer cells. Pathway enrichment of the Syk targets previously identified by quantitative phospho-proteomics indicated that Syk is engaged in cell adhesion, motility, growth and death. Using the components and interactions of these pathways, we bootstrapped the reconstruction of a comprehensive network covering Syk signaling in breast cancer cells. To generate in silico hypotheses on Syk signaling propagation, we developed a method allowing us to rank paths between Syk and its targets. We first annotated the network according to experimental datasets. We then combined shortest path computation with random walk processes to estimate the importance of individual interactions and selected biologically relevant pathways in the network. Molecular and cell biology experiments allowed us to distinguish candidate mechanisms that underlie the impact of Syk on the regulation of cortactin and ezrin, both involved in actin-mediated cell adhesion and motility. The Syk network was further completed with the results of our biological validation experiments. The resulting Syk signaling sub-networks can be explored via an online visualization platform. PMID:28306714

  3. A Max-Margin Perspective on Sparse Representation-Based Classification

    DTIC Science & Technology

    2013-11-30

    Sparse Representation-based Classification (SRC) is a powerful tool in distinguishing signal … a reconstructive perspective, which neither offers any guarantee on its classification performance nor pro…

  4. Preliminary Study of Image Reconstruction Algorithm on a Digital Signal Processor

    DTIC Science & Technology

    2014-03-01

    Specifically, digital signal processor (DSP) architecture is evaluated for computing an image reconstruction algorithm. A Freescale DSP, drawing power in the … range of 10 W, is considered for analysis. Software tools and programming techniques used during development are documented for the DSP platform … Keywords: digital signal processor, signal image processing, coprocessor, back-projection, DSP optimization

  5. Semi-supervised Stacked Label Consistent Autoencoder for Reconstruction and Analysis of Biomedical Signals.

    PubMed

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2016-11-22

    An autoencoder-based framework that simultaneously reconstructs and classifies biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems. This is the first work to propose a combined framework that addresses the issue in a holistic fashion.
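
    The core idea, one latent code serving both a decoder and a classifier, can be sketched in a few lines of PyTorch. This is a generic joint autoencoder-classifier, not the paper's stacked label-consistent architecture; the layer sizes, the loss weighting `alpha`, and the plain MSE + cross-entropy objective are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointAE(nn.Module):
        """Encoder feeds both a decoder (reconstruction) and a
        classifier head, so one model is trained for both tasks."""
        def __init__(self, n_in=256, n_code=32, n_classes=5):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_code))
            self.dec = nn.Sequential(nn.Linear(n_code, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))
            self.head = nn.Linear(n_code, n_classes)

        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), self.head(z)

    def joint_loss(x, y, x_hat, logits, alpha=0.5):
        # reconstruction + alpha * classification, optimized together
        return F.mse_loss(x_hat, x) + alpha * F.cross_entropy(logits, y)
    ```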

  6. Accelerated nonlinear multichannel ultrasonic tomographic imaging using target sparseness.

    PubMed

    Chengdong Dong; Yuanwei Jin; Enyue Lu

    2014-03-01

    This paper presents an accelerated iterative Landweber method for nonlinear ultrasonic tomographic imaging in a multiple-input multiple-output (MIMO) configuration under a sparsity constraint on the image. The proposed method introduces emerging MIMO signal processing techniques and target-sparseness constraints into the traditional computational imaging field, thus significantly improving the speed of image reconstruction compared with the conventional imaging method while producing high-quality images. Using numerical examples, we demonstrate that incorporating prior knowledge about the imaging field, such as target sparseness, significantly accelerates the convergence of the iterative imaging method, which provides considerable benefits to real-time tomographic imaging applications.

  7. Baseline Signal Reconstruction for Temperature Compensation in Lamb Wave-Based Damage Detection.

    PubMed

    Liu, Guoqiang; Xiao, Yingchun; Zhang, Hua; Ren, Gexue

    2016-08-11

    Temperature variations have significant effects on the propagation of Lamb waves and can therefore severely limit Lamb wave-based damage detection. In order to mitigate the temperature effect, a temperature compensation method based on baseline signal reconstruction is developed for Lamb wave-based damage detection. The method reconstructs a baseline signal at the temperature of the current signal; in other words, it compensates the baseline signal to the temperature of the current signal. The Hilbert transform is used to compensate the phase of the baseline signal, and orthogonal matching pursuit (OMP) is used to compensate its amplitude. Experiments were conducted on two composite panels to validate the effectiveness of the proposed method. Results show that the proposed method works effectively for temperature intervals of at least 18 °C centered on the baseline signal temperature, and can be applied to actual damage detection.
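
    One plausible reading of the two compensation steps, hedged because the abstract gives only their outline, is sketched below: the phase step rebuilds the baseline from its envelope and the current signal's instantaneous phase via the analytic signal, and the amplitude step fits the remaining mismatch with a few OMP-selected atoms from a dictionary of baseline variants. The dictionary contents and `n_nonzero` are assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def compensate_baseline(baseline, current, atom_dict, n_nonzero=3):
        """atom_dict: (n_samples, n_atoms) matrix of baseline variants
        (e.g., time-shifted/stretched copies); returns the baseline
        compensated toward the current signal's temperature."""
        env = np.abs(hilbert(baseline))                 # baseline envelope
        phase = np.angle(hilbert(current))              # current phase
        phase_comp = np.real(env * np.exp(1j * phase))  # phase-compensated

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
        omp.fit(atom_dict, current - phase_comp)        # amplitude residual
        return phase_comp + omp.predict(atom_dict)
    ```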

  8. Baseline Signal Reconstruction for Temperature Compensation in Lamb Wave-Based Damage Detection

    PubMed Central

    Liu, Guoqiang; Xiao, Yingchun; Zhang, Hua; Ren, Gexue

    2016-01-01

    Temperature variations have significant effects on the propagation of Lamb waves and can therefore severely limit Lamb wave-based damage detection. In order to mitigate the temperature effect, a temperature compensation method based on baseline signal reconstruction is developed for Lamb wave-based damage detection. The method reconstructs a baseline signal at the temperature of the current signal; in other words, it compensates the baseline signal to the temperature of the current signal. The Hilbert transform is used to compensate the phase of the baseline signal, and orthogonal matching pursuit (OMP) is used to compensate its amplitude. Experiments were conducted on two composite panels to validate the effectiveness of the proposed method. Results show that the proposed method works effectively for temperature intervals of at least 18 °C centered on the baseline signal temperature, and can be applied to actual damage detection. PMID:27529245

  9. Reconstruction of stress corrosion cracks using signals of pulsed eddy current testing

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xie, Shejuan; Chen, Zhenmao; Li, Yong; Wang, Xiaowei; Takagi, Toshiyuki

    2013-06-01

    A scheme to apply signals of pulsed eddy current testing (PECT) to reconstruct a deep stress corrosion crack (SCC) is proposed on the basis of a multi-layer and multi-frequency reconstruction strategy. First, a numerical method is introduced to extract conventional eddy current testing (ECT) signals of different frequencies from the PECT responses at different scanning points, which are necessary for multi-frequency ECT inversion. Second, the conventional fast forward solver for ECT signal simulation is upgraded to calculate the single-frequency pickup signal of a magnetic field by introducing a strategy that employs a tiny search coil. Using the multiple-frequency ECT signals and the upgraded fast signal simulator, we reconstructed the shape profiles and conductivity of an SCC at different depths layer-by-layer with a hybrid inversion scheme of the conjugate gradient and particle swarm optimisation. Several modelled SCCs of rectangular or stepwise shape in an SUS304 plate are reconstructed from simulated PECT signals with artificial noise. The reconstruction results show better precision in crack depth than the conventional ECT inversion method, which demonstrates the validity and efficiency of the proposed PECT inversion scheme.

  10. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of the vehicle can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, which yields the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, alternative to numerical TR, to reconstruct the sound source signal in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, theoretical and experimental comparisons between AP-TR and time-domain beamforming in reconstructing the sound source signal are also discussed.

  11. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-05-07

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging.

  12. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  13. Signal Reconstruction from Nonuniformly Spaced Samples Using Evolutionary Slepian Transform-Based POCS

    NASA Astrophysics Data System (ADS)

    Oh, Jinsung; Senay, Seda; Chaparro, LuisF

    2010-12-01

    We consider the reconstruction of signals from nonuniformly spaced samples using a projection onto convex sets (POCS) implemented with the evolutionary time-frequency transform. Signals of practical interest have finite time support and are nearly band-limited, and as such can be better represented by Slepian functions than by sinc functions. The evolutionary spectral theory provides a time-frequency representation of nonstationary signals, and for deterministic signals the kernel of the evolutionary representation can be derived from a Slepian projection of the signal. The representation of low-pass and band-pass signals is thus done efficiently by means of the Slepian functions. Assuming the given nonuniformly spaced samples come from a signal satisfying the finite time support and essential band-limitedness conditions with a known center frequency, imposing time and frequency limitations in the evolutionary transformation permits us to reconstruct the signal iteratively. Restricting the signal to a known finite time and frequency support, a closed convex set, the projection generated by the time-frequency transformation converges to a close approximation of the original signal. Simulation results illustrate the evolutionary Slepian-based transform in the representation and reconstruction of signals from irregularly spaced samples and contiguous lost samples.
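
    The POCS skeleton behind such methods alternates projections onto convex sets: consistency with the known samples and limitation to the known frequency support. The NumPy sketch below uses a plain FFT mask for the band-limiting projection; the paper's refinement is to project onto a Slepian subspace instead, which better matches finite-support, essentially band-limited signals.

    ```python
    import numpy as np

    def pocs_reconstruct(sample_idx, sample_vals, n, band_bins, n_iter=200):
        """Recover a length-n signal from nonuniform samples by
        alternating (1) agreement with the known samples and
        (2) band-limiting via an FFT mask of band_bins bins."""
        x = np.zeros(n)
        known = np.zeros(n, dtype=bool)
        known[sample_idx] = True
        for _ in range(n_iter):
            x[known] = sample_vals      # project onto sample-consistent set
            X = np.fft.rfft(x)
            X[band_bins:] = 0.0         # project onto band-limited set
            x = np.fft.irfft(X, n)
        return x
    ```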

  14. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing.

    PubMed

    Yoshida, Ryo; West, Mike

    2010-05-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loading matrices that, in addition to the sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate "artificial" posterior distributions that, at the limiting temperature in the annealing schedule, define the required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit images and cancer gene expression data.

  15. New signal processing technique for density profile reconstruction using reflectometry

    SciTech Connect

    Clairet, F.; Bottereau, C.; Ricaud, B.; Briolle, F.; Heuraux, S.

    2011-08-15

    Reflectometry profile measurement requires an accurate determination of the plasma-reflected signal. Along with a good resolution and a high signal-to-noise ratio of the phase measurement, adequate data analysis is required. A new data processing method based on a time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves the isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10¹⁶ m⁻³. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.

  16. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  17. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  18. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  19. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals.

    PubMed

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B; Cook, Mark J; Wolters, Carsten H; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery.

  20. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    PubMed Central

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  1. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT

    PubMed Central

    Choudhary, Kumari Sonal; Rohatgi, Neha; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-01-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), is however a hallmark of EMT and metastasis. In this study, we report an investigation into EGFR signalling and metabolic crosstalk in EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signaling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. Signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling-metabolism crosstalk in three additional breast epithelial cell lines highlighted variability among in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend. PMID:27253373

  2. Free-breathing Sparse Sampling Cine MR Imaging with Iterative Reconstruction for the Assessment of Left Ventricular Function and Mass at 3.0 T.

    PubMed

    Sudarski, Sonja; Henzler, Thomas; Haubenreisser, Holger; Dösch, Christina; Zenge, Michael O; Schmidt, Michaela; Nadar, Mariappan S; Borggrefe, Martin; Schoenberg, Stefan O; Papavassiliu, Theano

    2017-01-01

    Purpose: To prospectively evaluate the accuracy of left ventricle (LV) analysis with a two-dimensional real-time cine true fast imaging with steady-state precession (trueFISP) magnetic resonance (MR) imaging sequence featuring sparse data sampling with iterative reconstruction (SSIR) performed with and without breath-hold (BH) commands at 3.0 T. Materials and Methods: Ten control subjects (mean age, 35 years; range, 25-56 years) and 60 patients scheduled to undergo a routine cardiac examination that included LV analysis (mean age, 58 years; range, 20-86 years) underwent a fully sampled segmented multiple-BH cine sequence (standard of reference) and a prototype undersampled SSIR sequence performed during a single BH and during free breathing (non-BH imaging). Quantitative analysis of LV function and mass was performed. Linear regression, Bland-Altman analysis, and paired t testing were performed. Results: Similar to the results in control subjects, analysis of the 60 patients showed excellent correlation with the standard of reference for single-BH SSIR (r = 0.93-0.99) and non-BH SSIR (r = 0.92-0.98) for LV ejection fraction (EF), volume, and mass (P < .0001 for all). Irrespective of breath holding, LV end-diastolic mass was overestimated with SSIR (standard of reference: 163.9 g ± 58.9; single-BH SSIR: 178.5 g ± 62.0 [P < .0001]; non-BH SSIR: 175.3 g ± 63.7 [P < .0001]); the other parameters were not significantly different (EF: 49.3% ± 11.9 with standard of reference, 48.8% ± 11.8 with single-BH SSIR, 48.8% ± 11 with non-BH SSIR; P = .03 and P = .12, respectively). Bland-Altman analysis showed similar measurement errors for single-BH SSIR and non-BH SSIR when compared with standard of reference measurements for EF, volume, and mass. Conclusion: Assessment of LV function with SSIR at 3.0 T is noninferior to the standard of reference irrespective of BH commands. LV mass, however, is overestimated with SSIR. (©) RSNA, 2016. Online supplemental material is available.

  3. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction in applications such as burn depth imaging, depth profiling of vascular lesions in skin, and the detection of single cancer cells in blood samples.

  4. Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2007-02-01

    Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used level-independent universal thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software, and reconstruction using the denoised signals improved image quality by 21%, as measured by a relative 2-norm difference scheme.
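
    The thresholding rule both photoacoustic records above rely on is simple to state: estimate the noise level from the finest-scale wavelet coefficients and shrink all detail coefficients by the universal threshold σ√(2 ln n). A minimal sketch follows, with a plain DWT standing in for the MODWT; the wavelet choice, decomposition level, and test pulse are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))      # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 1e-3)                  # pulse-like test signal
noisy = clean + 0.1 * rng.standard_normal(t.size)
gain = np.linalg.norm(noisy - clean) / np.linalg.norm(wavelet_denoise(noisy) - clean)
print("error reduction factor:", round(gain, 2))
```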

  5. Robust reconstruction of a signal from its unthresholded recurrence plot subject to disturbances

    NASA Astrophysics Data System (ADS)

    Sipers, Aloys; Borm, Paul; Peeters, Ralf

    2017-02-01

    To make valid inferences from recurrence plots for time-delay embedded signals, two underlying key questions are: (1) to what extent does an unthresholded recurrence plot (URP) carry the same information as the signal that generated it, and (2) how does the information change when the URP gets distorted. We studied the first question in our earlier work [1], where it was shown that the URP admits the reconstruction of the underlying signal (up to its mean and a choice of sign) if and only if an associated graph is connected. Here we refine this result and give an explicit condition in terms of the embedding parameters and the discrete Fourier spectrum of the URP. We also develop a method for the reconstruction of the underlying signal which overcomes several drawbacks of earlier approaches. To address the second question we investigate the robustness of the proposed reconstruction method under disturbances. We carry out two simulation experiments, characterized by a broad-band and a narrow-band spectrum respectively. For each experiment we choose a noise level and two different pairs of embedding parameters. The conventional binary recurrence plot (RP) is obtained from the URP by thresholding and zero-one conversion, which can be viewed as severe distortion acting on the URP. Typically the reconstruction of the underlying signal from an RP is expected to be rather inaccurate. However, by introducing the concept of a multi-level recurrence plot (MRP) we propose to bridge the information gap between the URP and the RP, while still achieving a high data compression rate. We demonstrate the working of the proposed reconstruction procedure on MRPs, indicating that MRPs with just a few discretization levels can usually capture signal properties and morphologies more accurately than conventional RPs.
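
    For concreteness, a minimal sketch of the objects discussed above: the URP as the matrix of pairwise distances between time-delay embedded states, a quantized multi-level version standing in for the proposed MRP, and a thresholded binary RP. The embedding parameters, quantile levels, and test signal are illustrative assumptions.

```python
import numpy as np

def urp(x, dim=2, delay=5):
    """Unthresholded recurrence plot: pairwise distances between
    time-delay embedded states."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    diff = emb[:, None, :] - emb[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

x = np.sin(np.linspace(0.0, 8.0 * np.pi, 300))
U = urp(x)
mrp = np.digitize(U, np.quantile(U, [0.25, 0.5, 0.75]))  # 4-level MRP-style plot
rp = (U < np.quantile(U, 0.1)).astype(int)               # conventional binary RP
print(U.shape, mrp.max(), rp.mean().round(2))
```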

  6. On the usage of linear regression models to reconstruct limb kinematics from low frequency EEG signals.

    PubMed

    Antelis, Javier M; Montesano, Luis; Ramos-Murguialday, Ander; Birbaumer, Niels; Minguez, Javier

    2013-01-01

    Several works have reported on the reconstruction of 2D/3D limb kinematics from low-frequency EEG signals using linear regression models, based on positive correlation values between the recorded and the reconstructed trajectories. This paper describes the mathematical properties of the linear model and of the correlation evaluation metric that may lead to a misinterpretation of the results of this type of decoder. First, the use of a linear regression model to adjust the two temporal signals (EEG and velocity profiles) implies that the relevant component of the signal used for decoding (EEG) has to be in the same frequency range as the signal to be decoded (velocity profiles). Second, the use of a correlation to evaluate the fitting of two trajectories can lead to overly optimistic results, as this metric is invariant to scale. Also, the correlation has a non-linear nature that leads to higher values for sine/cosine-like signals at low frequencies. Analysis of these properties on the reconstruction results was carried out through an experiment performed in line with previous studies, where healthy participants executed predefined reaching movements of the hand in 3D space. While the correlations of limb velocity profiles reconstructed from low-frequency EEG were comparable to studies in this domain, a systematic statistical analysis revealed that these results were not above the chance level. The empirical chance level was estimated using random assignments of recorded velocity profiles and EEG signals, as well as combinations of randomly generated synthetic EEG with recorded velocity profiles and recorded EEG with randomly generated synthetic velocity profiles. The analysis shows that the positive correlation results in this experiment cannot be used as an indicator of successful trajectory reconstruction based on a neural correlate. Several directions are herein discussed to address the misinterpretation of results, as well as the implications for previous studies.

  7. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    NASA Astrophysics Data System (ADS)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high-pressure water jet on targets of different materials produces different mixtures of reflected sound. In order to reconstruct the distribution of reflected sound signals along the linear detecting line accurately and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed in which the environmental noise was simulated by band-limited white noise and the reflected sound signal by a pulse signal. The attenuation produced by transmission over different distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that separation of the environmental noise and reconstruction of the sound distribution along the detecting line can be realized effectively.
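
    A minimal sketch of a simulation in the spirit described above, with synthetic stand-ins for the pulse-like reflected sound and the band-limited environmental noise; the mixing weights play the role of the distance-dependent attenuation and are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 2000
t = np.arange(n)
pulse = ((t % 400) < 20).astype(float)                  # simulated reflected pulse train
noise = np.convolve(rng.standard_normal(n), np.ones(25) / 25, mode="same")  # band-limited noise
S = np.c_[pulse, noise]                                 # true source signals
A = np.array([[1.0, 0.6],                               # mixing weights standing in for
              [0.4, 1.0]])                              # distance-dependent attenuation
X = S @ A.T                                             # mixtures at the two microphones
S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
print(S_hat.shape)  # (2000, 2): recovered components, up to order, sign and scale
```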

  8. Sparse representation of signals: from astrophysics to real-time data analysis for fusion plasmas and system optimization analysis for ITER and TCV

    NASA Astrophysics Data System (ADS)

    Testa, D.; Carfantan, H.; Albergante, M.; Blanchard, P.; Bourguignon, S.; Fasoli, A.; Goodyear, A.; Klein, A.; Lister, J. B.; Panis, T.; JET Contributors

    2016-12-01

    Efficient, real-time and automated data analysis is one of the key elements for achieving scientific success in complex engineering and physical systems, two examples of which include the JET and ITER tokamaks. One problem which is common to these fields is the determination of the pulsation modes from an irregularly sampled time series. To this end, there are a wealth of signal processing techniques that are being applied to post-pulse and real-time data analysis in such complex systems. Here, we wish to present a review of the applications of a method based on the sparse representation of signals, using examples of the synergies that can be exploited when combining ideas and methods from very different fields, such as astronomy, astrophysics and thermonuclear fusion plasmas. Examples of this work in astronomy and astrophysics are the analysis of pulsation modes in various classes of stars and the orbit determination software of the Pioneer spacecraft. Two examples of this work in thermonuclear fusion plasmas include the detection of magneto-hydrodynamic instabilities, which is now performed routinely in JET in real-time on a sub-millisecond time scale, and the studies leading to the optimization of the magnetic diagnostic system in ITER and TCV. These questions have been solved by formulating them as inverse problems, despite the fact that these applicative frameworks are extremely different from the classical use of sparse representations, from both the theoretical and computational point of view. The requirements, prospects and ideas for the signal processing and real-time data analysis applications of this method to the routine operation of ITER will also be discussed. Finally, a very recent development has been the attempt to apply this method to the deconvolution of the measurement of electric potential performed during a ground-based survey of a proto-Villanovian necropolis in central Italy.

  9. Signal quality quantification and waveform reconstruction of arterial blood pressure recordings.

    PubMed

    Fanelli, A; Heldt, T

    2014-01-01

    Arterial blood pressure (ABP) is an important vital sign of the cardiovascular system. As with other physiological signals, its measurement can be corrupted by different sources of noise, interference, and artifact. Here, we present an algorithm for the quantification of signal quality and for the reconstruction of the ABP waveform in noise-corrupted segments of the measurement. The algorithm quantifies the quality of the ABP signal on a beat-by-beat basis by computing the normalized mean of successive differences of the ABP amplitude over each beat. In segments of poor signal quality, the ABP wavelets are then reconstructed on the basis of the expected cycle duration and envelope information derived from neighboring ABP wavelet segments. The algorithm was tested on two datasets of ABP waveform signals containing both invasive radial artery ABP and noninvasive ABP waveforms. Our results show that the approach is efficient in identifying the noisy segments (accuracy, sensitivity and specificity over 95%) and reliable in reconstructing beats that were artificially corrupted.
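
    A minimal sketch of the beat-wise quality index described above. The normalization by each beat's amplitude range and the fixed toy beat boundaries are assumptions made here for illustration; in practice the boundaries would come from a beat detector.

```python
import numpy as np

def beat_quality(abp, beat_onsets):
    """Per-beat index: mean absolute successive difference, normalized here
    by the beat's amplitude range (the normalization is an assumption)."""
    scores = []
    for a, b in zip(beat_onsets[:-1], beat_onsets[1:]):
        beat = abp[a:b]
        msd = np.mean(np.abs(np.diff(beat)))
        scores.append(msd / (np.ptp(beat) + 1e-12))
    return np.array(scores)          # large values flag noisy beats

abp = 80.0 + 20.0 * np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))  # toy waveform
abp[900:1100] += np.random.default_rng(3).standard_normal(200)    # corrupt one segment
print(beat_quality(abp, np.arange(0, 2001, 200)).round(3))        # corrupted beats score high
```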

  10. Reconstruction of fetal vector electrocardiogram from maternal abdominal signals under fetus body rotations.

    PubMed

    Nabeshima, Yuji; Kimura, Yoshitaka; Ito, Takuro; Ohwada, Kazunari; Karashima, Akihiro; Katayama, Norihiro; Nakao, Mitsuyuki

    2013-01-01

    Fetal electrocardiogram (fECG) and its vector form (fVECG) can provide significant clinical information concerning the physiological condition of a fetus. So far, various independent component analysis (ICA)-based methods for extracting fECG from maternal abdominal signals have been proposed. Because full extraction of component waves such as P, Q, R, S, and T is difficult under noisy and nonstationary conditions, the fVECG is even harder to reconstruct, since different projections of the fetal heart vector are required. In order to reconstruct the fVECG, we propose a novel method for synthesizing different projections of the heart vector that makes good use of fetal movement. This method consists of ICA, estimation of the rotation angles of the fetus, and synthesis of projections of the heart vector. Through applications to synthetic and actual data, our method is shown to precisely estimate the rotation angle of the fetus and to successfully reconstruct the fVECG.

  11. Reconstruction of field excitatory post-synaptic potentials in the dentate gyrus from amperometric biosensor signals.

    PubMed

    Viggiano, Alessandro; Marinesco, Stéphane; Pain, Frédéric; Meiller, Anne; Gurden, Hirac

    2012-04-30

    A new feasible and reproducible method to reconstruct local field potentials from amperometric biosensor signals is presented. It is based on the least-square fit of the current response of the biosensor electrode to a voltage step by the use of two time constants. After determination of the electrode impedance, Fast Fourier Transform (FFT) and Inverse FFT are performed to convert the recorded amperometric signals into voltage and trace the local field potentials using a resistor-capacitor circuit-based model. We applied this method to reconstruct field evoked potentials from currents recorded by a lactate biosensor in the rat dentate gyrus after stimulation of the perforant pathway in vivo. Initial slope of the reconstructed field excitatory postsynaptic potentials was used in order to demonstrate long term potentiation induced by high frequency stimulation of the perforant path. Our results show that reconstructing evoked potentials from amperometric recordings is a reliable method to obtain in vivo electrophysiological and amperometric information simultaneously from the same electrode in order to understand how chemical compounds vary with and modulate the dynamics of brain activity.
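
    The conversion step is easy to make concrete: model the electrode as a resistor-capacitor network, evaluate its impedance on the FFT frequency grid, and multiply the transformed current by that impedance before inverting. The circuit topology and R/C values below are illustrative assumptions standing in for the paper's two-time-constant fit.

```python
import numpy as np

fs, n = 10_000.0, 4096
f = np.fft.rfftfreq(n, d=1.0 / fs)
R1, R2, C = 1e6, 5e6, 1e-9                           # assumed circuit parameters
Z = R1 + R2 / (1.0 + 1j * 2.0 * np.pi * f * R2 * C)  # R1 in series with (R2 || C)

rng = np.random.default_rng(4)
current = 1e-9 * rng.standard_normal(n)              # recorded amperometric trace (A)
voltage = np.fft.irfft(np.fft.rfft(current) * Z, n)  # Ohm's law applied per frequency bin
print(voltage[:3])
```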

  12. New direction of arrival estimation of coherent signals based on reconstructing matrix under unknown mutual coupling

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Li, Weixing; Zhang, Yue; Chen, Zengping

    2016-01-01

    A direction of arrival (DOA) estimation algorithm for coherent signals in the presence of unknown mutual coupling is proposed. A group of auxiliary sensors in a uniform linear array is used to eliminate the effects of mutual coupling on the orthogonality of the subspaces. Then, a Toeplitz matrix, whose rank is independent of the coherency between impinging signals, is reconstructed to eliminate the rank loss of the spatial covariance matrix. Therefore, the signal and noise subspaces can be estimated properly. This method can accurately estimate the DOAs of coherent signals under unknown mutual coupling without any iteration or calibration sources. It has a low computational burden and high accuracy. Simulation results demonstrate the effectiveness of the algorithm.
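
    A minimal sketch of the rank-restoration idea, using diagonal averaging into a Hermitian Toeplitz matrix; the paper's actual construction (auxiliary sensors, mutual-coupling elimination) is richer, and the array size and sources below are illustrative assumptions.

```python
import numpy as np

def toeplitzify(R):
    """Average the diagonals of R into a Hermitian Toeplitz matrix."""
    n = R.shape[0]
    r = np.array([np.diagonal(R, offset=k).mean() for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = r[j - i] if j >= i else np.conj(r[i - j])
    return T

n = 8
steer = lambda u: np.exp(-1j * np.pi * np.arange(n) * u)  # ULA steering vector, u = sin(theta)
s = steer(0.5) + steer(-0.3)            # two fully coherent sources
R = np.outer(s, s.conj())               # rank-1 spatial covariance (rank loss)
T = toeplitzify(R)
print(np.round(np.sort(np.linalg.eigvalsh(T))[::-1][:4], 2))  # two dominant eigenvalues reappear
```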

  13. Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries

    PubMed Central

    Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.

    2015-01-01

    Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slow varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures, that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494
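
    A minimal sketch of greedy sparse decomposition over an SCR-shaped dictionary, in the spirit described above; the biexponential atom shape, its time constants, and the onset grid are illustrative assumptions rather than the authors' knowledge-driven dictionary.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 600
t = np.arange(n) / 10.0                                    # 60 s at 10 Hz

def scr_atom(onset, tau_r=0.75, tau_d=2.0):
    """Biexponential SCR-shaped atom (illustrative shape), unit-normalized."""
    a = np.clip(t - onset, 0.0, None)
    atom = (1.0 - np.exp(-a / tau_r)) * np.exp(-a / tau_d)
    return atom / (np.linalg.norm(atom) + 1e-12)

D = np.column_stack([scr_atom(o) for o in np.arange(0.0, 55.0, 0.5)])  # onset dictionary
eda = 2.0 * scr_atom(12.0) + 1.2 * scr_atom(30.0)          # toy trace with two SCRs
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False).fit(D, eda)
print(np.nonzero(omp.coef_)[0])                            # indices of the selected atoms
```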

  14. Fast multi-dimensional NMR acquisition and processing using the sparse FFT.

    PubMed

    Hassanieh, Haitham; Mayzel, Maxim; Shi, Lixin; Katabi, Dina; Orekhov, Vladislav Yu

    2015-09-01

    Increasing the dimensionality of NMR experiments strongly enhances the spectral resolution and provides invaluable direct information about atomic interactions. However, the price tag is high: long measurement times and heavy requirements on the computation power and data storage. We introduce sparse fast Fourier transform as a new method of NMR signal collection and processing, which is capable of reconstructing high quality spectra of large size and dimensionality with short measurement times, faster computations than the fast Fourier transform, and minimal storage for processing and handling of sparse spectra. The new algorithm is described and demonstrated for a 4D BEST-HNCOCA spectrum.

  15. Reconstruction of signal in plastic scintillator of PET using Tikhonov regularization.

    PubMed

    Raczynski, Lech

    2015-08-01

    The new concept of a Time of Flight Positron Emission Tomography (TOF-PET) detection system, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The Jagiellonian-PET (J-PET) detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from the Tikhonov regularization method, is presented. From Bayes theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long plastic scintillator strip. It is shown that using the recovered waveform of the signals, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction from 1.05 cm to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution calculated under the same conditions is equal to 0.93 cm.
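
    The regularized recovery step can be sketched as a ridge-type least-squares problem: fit a waveform to the four threshold samples while penalizing roughness. The sample times, threshold values, smoothness operator, and λ below are illustrative assumptions, not the J-PET front-end specifics.

```python
import numpy as np

n = 100
sample_idx = np.array([20, 30, 60, 80])                  # assumed threshold-crossing samples
y = np.array([0.2, 0.9, 0.7, 0.1])                       # voltages at the four thresholds

A = np.zeros((4, n)); A[np.arange(4), sample_idx] = 1.0  # sampling operator
D = np.diff(np.eye(n), n=2, axis=0)                      # second-difference (roughness) operator
lam = 0.1                                                # illustrative regularization weight
x = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)    # Tikhonov-regularized fit
print(np.round(x[sample_idx], 2))                        # approximately reproduces y
```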

  16. Robust signal reconstruction for condition monitoring of industrial components via a modified Auto Associative Kernel Regression method

    NASA Astrophysics Data System (ADS)

    Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico

    2015-08-01

    In this work, we propose a modification of the traditional Auto Associative Kernel Regression (AAKR) method which enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification is based on the definition of a new procedure for computing the similarity between the present measurements and the historical patterns used to perform the signal reconstructions. The underlying conjecture is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal-condition data collected in an industrial plant for energy production. Its performance has been verified considering synthetic and real malfunctions. The obtained results show an improvement in the early detection of abnormal conditions and the correct identification of the signals responsible for triggering the detection.
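
    For reference, a minimal sketch of the traditional (unmodified) AAKR reconstruction that the paper builds on: a test observation is reconstructed as a kernel-weighted average of historical normal-condition patterns. The Gaussian bandwidth and toy data are assumptions; the proposed modification additionally redefines the similarity computation itself.

```python
import numpy as np

def aakr(history, x, h=1.0):
    d2 = np.sum((history - x) ** 2, axis=1)        # squared distances to memory patterns
    w = np.exp(-d2 / (2.0 * h ** 2))               # Gaussian kernel weights
    return w @ history / (w.sum() + 1e-12)         # expected normal-condition values

rng = np.random.default_rng(5)
history = rng.standard_normal((500, 3)) * [1.0, 0.5, 2.0] + [10.0, 5.0, 0.0]
x = np.array([14.2, 5.1, 0.1])                     # first signal spiked (abnormal)
print(np.round(aakr(history, x), 2))               # pulled back toward normal conditions
```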

  17. Sparse cortical source localization using spatio-temporal atoms.

    PubMed

    Korats, Gundars; Ranta, Radu; Le Cam, Steven; Louis-Dorr, Valérie

    2015-01-01

    This paper addresses the problem of sparse localization of cortical sources from scalp EEG recordings. Localization algorithms use a propagation model under spatial and/or temporal constraints, but their performance depends highly on the data signal-to-noise ratio (SNR). In this work we propose a dictionary-based sparse localization method which uses a data-driven spatio-temporal dictionary to reconstruct the measurements using the Single Best Replacement (SBR) and Continuation Single Best Replacement (CSBR) algorithms. We tested and compared our method with the well-known MUSIC and RAP-MUSIC algorithms on simulated realistic data. Tests were carried out for different noise levels. The results show that our method has a strong advantage over MUSIC-type methods in the case of synchronized sources.

  18. Reconstruction of interrupted SAR imagery for persistent surveillance change detection

    NASA Astrophysics Data System (ADS)

    Stojanovic, Ivana; Karl, W. C.; Novak, Les

    2012-05-01

    In this paper we apply a sparse signal recovery technique for synthetic aperture radar (SAR) image formation from interrupted phase history data. Timeline constraints imposed on multi-function modern radars result in interrupted SAR data collection, which in turn leads to corrupted imagery that degrades reliable change detection. In this paper we extrapolate the missing data by applying the basis pursuit denoising algorithm (BPDN) in the image formation step, effectively, modeling the SAR scene as sparse. We investigate the effects of regular and random interruptions on the SAR point spread function (PSF), as well as on the quality of both coherent (CCD) and non-coherent (NCCD) change detection. We contrast the sparse reconstruction to the matched filter (MF) method, implemented via Fourier processing with missing data set to zero. To illustrate the capabilities of the gap-filling sparse reconstruction algorithm, we evaluate change detection performance using a pair of images from the GOTCHA data set.
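
    A minimal gap-filling sketch in the same spirit: treat the scene as sparse and recover it from the kept frequency-domain samples by an ℓ1-regularized fit, solved here with plain iterative soft thresholding rather than a production BPDN solver. The scene, interruption pattern, and λ are illustrative assumptions, not the GOTCHA setup.

```python
import numpy as np

n = 256
truth = np.zeros(n); truth[[10, 40, 90]] = [3.0, 2.0, 1.5]  # sparse "scene"
mask = np.ones(n, bool); mask[100:140] = False              # interrupted collection gap
y = np.fft.fft(truth)[mask]                                 # kept phase-history samples

def soft(z, t):                                             # complex soft threshold
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

x, step, lam = np.zeros(n, complex), 1.0 / n, 2.0
for _ in range(400):
    r = np.fft.fft(x)[mask] - y                             # residual on collected samples
    g = np.zeros(n, complex); g[mask] = r
    x = soft(x - step * (n * np.fft.ifft(g)), step * lam)   # ISTA step; n*ifft(g) is A^H r
print(np.round(np.abs(x[[10, 40, 90]]), 2))                 # recovered bright scatterers
```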

  19. Flexible sparse regularization

    NASA Astrophysics Data System (ADS)

    Lorenz, Dirk A.; Resmerita, Elena

    2017-01-01

    The seminal paper of Daubechies, Defrise, and De Mol made clear that ℓp spaces with p ∈ [1,2) and p-powers of the corresponding norms are appropriate settings for dealing with the reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1,2). An extensive literature also gives great credit to using ℓp spaces with p ∈ (0,1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, either superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization by varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ1 space but are strictly convex, while the ℓ1-norm is just convex.

  20. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation

    PubMed Central

    Grossi, Giuliano; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role in the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, and it has been shown that dictionaries learnt from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability. PMID:28103283
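
    The Procrustes-style update at the heart of such schemes can be sketched exactly for one group of orthonormal atoms: because DᵀD = I, minimizing ‖Y − DRX‖ over orthogonal R reduces to the classical orthogonal Procrustes problem with target YXᵀ. The dimensions and synthetic data below are illustrative; R-SVD's actual grouping and iteration protocol are richer.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(6)
m, k, N = 16, 4, 200
D = np.linalg.qr(rng.standard_normal((m, k)))[0]      # orthonormal atom group, D^T D = I
X = rng.standard_normal((k, N)) * (rng.random((k, N)) < 0.3)  # sparse codes
Y = np.linalg.qr(rng.standard_normal((m, k)))[0] @ X  # training signals (hidden atoms)

# With D^T D = I, min over orthogonal R of ||Y - D R X||_F reduces to the
# Procrustes problem min_R ||D R - Y X^T||_F, solved via the SVD of D^T Y X^T.
R, _ = orthogonal_procrustes(D, Y @ X.T)
D_new = D @ R                                         # rotated (still orthonormal) atoms
print(np.linalg.norm(Y - D_new @ X) <= np.linalg.norm(Y - D @ X))  # True: error not increased
```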

  1. Sparse Representation for Computer Vision and Pattern Recognition

    DTIC Science & Technology

    2009-05-01

    with 40% occlusion. Figure 2 (right) shows the validation performance of the various methods, under 30% contiguous occlusion, plotted as a...performance. B. Sparse Modeling for Image Reconstruction: Let X ∈ R^(m×N) be a set of N column data vectors x_j ∈ R^m (e.g., image patches), D ∈ R^(m×K) be a dictionary...K > m, and the patch sizes vary from 7×7 (m = 49) to 20×20 (m = 400, in the multiscale case), with a sparsity of about 1/10th of the signal

  2. Signal transformation in erosional landscapes: insights for reconstructing tectonic history from sediment flux records

    NASA Astrophysics Data System (ADS)

    Li, Q.; Gasparini, N. M.; Straub, K. M.

    2015-12-01

    Changes in tectonics can affect erosion rates across a mountain belt, leading to non-steady sediment flux delivery to fluvial transport systems. The sediment flux signal produced by time-varying tectonics may eventually be recorded in a depositional basin. However, before the sediment flux from an erosional watershed is fed to the downstream transport system and preserved in sedimentary deposits, tectonic signals can be distorted or even destroyed as they are transformed into a sediment-flux signal that is exported out of a watershed. In this study, we use the Channel-Hillslope Integrated Landscape Development (CHILD) model to explore how the sediment flux delivered from a mountain watershed responds to non-steady rock uplift. We observe that (1) a non-linear relationship between the erosion response and the tectonic perturbation can lead to a sediment-flux signal that is out of phase with the change in uplift rate; (2) in some cases in which the uplift perturbation is short, the sediment flux signal may contain no record of the change; and (3) uplift rates interpreted from sediment flux at the outlet of a transient erosional landscape are likely to be underestimated. All these observations highlight the difficulty in accurately reconstructing tectonic history from sediment flux records. Results from this study will help to constrain which tectonic signals may be evident in the sediment flux delivered from an erosional system and therefore have the potential to be recorded in stratigraphy, ultimately improving our ability to interpret stratigraphy.

  3. Performances of the signal reconstruction in the ATLAS Hadronic Tile Calorimeter

    NASA Astrophysics Data System (ADS)

    Meoni, E.

    2013-08-01

    The Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of ATLAS. It is a key detector for the reconstruction of hadrons, jets, tau leptons and missing transverse energy. TileCal is a sampling calorimeter using steel as an absorber and plastic scintillators as the active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The read-out system is designed to reconstruct the data in real time, fulfilling the tight time constraint imposed by the ATLAS first-level trigger rate (100 kHz). The signal amplitude and phase for each channel are measured using Optimal Filtering algorithms both at the online and offline level. We present the performance of these techniques on data collected in proton-proton collisions at a center-of-mass energy of 7 TeV. In particular, we show measurements of low amplitudes, close to the pedestal value, using high-transverse-momentum muons produced in the proton-proton collisions as a probe.

  4. Bayesian reconstruction of gravitational wave burst signals from simulations of rotating stellar core collapse and bounce

    SciTech Connect

    Roever, Christian; Bizouard, Marie-Anne; Christensen, Nelson; Dimmelmeier, Harald; Heng, Ik Siong; Meyer, Renate

    2009-11-15

    Presented in this paper is a technique that we propose for extracting the physical parameters of a rotating stellar core collapse from the observation of the associated gravitational wave signal from the collapse and core bounce. Data from interferometric gravitational wave detectors can be used to provide information on the mass of the progenitor model, precollapse rotation, and the nuclear equation of state. We use waveform libraries provided by the latest numerical simulations of rotating stellar core collapse models in general relativity, and from them create an orthogonal set of eigenvectors using principal component analysis. Bayesian inference techniques are then used to reconstruct the associated gravitational wave signal that is assumed to be detected by an interferometric detector. Posterior probability distribution functions are derived for the amplitudes of the principal component analysis eigenvectors, and the pulse arrival time. We show how the reconstructed signal and the principal component analysis eigenvector amplitude estimates may provide information on the physical parameters associated with the core collapse event.

  5. TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information

    PubMed Central

    Struck, Torsten H

    2014-01-01

    Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, phylogenetic reconstructions must contend with artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals might eventually mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, there has been no program allowing the detection of such effects in combination with an implementation into automatic process pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information like nodal support or patristic distances (PDs) to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and, being command-line driven, it can be integrated into automatic process pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118

  6. Sparse Methods for Biomedical Data.

    PubMed

    Ye, Jieping; Liu, Jun

    2012-06-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data.

  7. Sparse Methods for Biomedical Data

    PubMed Central

    Ye, Jieping; Liu, Jun

    2013-01-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data. PMID:24076585
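
    An illustrative instance of the ℓ1-norm methods reviewed above: lasso regression selecting a handful of relevant genes from thousands of candidates. The data, dimensions, and penalty weight are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n_samples, n_genes = 100, 5000
X = rng.standard_normal((n_samples, n_genes))
beta = np.zeros(n_genes); beta[[3, 17, 256]] = [2.0, -1.5, 1.0]  # only 3 relevant genes
y = X @ beta + 0.1 * rng.standard_normal(n_samples)

model = Lasso(alpha=0.1).fit(X, y)                    # l1 penalty induces sparsity
print(np.nonzero(model.coef_)[0])                     # mostly the truly relevant genes
```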

  8. Sparse representation utilizing tight frame for phase retrieval

    NASA Astrophysics Data System (ADS)

    Shi, Baoshun; Lian, Qiusheng; Chen, Shuzhen

    2015-12-01

    We treat the phase retrieval (PR) problem of reconstructing the signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques have been used to address this problem by utilizing various priors such as non-negativity, support, and Fourier magnitude constraints. Recent methods exploiting sparsity have been developed to improve the reconstruction quality. However, previous algorithms utilizing a sparsity prior suffer from either low reconstruction quality at low oversampling factors or sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate a sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the corresponding non-convex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that our proposed algorithm obtains better reconstruction quality than the conventional alternating projection methods, and even outperforms recent sparsity-based algorithms.

  9. Signal classification and event reconstruction for acoustic neutrino detection in sea water with KM3NeT

    NASA Astrophysics Data System (ADS)

    Kießling, Dominik

    2017-03-01

    The research infrastructure KM3NeT will comprise a multi-cubic-kilometer neutrino telescope that is currently being constructed in the Mediterranean Sea. Modules with optical and acoustic sensors are used in the detector. While the main purpose of the acoustic sensors is the position calibration of the detection units, they can also be used as instruments for studies on acoustic neutrino detection. In this article, methods for signal classification and event reconstruction for acoustic neutrino detectors, developed using Monte Carlo simulations, will be presented. For the signal classification, the disk-like emission pattern of the acoustic neutrino signal is used. This approach improves the suppression of transient background by several orders of magnitude. Additionally, an event reconstruction is developed based on the signal classification. An overview of these algorithms will be presented and the efficiency of the classification will be discussed. The quality of the event reconstruction will also be presented.

  10. Vigilance detection based on sparse representation of EEG.

    PubMed

    Yu, Hongbin; Lu, Hongtao; Ouyang, Tian; Liu, Hongjun; Lu, Bao-Liang

    2010-01-01

    Electroencephalogram (EEG)-based vigilance detection for people who engage in long, attention-demanding tasks such as monotonous monitoring or driving is a key field in brain-computer interface (BCI) research. However, robust detection of human vigilance from EEG is very difficult due to the low-SNR nature of EEG signals. Recently, compressive sensing and sparse representation have become successful tools in the fields of signal reconstruction and machine learning. In this paper, we propose to apply sparse representation of EEG to the vigilance detection problem. We first use the continuous wavelet transform to extract the rhythm features of EEG data, and then employ the sparse representation method on the wavelet transform coefficients. We collected five subjects' EEG recordings in a simulated driving environment and applied the proposed method to detect the vigilance of the subjects. The experimental results show that the algorithm framework proposed in this paper can estimate a driver's vigilance with an average accuracy of about 94.22%. We also compare our algorithm framework with other vigilance estimation methods using different feature extraction and classifier selection approaches; the results show that the proposed method has clear advantages in classification accuracy.

  11. Separation and reconstruction of BCG and EEG signals during continuous EEG and fMRI recordings

    PubMed Central

    Xia, Hongjing; Ruan, Dan; Cohen, Mark S.

    2014-01-01

    Despite considerable effort to remove it, the ballistocardiogram (BCG) remains a major artifact in electroencephalographic data (EEG) acquired inside magnetic resonance imaging (MRI) scanners, particularly in continuous (as opposed to event-related) recordings. In this study, we have developed a new Direct Recording Prior Encoding (DRPE) method to extract and separate the BCG and EEG components from contaminated signals, and have demonstrated its performance by comparing it quantitatively to the popular Optimal Basis Set (OBS) method. Our modified recording configuration allows us to obtain representative bases of the BCG- and EEG-only signals. Further, we have developed an optimization-based reconstruction approach to maximally incorporate prior knowledge of the BCG/EEG subspaces, and of the signal characteristics within them. Both OBS and DRPE methods were tested with experimental data, and compared quantitatively using cross-validation. In the challenging continuous EEG studies, DRPE outperforms the OBS method by nearly sevenfold in separating the continuous BCG and EEG signals. PMID:25002836

  12. A method for the automatic reconstruction of fetal cardiac signals from magnetocardiographic recordings

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Alleva, G.; Comani, S.

    2005-10-01

    Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.

  13. Virtual head rotation reveals a process of route reconstruction from human vestibular signals.

    PubMed

    Day, Brian L; Fitzpatrick, Richard C

    2005-09-01

    The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled.
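
    The identified computation is compact enough to state directly: rotate the skull-fixed virtual rotation vector into space coordinates and take its dot product with the gravitational unit vector. The axis conventions and evoked-rotation direction in this sketch are illustrative assumptions.

```python
import numpy as np

def perceived_yaw(omega_cranio, pitch_rad):
    """Body-rotation percept: dot product of the space-referred rotation
    vector with the gravitational unit vector."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # pitch rotation
    g = np.array([0.0, 0.0, 1.0])                               # gravity unit vector (up)
    return (R @ omega_cranio) @ g

omega = np.array([1.0, 0.0, 0.0])          # virtual rotation fixed in skull coordinates
for pitch_deg in (-90, 0, 90):
    print(pitch_deg, round(perceived_yaw(omega, np.radians(pitch_deg)), 2))
# the sign of the perceived yaw flips between forward and backward pitch, with a
# null at an intermediate pitch, as the study reports
```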

  14. Rank Awareness in Group-Sparse Recovery of Multi-Echo MR Images

    PubMed Central

    Majumdar, Angshul; Ward, Rabab

    2013-01-01

    This work addresses the problem of recovering multi-echo T1- or T2-weighted images from their partial K-space scans. Recent studies have shown that the best results are obtained when all the multi-echo images are reconstructed by simultaneously exploiting their intra-image spatial redundancy and inter-echo correlation. The aforesaid studies either stack the vectorised images (formed by row or column concatenation) as columns of a Multiple Measurement Vector (MMV) matrix or concatenate them as a long vector. Owing to the inter-image correlation, the MMV matrix or the long concatenated vector thus formed is row-sparse or group-sparse, respectively, in a transform domain (wavelets). Consequently the reconstruction problem was formulated as a row-sparse MMV recovery or a group-sparse vector recovery. In this work we show that when the multi-echo images are arranged in the MMV form, the resulting matrix is low-rank. We show that better reconstruction accuracy can be obtained when the information about rank-deficiency is incorporated into the row/group sparse recovery problem. Mathematically, this leads to a constrained optimization problem whose objective function promotes the signal's group-sparsity as well as its rank-deficiency; the objective function is minimized subject to data fidelity constraints. The experiments were carried out on ex vivo and in vivo T2-weighted images of a rat's spinal cord. Results show that this method yields considerably better results than state-of-the-art reconstruction techniques. PMID:23519348
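
    For concreteness, the constrained program described above can plausibly be written as follows (our rendering; the symbols, norms, and weighting are assumptions rather than the authors' exact formulation). Here Y is the stacked multi-echo K-space data, Φ the partial Fourier sampling operator, Ψ a wavelet transform, and X the MMV matrix of images:

```latex
\min_{X} \; \|\Psi X\|_{2,1} \;+\; \lambda \|X\|_{*}
\qquad \text{subject to} \qquad \|Y - \Phi X\|_{F} \le \epsilon
```

    The mixed l2,1 norm promotes row/group sparsity in the transform domain, the nuclear norm promotes rank deficiency, and ε reflects the measurement noise level.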

  15. Reconstruction and analysis of the pupil dilation signal: Application to a psychophysiological affective protocol.

    PubMed

    Onorati, Francesco; Barbieri, Riccardo; Mauri, Maurizio; Russo, Vincenzo; Mainardi, Luca

    2013-01-01

    Pupil dilation (PD) dynamics reflect the interaction of sympathetic and parasympathetic innervations in the iris muscle. Different pupillary responses have been observed in response to emotionally characterized stimuli. Evidence of correlations between PD and respiration, heart rate variability (HRV) and blood pressure (BP) is present in the literature, making pupil dilation a candidate for estimating the activity state of the Autonomic Nervous System (ANS), in particular during stressful and/or emotionally characterized stimuli. The aim of this study is to investigate whether both slow and fast PD dynamics can be used to characterize different affective states. Two different frequency bands were considered: the classical autonomic band [0-0.45] Hz and a very high frequency (VHF) band [0.45-5] Hz. The pupil dilation signals of 13 normal subjects were recorded during a psychophysiological protocol designed to evoke particular affective states. An elaborate reconstruction of the missing data (blink events and artifacts) was performed to obtain a more reliable signal, particularly in the VHF band. Results show a high correlation between the arousal of the event and the power characteristics of the signal in all frequency bands. In particular, for the "Anger" condition, 10 indices out of 13 were significantly different from their "Baseline" counterparts. These preliminary results suggest that both slow and fast oscillations of the PD can be used to characterize affective states.
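
    As an illustration of the two-band power analysis described above, a short sketch (sampling rate and helper names are our assumptions; the band edges follow the abstract):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Approximate power of x in [f_lo, f_hi] Hz from the Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    mask = (f >= f_lo) & (f <= f_hi)
    return pxx[mask].sum() * (f[1] - f[0])   # rectangle-rule integral

fs = 60                               # assumed eye-tracker rate (Hz)
pd_trace = np.random.randn(fs * 120)  # placeholder for a reconstructed PD signal
p_autonomic = band_power(pd_trace, fs, 0.0, 0.45)  # classical autonomic band
p_vhf = band_power(pd_trace, fs, 0.45, 5.0)        # very-high-frequency band
```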

  16. Signal Analysis and Waveform Reconstruction of Shock Waves Generated by Underwater Electrical Wire Explosions with Piezoelectric Pressure Probes.

    PubMed

    Zhou, Haibin; Zhang, Yongmin; Han, Ruoyu; Jing, Yan; Wu, Jiawei; Liu, Qiaojue; Ding, Weidong; Qiu, Aici

    2016-04-22

    Underwater shock waves (SWs) generated by underwater electrical wire explosions (UEWEs) have been widely studied and applied. Precise measurement of this kind of SW is important, but very difficult to accomplish due to their high peak pressure, steep rising edge and very short pulse width (on the order of tens of μs). This paper aims to analyze the signals obtained by two kinds of commercial piezoelectric pressure probes, and to reconstruct the correct pressure waveform from the distorted one measured by the probes. It is found that both PCB138 and Müller-plate probes can be used to measure the relative SW pressure value because of their good uniformity and linearity, but neither of them can capture precise SW waveforms. To better approximate the real SW signal, we propose a new multi-exponential pressure waveform model, which accounts for the faster pressure decay at early times and the slower decay at longer times. Based on this model and the energy conservation law, the pressure waveform obtained by the PCB138 probe has been reconstructed, and the reconstruction accuracy has been verified against the signals obtained by the Müller-plate probe. Reconstruction results show that the measured SW peak pressures are smaller than those of the real signal. The waveform reconstruction method is both reasonable and reliable.
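
    A generic form consistent with the multi-exponential model described above (the number of terms and the exact constraints are our assumptions; the paper's fitted parameters are not reproduced here):

```latex
p(t) \;=\; \sum_{i=1}^{n} p_i \, e^{-t/\theta_i}, \qquad t \ge 0
```

    Short time constants θ_i capture the faster pressure decay at early times and longer ones the slower tail; the amplitudes p_i can then be constrained so that the integrated waveform energy is consistent with the energy conservation law used in the reconstruction.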

  18. SACICA: a sparse approximation coefficient-based ICA model for functional magnetic resonance imaging data analysis.

    PubMed

    Wang, Nizhuan; Zeng, Weiming; Chen, Lei

    2013-05-30

    Independent component analysis (ICA) has been widely used on functional magnetic resonance imaging (fMRI) data to evaluate functional connectivity, under the assumption that the sources of functional networks are statistically independent. Recently, many researchers have demonstrated that sparsity is an effective assumption for fMRI signal separation. In this research, we present a sparse approximation coefficient-based ICA (SACICA) model for analysing fMRI data, a promising combination of sparse features with an ICA technique. The SACICA method consists of three procedures: first, a wavelet packet decomposition, which decomposes the fMRI data into wavelet tree nodes with different degrees of sparsity; second, formation of the sparse approximation coefficient set, in which an effective Lp norm is proposed to measure the sparseness of the distinct wavelet tree nodes; and last, ICA decomposition and reconstruction, which utilises the sparse approximation coefficient set of the fMRI data. The hybrid-data experimental results demonstrated that the SACICA method exhibited stronger spatial source reconstruction on the unsmoothed fMRI data and better detection sensitivity for the functional signal on the smoothed fMRI data than the FastICA method. Furthermore, task-related experiments also revealed that SACICA was not only effective in discovering the functional networks but also exhibited better detection sensitivity for the visual-related functional signal. In addition, SACICA combined with Fast-FENICA, proposed by Wang et al. (2012), was demonstrated to conduct group analysis effectively on the resting-state data set.
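
    The abstract does not give the exact Lp measure, but a common way to score the sparseness of a wavelet-packet node looks like the following sketch (function and variable names are ours):

```python
import numpy as np

def lp_sparsity_score(node_coeffs, p=0.5):
    """Lp quasi-norm (0 < p < 1) of energy-normalized coefficients;
    a smaller score indicates a sparser node."""
    c = np.abs(node_coeffs)
    c = c / (np.linalg.norm(c) + 1e-12)   # normalize out node energy
    return float(np.sum(c ** p))

# Rank candidate tree nodes and keep the sparsest for the ICA stage.
nodes = [np.random.randn(128) for _ in range(8)]   # placeholder nodes
ranked = sorted(range(len(nodes)), key=lambda i: lp_sparsity_score(nodes[i]))
```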

  19. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

    Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single-point imaging scan scheme. Improving the measurement sensitivity is therefore an important goal, as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method designed to fit the unique characteristics of imaging with a unilateral device. The method improves imaging capability by improving the extraction of image information from the noisy data. This is done by using the redundancy in the acquired MR signal and the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method's performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion, with respect to gold-standard reference images. Integrating this method with further improvements may lead to a marked reduction in imaging times, aiding the use of such scanners in imaging applications.
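
    The estimator class used above is standard; a minimal sketch follows (matrix names are ours, and the paper's specific weights built from signal redundancy and noise statistics are not reproduced):

```python
import numpy as np

def weighted_least_squares(A, y, noise_cov):
    """WLS estimate x = (A^T W A)^{-1} A^T W y with W the inverse noise
    covariance, so noisier measurements are weighted less."""
    W = np.linalg.inv(noise_cov)
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ y)
```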

  20. Climate signal age effects in boreal tree-rings: Lessons to be learned for paleoclimatic reconstructions

    NASA Astrophysics Data System (ADS)

    Konter, Oliver; Büntgen, Ulf; Carrer, Marco; Timonen, Mauri; Esper, Jan

    2016-06-01

    Age-related alteration in the sensitivity of tree-ring width (TRW) to climate variability has been reported for different forest species and environments. The resulting growth-climate response patterns are, however, often inconsistent, and similar assessments using maximum latewood density (MXD) are still missing. Here, we analyze climate signal age effects (CSAE, age-related changes in the climate sensitivity of tree growth) in a newly aggregated network of 692 Pinus sylvestris L. TRW and MXD series from northern Fennoscandia. Although the summer temperature sensitivity of TRW (rAll = 0.48) ranges below that of MXD (rAll = 0.76), it declines for both parameters as cambial age increases. Assessment of CSAE for individual series further reveals decreasing correlation values as a function of time. This declining signal strength remains temporally robust and negative for MXD, while age-related trends in TRW meander between positive and negative values. Although CSAE are significant and temporally variable in both tree-ring parameters, MXD is more suitable for the development of climate reconstructions. Our results indicate that sampling of young and old trees, and testing for CSAE, should become routine for TRW and MXD data prior to any paleoclimatic endeavor.

  1. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    PubMed Central

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-01-01

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise. PMID:26501284
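
    The SVD step mentioned at the end is, in the DOA literature, usually the l1-SVD-style projection onto the signal subspace; a sketch under that assumption (names ours):

```python
import numpy as np

def reduce_snapshots(Y, n_sources):
    """Project the array-snapshot matrix Y (sensors x snapshots) onto its
    n_sources-dimensional signal subspace, shrinking the MMV problem and
    attenuating measurement noise."""
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return Y @ Vt.T[:, :n_sources]   # sensors x n_sources reduced data
```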

  2. A non-uniformly under-sampled blade tip-timing signal reconstruction method for blade vibration monitoring.

    PubMed

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-22

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which results in non-uniformity of the sampled signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove the unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired with as few as two probes.
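
    In symbols, the "sum of uniform sample streams" model described above can be sketched as follows (our notation): with M probes at static circumferential offsets τ_m and rotor period T, the BTT sample stream is

```latex
x_s(t) \;=\; \sum_{m=1}^{M} \; \sum_{n \in \mathbb{Z}} x(nT + \tau_m)\, \delta(t - nT - \tau_m)
```

    i.e., M uniform streams at rate 1/T, each offset by its probe position. The per-stream interpolating functions are then solved jointly, sub-band by sub-band, so that the spectral replicas introduced by sampling cancel.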

  4. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description of the underlying principle by which mammalian sensory systems process information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficient solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for the vibration analysis. To diagnose the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are learned separately from each class of vibration signals to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of the vibration signals; and sparse features are formulated in terms of activations of atoms. A multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
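
    A compact sketch of the per-class dictionary learning and sparse-feature pipeline described above, using generic library routines (class layout, sizes, and sparsity level are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def learn_class_dictionary(segments, n_atoms=32, k=5):
    """Learn basis functions (atoms) from one class of vibration segments."""
    dl = DictionaryLearning(n_components=n_atoms, max_iter=15,
                            transform_algorithm='omp',
                            transform_n_nonzero_coefs=k)
    dl.fit(segments)
    return dl.components_

# Merge per-class dictionaries into one redundant dictionary, then use
# atom activations as the sparse features fed to the LDA classifier.
class_a = np.random.randn(100, 256)   # placeholder signals, fault class A
class_b = np.random.randn(100, 256)   # placeholder signals, fault class B
D = np.vstack([learn_class_dictionary(class_a),
               learn_class_dictionary(class_b)])
features = sparse_encode(np.vstack([class_a, class_b]), D,
                         algorithm='omp', n_nonzero_coefs=5)
```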

  5. Structured sparse priors for image classification.

    PubMed

    Srinivas, Umamahesh; Suo, Yuanming; Dao, Minh; Monga, Vishal; Tran, Trac D

    2015-06-01

    Model-based compressive sensing (CS) exploits the structure inherent in sparse signals for the design of better signal recovery algorithms. This information about structure is often captured in the form of a prior on the sparse coefficients, with the Laplacian being the most common such choice (leading to l1-norm minimization). Recent work has exploited the discriminative capability of sparse representations for image classification by employing class-specific dictionaries in the CS framework. Our contribution is a logical extension of these ideas into structured sparsity for classification. We introduce the notion of discriminative class-specific priors in conjunction with class-specific dictionaries, specifically the spike-and-slab prior widely applied in Bayesian sparse regression. Significantly, the proposed framework takes the burden off the demand for abundant training image samples necessary for the success of sparsity-based classification schemes. We demonstrate this practical benefit of our approach in important applications, such as face recognition and object categorization.
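
    For reference, the spike-and-slab prior named above has the standard form (notation ours):

```latex
p(\alpha_i) \;=\; (1-\gamma_i)\,\delta(\alpha_i) \;+\; \gamma_i\,\mathcal{N}(\alpha_i \mid 0, \sigma_i^2)
```

    where δ is a point mass at zero and γ_i is the prior probability that coefficient α_i is active; making the γ_i class-specific is what turns this into a discriminative structured prior.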

  6. Reconstruction of band-limited signals from multichannel and periodic nonuniform samples in the linear canonical transform domain

    NASA Astrophysics Data System (ADS)

    Wei, Deyun; Ran, Qiwen; Li, Yuanmin

    2011-09-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide application in optical, acoustical, electromagnetic, and other wave propagation problems. This paper addresses the problem of signal reconstruction from multichannel and periodic nonuniform samples in the LCT domain. Firstly, the multichannel sampling theorem (MST) for band-limited signals with the LCT is proposed based on multichannel system equations; it generalizes the well-known sampling theorem for the LCT. We consider the problem of reconstructing the signal from samples acquired using a multichannel sampling scheme. For this purpose, we propose two alternatives. The first scheme is based on the conventional Fourier series and the inverse LCT operation. The second is based on the conventional Fourier series and the inverse Fourier transform (FT) operation. Moreover, the classical Papoulis MST in the FT domain is shown to be a special case of the achieved results. Since the periodic nonuniformly sampled signal in the LCT domain has valuable applications, the reconstruction expression for the periodic nonuniformly sampled signal has then been obtained by using the derived MST and the specific space-shifting property of the LCT. Lastly, potential applications of the MST are presented to show the advantage of the theory.
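
    For orientation, the LCT of a signal f with parameter matrix (a, b; c, d), ad − bc = 1 and b ≠ 0, is conventionally defined as follows (a standard form; normalization conventions vary across papers):

```latex
F_{(a,b;c,d)}(u) \;=\; \sqrt{\frac{1}{j 2\pi b}} \int_{-\infty}^{\infty} f(t)\,
\exp\!\left( \frac{j}{2b}\left( a t^{2} - 2 u t + d u^{2} \right) \right) \mathrm{d}t
```

    which reduces to the ordinary Fourier transform for (a, b; c, d) = (0, 1; −1, 0), up to a constant phase factor.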

  7. Scattered radiation in flat-detector based cone-beam CT: propagation of signal, contrast, and noise into reconstructed volumes

    NASA Astrophysics Data System (ADS)

    Wiegert, Jens; Hohmann, Steffen; Bertram, Matthias

    2007-03-01

    This paper presents a novel framework for the systematic assessment of the impact of scattered radiation in flat-detector based cone-beam CT. While it is well known that scattered radiation causes three different types of artifacts in reconstructed images (inhomogeneity artifacts such as cupping and streaks, degradation of contrast, and enhancement of noise), investigations in the literature mostly quantify the impact of scatter only in terms of inhomogeneity artifacts, giving little insight, e.g., into the visibility of low-contrast lesions. Therefore, for this study a novel framework has been developed that, in addition to normal reconstruction of the CT (HU) number, allows for reconstruction of voxelized expectation values of three additional important characteristics of image quality: signal degradation, contrast reduction, and noise variance. The new framework has been applied to projection data obtained with voxelized Monte-Carlo simulations of clinical CT data sets of high spatial resolution. Using these data, the impact of scattered radiation was thoroughly studied for realistic and clinically relevant patient geometries of the head, thorax, and pelvis regions. By means of spatially resolved reconstructions of contrast and noise propagation, the image quality of a scenario using standard antiscatter grids could be evaluated in great detail. Results show the spatially resolved contrast degradation and the spatially resolved expected standard deviation of the noise at any position in the reconstructed object. The new framework represents a general tool for analyzing image quality in reconstructed images.

  8. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.

  9. A unified treatment of some iterative algorithms in signal processing and image reconstruction

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2004-02-01

    Let T be a (possibly nonlinear) continuous operator on a Hilbert space $\mathcal{H}$. If, for some starting vector $x$, the orbit sequence $\{T^k x,\; k = 0, 1, \ldots\}$ converges, then the limit $z$ is a fixed point of $T$; that is, $Tz = z$. An operator $N$ on a Hilbert space $\mathcal{H}$ is nonexpansive (ne) if, for each $x$ and $y$ in $\mathcal{H}$, $\|Nx - Ny\| \leq \|x - y\|$. Even when $N$ has fixed points, the orbit sequence $\{N^k x\}$ need not converge; consider the example $N = -I$, where $I$ denotes the identity operator. However, for any $\alpha \in (0,1)$ the iterative procedure defined by $x^{k+1} = (1-\alpha)x^k + \alpha N x^k$ converges (weakly) to a fixed point of $N$ whenever such points exist. This is the Krasnoselskii-Mann (KM) approach to finding fixed points of ne operators. A wide variety of iterative procedures used in signal processing and image reconstruction and elsewhere are special cases of the KM iterative procedure, for particular choices of the ne operator $N$. These include the Gerchberg-Papoulis method for bandlimited extrapolation, the SART algorithm of Anderson and Kak, the Landweber and projected Landweber algorithms, simultaneous and sequential methods for solving the convex feasibility problem, the ART and Cimmino methods for solving linear systems of equations, the CQ algorithm for solving the split feasibility problem and Dolidze's procedure for the variational inequality problem for monotone operators.
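
    A minimal runnable sketch of the KM iteration, including the N = −I example from the abstract (step size and tolerance are arbitrary choices):

```python
import numpy as np

def km_iterate(N, x0, alpha=0.5, tol=1e-12, max_iter=100000):
    """Krasnoselskii-Mann iteration x_{k+1} = (1-alpha) x_k + alpha N(x_k)
    for a nonexpansive operator N; converges to a fixed point when one exists."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = (1.0 - alpha) * x + alpha * N(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# N = -I: the plain orbit {N^k x} oscillates forever, but the averaged
# KM iteration converges to the fixed point 0.
print(km_iterate(lambda x: -x, [1.0, -2.0]))
```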

  10. Protein crystal structure from non-oriented, single-axis sparse X-ray data.

    PubMed

    Wierman, Jennifer L; Lan, Ti-Yen; Tate, Mark W; Philipp, Hugh T; Elser, Veit; Gruner, Sol M

    2016-01-01

    X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so 'sparse' in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the EMC algorithm, a successful reconstruction is demonstrated here in a sparsity regime where there are no Bragg peaks that conventionally would serve to determine the orientation of the crystal in each exposure. In this proof-of-principle experiment, a hen egg-white lysozyme (HEWL) crystal rotating about a single axis was illuminated by an X-ray beam from an X-ray generator to simulate the diffraction patterns of microcrystals from synchrotron radiation. Millions of these sparse frames, typically containing only ∼200 photons per frame, were recorded using a fast-framing detector. It is shown that reconstruction of three-dimensional diffraction intensity is possible using the EMC algorithm, even with these extremely sparse frames and without knowledge of the rotation angle. Further, the reconstructed intensity can be phased and refined to solve the protein structure using traditional crystallographic software. This suggests that synchrotron-based serial crystallography of micrometre-sized crystals can be practical with the aid of the EMC algorithm even in cases where the data are

  11. Visual tracking via robust multitask sparse prototypes

    NASA Astrophysics Data System (ADS)

    Zhang, Huanlong; Hu, Shiqiang; Yu, Junyang

    2015-03-01

    Sparse representation has been applied to online subspace-learning-based tracking problems. To handle partial occlusion effectively, some researchers introduce l1 regularization into principal component analysis (PCA) reconstruction. However, in these traditional tracking methods, the representation of each object observation is often viewed as an individual task, so the inter-relationship between PCA basis vectors is ignored. We propose a new online visual tracking algorithm with multitask sparse prototypes, which combines multitask sparse learning with PCA-based subspace representation. We first extend a visual tracking algorithm with sparse prototypes into the multitask learning framework to mine inter-relations between subtasks. Then, to avoid the problem that enforcing all subtasks to share the same structure may degrade tracking results, we impose group sparse constraints on the coefficients of the PCA basis vectors and element-wise sparse constraints on the error coefficients, respectively. Finally, we show that the proposed optimization problem can be effectively solved using the accelerated proximal gradient method with fast convergence. Experimental results compared with state-of-the-art tracking methods demonstrate that the proposed algorithm achieves favorable performance when the object undergoes partial occlusion, motion blur, and illumination changes.

  12. Online sparse representation for remote sensing compressed-sensed video sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough in data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from measurements taken at a rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear, non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into key frames (K frames) and non-key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique according to the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI. These learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, often evaluated by Peak Signal-to-Noise Ratio (PSNR), is compared with other online sparse representation algorithms. The simulation results show its advantages in reduced reconstruction time and robust reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.

  13. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
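
    A discrete-time software sketch of the LCA dynamics the patent describes (the patented system is analog hardware; the step size, threshold, and soft-threshold choice below are illustrative):

```python
import numpy as np

def soft(u, lam):
    """Soft-threshold activation linking node state u to code a."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(y, Phi, lam=0.1, step=0.05, n_steps=400):
    """Locally Competitive Algorithm: leaky-integrator states u are driven
    by Phi^T y and inhibited laterally through the atom correlations
    (Phi^T Phi - I); thresholding u yields the sparse code."""
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    b = Phi.T @ y                            # feedforward drive
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = soft(u, lam)
        u += step * (b - u - G @ a)          # du/dt = (1/tau)(b - u - G a)
    return soft(u, lam)

Phi = np.random.randn(32, 64)
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm atoms
code = lca(Phi[:, 0], Phi)                   # recover a 1-sparse input
```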

  14. Vectorized Sparse Elimination.

    DTIC Science & Technology

    1984-03-01

    Grids," Proc. 6th Symposium on Reservoir Simulation , New Orleans, Feb. 1-2, 1982, pp. 489-506. [51 Arya, S., and D. A. Calahan, "Optimal Scheduling of...of Computer Architecture on Direct Sparse Matrix Routines in Petroleum Reservoir Simulation ," Sparse Matrix Symposium, Fairfield Glade, TE, October

  15. Reconstruction of the input signal of the leaky integrate-and-fire neuronal model from its interspike intervals.

    PubMed

    Seydnejad, Saeid R

    2016-02-01

    Extracting the input signal of a neuron by analyzing its spike output is an important step toward understanding how external information is coded into discrete events of action potentials and how this information is exchanged between different neurons in the nervous system. Most existing methods analyze this decoding problem in a stochastic framework and use probabilistic metrics such as the maximum-likelihood method to determine the parameters of the input signal, assuming a leaky integrate-and-fire (LIF) model. In this article, the input signal of the LIF model is considered as a combination of orthogonal basis functions. The coefficients of the basis functions are found by minimizing the norm of the difference between the observed spikes and those generated by the estimated signal. This approach gives rise to a deterministic reconstruction of the input signal and results in a simple matrix identity through which the coefficients of the basis functions, and therefore the neuronal stimulus, can be identified. The inherent noise of the neuron is considered as an additional factor in the membrane potential and is treated as a disturbance in the reconstruction algorithm. The performance of the proposed scheme is evaluated by numerical simulations, and it is shown that input signals with different characteristics can be well recovered by this algorithm.
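
    In symbols, the scheme described above expands the unknown stimulus in orthogonal basis functions and fits the coefficients to the observed spike times (notation ours; the LIF membrane equation is the standard one):

```latex
\tau \frac{dV}{dt} = -V(t) + R\,I(t), \qquad
I(t) = \sum_{k=1}^{K} c_k\,\phi_k(t), \qquad
\hat{c} = \arg\min_{c}\, \big\| \mathbf{t}^{\text{obs}} - \mathbf{t}^{\text{model}}(c) \big\|
```

    where t^obs and t^model(c) collect the observed and model-generated spike times; the neuron's intrinsic noise enters the membrane potential as the disturbance the algorithm must tolerate.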

  16. Reconstruction of the first derivative EPR spectrum from multiple harmonics of the field-modulated continuous wave signal

    PubMed Central

    Tseitlin, Mark; Eaton, Sandra S.; Eaton, Gareth R.

    2011-01-01

    Selection of the amplitude of magnetic field modulation for continuous wave electron paramagnetic resonance (EPR) is often a trade-off between sensitivity and resolution. Increasing the modulation amplitude improves the signal-to-noise ratio, S/N, at the expense of broadening the signal. Combining information from multiple harmonics of the field-modulated signal is proposed as a method to obtain the first-derivative spectrum with minimal broadening and improved S/N. The harmonics are obtained by digital phase-sensitive detection of the signal at the modulation frequency and its integer multiples. Reconstruction of the first-derivative EPR line is done in the Fourier conjugate domain, where each harmonic can be represented as the product of the Fourier transform of the first-derivative signal with an analytical function. The analytical function for each harmonic can be viewed as a filter. The Fourier transform of the first-derivative spectrum can be calculated from all available harmonics by solving an optimization problem with the goal of maximizing the S/N. Inverse Fourier transformation of the result produces the first-derivative EPR line in the magnetic field domain. The use of a modulation amplitude greater than the linewidth improves the S/N but does not broaden the reconstructed spectrum. The method works for an arbitrary EPR line shape, but is limited to the case where the magnetization instantaneously follows the modulation field, known as the adiabatic approximation. PMID:21349750

  17. Signal Reconstruction and Analysis Via New Techniques in Harmonic and Complex Analysis

    DTIC Science & Technology

    2005-08-31

    We have used tools from the theory of harmonic analysis and number theory to extend existing theories and develop new approaches... likelihood estimates for the sparse data sets on which our methods work. We are also working on extending our work to multiply periodic processes. We... deconvolution and sampling to radial domains, exploiting coprime relationships among zero sets of Bessel functions. We have also discussed applications

  18. Inverse polynomial reconstruction method in DCT domain

    NASA Astrophysics Data System (ADS)

    Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen

    2012-12-01

    The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on minimum description length principle and cross-validation are devised to select the polynomial orders, as a requirement of the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework indicate significant improvements over wavelet counterparts for this class of signals.

  19. Sparse recovery via convex optimization

    NASA Astrophysics Data System (ADS)

    Randall, Paige Alicia

    This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from MRI imaging to genomics to compressed sensing. We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message. We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of $\sqrt{s}$, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as $O(1/\log n)$ and still allows sparsities as large as $O(m/\log n)$. This is in contrast to other existing results involving coherence, where the coherence can only be as large as $O(1/\sqrt{m})$ to allow sparsities as large as $O(\sqrt{m})$. We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition. We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property. Finally, we introduce the method of $\ell_1$ analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfies a uniform uncertainty principle and we compare our results with the more standard $\ell_1$ synthesis approach. All our methods involve solving an $\ell_1$ minimization

  20. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session. Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.

  1. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    SciTech Connect

    Chartrand, Rick

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
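
    The nonconvex relaxation underlying this line of work replaces the l1 norm by an lp quasi-norm with 0 < p < 1 (generic formulation; the paper's specific Fourier-based iteration is not reproduced here):

```latex
\min_{x} \; \|x\|_{p}^{p} = \sum_{i} |x_i|^{p}
\qquad \text{subject to} \qquad \|\Phi x - y\|_{2} \le \epsilon, \qquad 0 < p < 1
```

    which penalizes small coefficients more aggressively than l1, and empirically permits accurate recovery from fewer k-space samples than the convex p = 1 case.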

  2. Temporal Super Resolution Enhancement of Echocardiographic Images Based on Sparse Representation.

    PubMed

    Gifani, Parisa; Behnam, Hamid; Haddadi, Farzan; Sani, Zahra Alizadeh; Shojaeifard, Maryam

    2016-01-01

    A challenging issue for echocardiographic image interpretation is the accurate analysis of small transient motions of myocardium and valves during real-time visualization. A higher frame rate video may reduce this difficulty, and temporal super resolution (TSR) is useful for illustrating the fast-moving structures. In this paper, we introduce a novel framework that optimizes TSR enhancement of echocardiographic images by utilizing temporal information and sparse representation. The goal of this method is to increase the frame rate of echocardiographic videos, and therefore enable more accurate analyses of moving structures. For the proposed method, we first derived temporal information by extracting intensity variation time curves (IVTCs) assessed for each pixel. We then designed both low-resolution and high-resolution overcomplete dictionaries based on prior knowledge of the temporal signals and a set of prespecified known functions. The IVTCs can then be described as linear combinations of a few prototype atoms in the low-resolution dictionary. We used the Bayesian compressive sensing (BCS) sparse recovery algorithm to find the sparse coefficients of the signals. We extracted the sparse coefficients and the corresponding active atoms in the low-resolution dictionary to construct new sparse coefficients corresponding to the high-resolution dictionary. Using the estimated atoms and the high-resolution dictionary, a new IVTC with more samples was constructed. Finally, by placing the new IVTC signals in the original IVTC positions, we were able to reconstruct the original echocardiography video with more frames. The proposed method does not require training of low-resolution and high-resolution dictionaries, nor does it require motion estimation; it does not blur fast-moving objects, and does not have blocking artifacts.

  3. Structured sparse models for classification

    NASA Astrophysics Data System (ADS)

    Castrodad, Alexey

    The main focus of this thesis is the modeling and classification of high-dimensional data using structured sparsity. Sparse models, where data are assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. The success of sparse modeling is largely due to its ability to efficiently use the redundancy of the data and find its underlying structure. In a classification setting, we capitalize on this advantage to properly model and separate the structure of the classes. We design and validate modeling solutions to challenging problems arising in computer vision and remote sensing. We propose both supervised and unsupervised schemes for the modeling of human actions from motion imagery under a wide variety of acquisition conditions. In the supervised case, the main goal is to classify the human actions in the video given a predefined set of actions to learn from. In the unsupervised case, the main goal is to analyze the spatio-temporal dynamics of the individuals in the scene without having any prior information on the actions themselves. We also propose a model for remotely sensed hyperspectral imagery, where the main goal is to perform automatic spectral source separation and mapping at the subpixel level. Finally, we present a sparse model for sensor fusion to exploit the common structure and enforce collaboration of hyperspectral with LiDAR data for better mapping capabilities. In all these scenarios, we demonstrate that these data can be expressed as a combination of atoms from a class-structured dictionary. This data representation becomes essentially a "mixture of classes," and by directly exploiting the sparse codes, one can attain highly accurate classification performance with relatively unsophisticated classifiers.

  4. Evolving sparse stellar populations

    NASA Astrophysics Data System (ADS)

    Bruzual, Gustavo; Gladis Magris, C.; Hernández-Pérez, Fabiola

    2017-03-01

    We examine the role that stochastic fluctuations in the IMF and in the number of interacting binaries have on the spectro-photometric properties of sparse stellar populations as a function of age and metallicity.

  5. Model-based 3D SAR reconstruction

    NASA Astrophysics Data System (ADS)

    Knight, Chad; Gunther, Jake; Moon, Todd

    2014-06-01

    Three-dimensional scene reconstruction with synthetic aperture radar (SAR) is desirable for target recognition and improved scene interpretability. The vertical aperture, which is critical for reconstructing 3D SAR scenes, is almost always sparsely sampled due to practical limitations, which creates an underdetermined problem. This paper explores 3D scene reconstruction using a convex model-based approach. The approach developed is demonstrated on 3D scenes, but can be extended to SAR reconstruction of sparsely sampled signals in the spatial and/or frequency domains. The model-based approach enables knowledge-aided image formation (KAIF) by incorporating spatial, aspect, and sparsity magnitude terms into the image reconstruction. The incorporation of these terms, which are based on prior scene knowledge, demonstrates improved results compared to traditional image formation algorithms. The SAR image formation problem is formulated as a second-order cone program (SOCP) and the results are demonstrated on 3D scenes using simulated data and data from the GOTCHA data collect. The model-based results are contrasted against traditional backprojected images.

  6. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias; Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

    Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  7. Characterizing heterogeneity among virus particles by stochastic 3D signal reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Gong, Yunye; Wang, Qiu; Zheng, Yili; Doerschuk, Peter C.

    2015-09-01

    In single-particle cryo electron microscopy, many electron microscope images, each of a single instance of a biological particle such as a virus or a ribosome, are measured and the 3-D electron scattering intensity of the particle is reconstructed by computation. Because each instance of the particle is imaged separately, it should be possible to characterize the heterogeneity of the different instances of the particle as well as to compute a nominal reconstruction of the particle. In this paper, such an algorithm is described and demonstrated on the bacteriophage Hong Kong 97. The algorithm is a statistical maximum likelihood estimator computed by an expectation-maximization algorithm implemented in Matlab software.

  8. Pollen reconstructions, tree-rings and early climate data from Minnesota, USA: a cautionary tale of bias and signal attenuation

    NASA Astrophysics Data System (ADS)

    St-Jacques, J. M.; Cumming, B. F.; Smol, J. P.; Sauchyn, D.

    2015-12-01

    High-resolution proxy reconstructions are essential to assess the rate and magnitude of anthropogenic global warming. High-resolution pollen records are being critically examined for the production of accurate climate reconstructions of the last millennium, often as extensions of tree-ring records. Past climate inference from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have directly altered vegetation, and hence modern pollen deposition is a product of landscape disturbance and climate, unlike in the past, when climate-driven processes dominated. This could cause serious bias in pollen reconstructions. In the US Midwest, direct human impacts have greatly altered the vegetation and pollen rain since Euro-American settlement in the mid-19th century. Using instrumental climate data from the early 1800s from Fort Snelling (Minnesota), we assessed the bias from the conventional method of inferring climate from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The pre-settlement calibration set provides more accurate reconstructions of 19th century temperature than the modern set does. When both calibration sets are used to reconstruct temperatures since AD 1116 from a varve-dated pollen record from Lake Mina, Minnesota, the conventional method produces significant low-frequency (centennial-scale) signal attenuation and a positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and an underestimation of anthropogenic warming. We also compared the pollen-inferred moisture reconstruction to a four-century tree-ring-inferred moisture record from Minnesota and the Dakotas, which shows that the tree-ring reconstruction is biased towards dry conditions and records wet periods relatively poorly, giving a false impression of regional aridity. The tree-ring chronology also suggests varve

  9. Grassmannian sparse representations

    NASA Astrophysics Data System (ADS)

    Azary, Sherif; Savakis, Andreas

    2015-05-01

    We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a space that promotes smooth surfaces where points represent subspaces and the relationship between points is defined by the mapping of an orthogonal matrix. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to improve high accuracy recognition while overcoming the drawbacks of performance and dependencies on high dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.

  10. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  11. Millennial precipitation reconstruction for the Jemez Mountains, New Mexico, reveals changing drought signal

    USGS Publications Warehouse

    Touchan, Ramzi; Woodhouse, Connie A.; Meko, David M.; Allen, Craig

    2011-01-01

    Drought is a recurring phenomenon in the American Southwest. Since the frequency and severity of hydrologic droughts and other hydroclimatic events are of critical importance to the ecology and rapidly growing human population of this region, knowledge of long-term natural hydroclimatic variability is valuable for resource managers and policy-makers. An October–June precipitation reconstruction for the period AD 824–2007 was developed from multi-century tree-ring records of Pseudotsuga menziesii (Douglas-fir), Pinus strobiformis (Southwestern white pine) and Pinus ponderosa (Ponderosa pine) for the Jemez Mountains in Northern New Mexico. Calibration and verification statistics for the period 1896–2007 show a high level of skill, and account for a significant portion of the observed variance (>50%) irrespective of which period is used to develop or verify the regression model. Split-sample validation supports our use of a reconstruction model based on the full period of reliable observational data (1896–2007). A recent segment of the reconstruction (2000–2006) emerges as the driest 7-year period sensed by the trees in the entire record. That this period was only moderately dry in precipitation anomaly likely indicates accentuated stress from other factors, such as warmer temperatures. Correlation field maps of actual and reconstructed October–June total precipitation, sea surface temperatures and 500-mb geopotential heights show characteristics that are similar to those indicative of El Niño–Southern Oscillation patterns, particularly with regard to ocean and atmospheric conditions in the equatorial and north Pacific. Our 1184-year reconstruction of hydroclimatic variability provides long-term perspective on current and 20th century wet and dry events in Northern New Mexico, is useful to guide expectations of future variability, aids sustainable water management, and provides scenarios for drought planning and inputs for hydrologic models under a

  12. Non-local total-variation (NLTV) minimization combined with reweighted L1-norm for compressed sensing CT reconstruction

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Chen, Josephine; Wang, Adam; Chuang, Cynthia; Held, Mareike; Pouliot, Jean

    2016-09-01

    The compressed sensing (CS) technique has been employed to reconstruct CT/CBCT images from fewer projections, as it is designed to recover a sparse signal from highly under-sampled measurements. Since the CT image itself cannot be sparse, a variety of transforms have been developed to make the image sufficiently sparse. The total-variation (TV) transform, with the local image gradient in L1-norm, was adopted in most cases. This approach, however, which utilizes very local information and penalizes the weights at a constant rate regardless of the degree of spatial gradient, may not produce high-quality reconstructed images from noise-contaminated CT projection data. This work presents a new non-local total-variation (NLTV) operator to overcome the deficits stated above by utilizing a more global search and non-uniform weight penalization in reconstruction. To further improve the reconstructed results, a reweighted L1-norm that approximates the ideal sparse signal recovery of the L0-norm is incorporated into the NLTV reconstruction with additional iterations. This study tested the proposed reconstruction method (reweighted NLTV) on under-sampled projections of 4 objects in 5 experiments (1 digital phantom with low- and high-noise scenarios, 1 pelvic CT, and 2 CBCT images). We assessed its performance against the conventional TV, NLTV and reweighted TV transforms in terms of tissue contrast, reconstruction accuracy, and imaging resolution by comparing the contrast-to-noise ratios (CNRs), normalized root-mean-square errors (nRMSEs), and profiles of the reconstructed images. Relative to the conventional NLTV, combining the reweighted L1-norm with NLTV further enhanced the CNRs by 2-4 times and improved the reconstruction accuracy. Overall, except for the digital phantom in the low-noise simulation, our proposed algorithm produced the reconstructed image with the lowest nRMSEs and the highest CNRs for each experiment.
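
    As a sketch of the reweighting idea the paper incorporates, the following shows reweighted L1 minimization on a generic sparse recovery problem, with iterative soft-thresholding as the inner solver; the NLTV transform itself is not reproduced, and all problem sizes are illustrative assumptions.

    ```python
    # Reweighted L1 (Candes-Wakin-Boyd style) on min ||Ax - b||^2 + sum_i w_i |x_i|.
    import numpy as np

    def weighted_ista(A, b, w, n_iter=500):
        """Iterative soft-thresholding for a weighted L1 penalty."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - step * A.T @ (A @ x - b)         # gradient step on the data term
            x = np.sign(g) * np.maximum(np.abs(g) - step * w, 0.0)  # weighted shrinkage
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200); x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(60)

    x = weighted_ista(A, b, w=np.ones(200) * 0.1)    # plain L1 as initialization
    for _ in range(4):                               # reweighting outer loop
        w = 0.1 / (np.abs(x) + 1e-3)                 # small coefficients penalized more
        x = weighted_ista(A, b, w)
    print(np.flatnonzero(np.abs(x) > 0.1))           # recovers the 5-element support
    ```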

  13. Resampling Images to a Regular Grid From a Non-Regular Subset of Pixel Positions Using Frequency Selective Reconstruction.

    PubMed

    Seiler, Jurgen; Jonscher, Markus; Schöberl, Michael; Kaup, André

    2015-11-01

    Even though image signals are typically defined on a regular 2D grid, there also exist many scenarios where this is not the case and the amplitude of the image signal is available only for a non-regular subset of pixel positions. In such cases, a resampling of the image to a regular grid has to be carried out. This is necessary since almost all algorithms and technologies for processing, transmitting or displaying image signals rely on the samples being available on a regular grid. Thus, it is of great importance to reconstruct the image on this regular grid so that the reconstruction comes as close as possible to the case in which the signal was originally acquired on the regular grid. In this paper, Frequency Selective Reconstruction is introduced for solving this challenging task. This algorithm reconstructs image signals by exploiting the property that small areas of images can be represented sparsely in the Fourier domain. By further considering the basic properties of the optical transfer function of imaging systems, a sparse model of the signal is iteratively generated. In doing so, the proposed algorithm is able to achieve a very high reconstruction quality, in terms of peak signal-to-noise ratio (PSNR) and structural similarity measure as well as in terms of visual quality. The simulation results show that the proposed algorithm is able to outperform state-of-the-art reconstruction algorithms, and gains of more than 1 dB PSNR are possible.
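
    A minimal sketch of the frequency-selective idea on a single image block, assuming a greedy matching-pursuit selection of 2D DFT basis functions from the known samples; the actual algorithm additionally weights candidates by the optical transfer function, which is omitted here, and the block content and mask are synthetic.

    ```python
    # Greedy Fourier-domain sparse modeling of one block from non-regular samples.
    import numpy as np

    B = 8                                            # block size (assumption)
    rng = np.random.default_rng(2)
    block = rng.standard_normal((B, B))              # stand-in image content
    mask = rng.random((B, B)) < 0.5                  # True where samples are known

    k = np.arange(B)
    F1 = np.exp(2j * np.pi * np.outer(k, k) / B)
    # Columns are 2D complex exponentials evaluated on the full grid.
    basis = np.einsum('ac,bd->abcd', F1, F1).reshape(B * B, B * B)

    m = mask.flatten()
    recon = np.zeros(B * B, dtype=complex)
    residual = block.flatten().astype(complex)[m]
    for _ in range(12):                              # number of expansion terms
        sub = basis[m]                               # basis restricted to known samples
        scores = sub.conj().T @ residual
        norms = (np.abs(sub) ** 2).sum(axis=0)
        idx = np.argmax(np.abs(scores) ** 2 / norms) # best-matching frequency
        coef = scores[idx] / norms[idx]
        recon += coef * basis[:, idx]                # extend the model to the full grid
        residual -= coef * sub[:, idx]
    reconstructed_block = recon.real.reshape(B, B)   # resampled regular-grid block
    ```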

  14. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and
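
    For reference, a minimal sketch of the conventional SRC decision rule that mSRC generalizes: sparse-code a patch with each class dictionary and pick the class with minimum residual energy. Orthogonal matching pursuit stands in for the sparse coder, and the random dictionaries are stand-ins for learned ones.

    ```python
    # SRC decision rule: minimum residual energy across class dictionaries.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(3)
    patch_dim, n_atoms, n_classes = 64, 128, 3       # illustrative sizes (assumptions)
    dicts = [rng.standard_normal((patch_dim, n_atoms)) for _ in range(n_classes)]
    dicts = [D / np.linalg.norm(D, axis=0) for D in dicts]   # unit-norm atoms

    def classify(patch, dicts, sparsity=5):
        residuals = []
        for D in dicts:
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity,
                                            fit_intercept=False).fit(D, patch)
            residuals.append(np.sum((patch - D @ omp.coef_) ** 2))
        return int(np.argmin(residuals))             # class with minimum residual energy

    patch = dicts[1][:, :5] @ rng.standard_normal(5) # synthesize a class-1 patch
    print(classify(patch, dicts))                    # expected: 1
    ```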

  15. An infrared image super-resolution reconstruction method based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei

    2016-05-01

    Limited by the properties of the infrared detector and camera lens, infrared images often lack detail and appear indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction issue in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with methods using a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method can achieve favorable performance and good stability with low algorithm complexity.
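
    A minimal sketch of the OMP algorithm referenced above, on a generic noiseless sparse recovery problem; the matrix and sparsity level are illustrative assumptions.

    ```python
    # Orthogonal matching pursuit: greedy atom selection + least-squares refit.
    import numpy as np

    def omp(A, y, sparsity):
        residual, support = y.copy(), []
        x = np.zeros(A.shape[1])
        for _ in range(sparsity):
            # Pick the atom most correlated with the current residual.
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # Re-fit all selected atoms jointly by least squares.
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
        x[support] = coeffs
        return x

    rng = np.random.default_rng(4)
    A = rng.standard_normal((50, 200)); A /= np.linalg.norm(A, axis=0)
    x_true = np.zeros(200); x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
    x_hat = omp(A, A @ x_true, sparsity=3)
    print(np.allclose(x_hat, x_true, atol=1e-6))     # True in this noiseless setting
    ```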

  16. Neuromagnetic source reconstruction

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.; Leahy, R.M.

    1994-12-31

    In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum-norm-based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
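
    A minimal sketch of the minimum-norm estimate discussed above, assuming a random stand-in lead field; the Tikhonov-regularized form makes the implicit Gaussian prior explicit.

    ```python
    # Minimum-norm distributed source estimate: s = L^T (L L^T + lambda I)^{-1} b.
    import numpy as np

    rng = np.random.default_rng(5)
    n_sensors, n_sources = 32, 500                   # illustrative sizes (assumptions)
    L = rng.standard_normal((n_sensors, n_sources))  # stand-in lead-field matrix
    s_true = np.zeros(n_sources); s_true[[40, 41, 42]] = 1.0
    b = L @ s_true + 0.05 * rng.standard_normal(n_sensors)

    lam = 1.0                                        # regularization strength
    s_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
    # The Gaussian prior implicit in this estimate spreads energy across many
    # sources -- the nonphysical smoothness the abstract highlights.
    ```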

  17. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  18. A bootstrap algorithm for temporal signal reconstruction in the presence of noise from its fractional Fourier transformed intensity spectra

    SciTech Connect

    Tan, Cheng-Yang (Fermilab)

    2011-02-01

    A bootstrap algorithm for reconstructing the temporal signal from four of its fractional Fourier intensity spectra in the presence of noise is described. An optical arrangement is proposed which realises the bootstrap method for the measurement of ultrashort laser pulses. The measurement of short laser pulses which are less than 1 ps is an ongoing challenge in optical physics. One reason is that no oscilloscope exists today which can directly measure the time structure of these pulses, and so it becomes necessary to invent other techniques which indirectly provide the necessary information for temporal pulse reconstruction. One method called FROG (frequency resolved optical gating) has been in use since 1991 and is one of the popular methods for recovering these types of short pulses. The idea behind FROG is the use of multiple time-correlated pulse measurements in the frequency domain for the reconstruction. Multiple data sets are required because only intensity information is recorded and not phase, and thus by collecting multiple data sets there are enough redundant measurements to yield the original time structure, but not necessarily uniquely (or even up to an arbitrary constant phase offset). The objective of this paper is to describe another method which is simpler than FROG. Instead of collecting many auto-correlated data sets, only two spectral intensity measurements of the temporal signal are needed in the absence of noise. The first can be from the intensity components of its usual Fourier transform and the second from its FrFT (fractional Fourier transform). In the presence of noise, a minimum of four measurements are required with the same FrFT order but with two different apertures. Armed with these two or four measurements, a unique solution up to a constant phase offset can be constructed.

  19. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider the analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address the inherent methodological differences of the sparse sampling context in order to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Simulations show that the proposed method is able to discover dominant modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  20. Model-based multirate Kalman filtering approach for optimal two-dimensional signal reconstruction from noisy subband systems

    NASA Astrophysics Data System (ADS)

    Ni, Jiang Q.; Ho, Ka L.; Tse, Kai W.

    1998-08-01

    Conventional synthesis filters in subband systems lose their optimality when additive noise (due, for example, to signal quantization) disturbs the subband components. The multichannel representation of subband signals is combined with a statistical model of the input signal to derive a multirate state-space model for the filter bank system with additive subband noise. Thus, the signal reconstruction problem in subband systems can be formulated as optimal state estimation in the equivalent multirate state-space model. Incorporating the vector dynamic model, a 2D multirate state-space model suitable for 2D Kalman filtering is developed. The performance of the proposed 2D multirate Kalman filter can be further improved through adaptive segmentation of the object plane. The object plane is partitioned into disjoint regions based on their spatial activity, and different vector dynamical models are used to characterize the nonstationary object-plane distributions. Finally, computer simulations with the proposed 2D multirate Kalman filter give favorable results.
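
    For reference, a minimal sketch of the Kalman predict/update recursion underlying the approach, on a scalar toy problem; the paper's multirate 2D state-space structure is not reproduced here.

    ```python
    # One step of the Kalman filter: predict with the state model, correct with data.
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        x_pred = F @ x                               # state prediction
        P_pred = F @ P @ F.T + Q                     # covariance prediction
        S = H @ P_pred @ H.T + R                     # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)        # measurement update
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Scalar toy: track a constant level from noisy samples.
    F = H = np.eye(1); Q = np.array([[1e-4]]); R = np.array([[0.25]])
    x, P = np.zeros(1), np.eye(1)
    rng = np.random.default_rng(6)
    for _ in range(100):
        z = np.array([1.0]) + 0.5 * rng.standard_normal(1)
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print(x)                                         # approaches the true value 1.0
    ```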

  1. Towards the low-dose characterization of beam sensitive nanostructures via implementation of sparse image acquisition in scanning transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan

    2017-04-01

    Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe with a high current density causes electron-beam damage, including radiolysis and knock-on damage, when the focused probe is exposed to electron-beam-sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities of operating STEM under a sparse acquisition scheme to reduce the electron dose have opened up. In this paper, we report our recent approach to implement sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e. electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen are scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for down to 5% sampling show consistency with the full STEM image acquired by the conventional scanning method. However, practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder achieving the full potential of sparse-acquisition STEM in realizing the low-dose imaging conditions required for the investigation of beam-sensitive materials
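
    A minimal sketch of the sparse-acquisition idea: scan a random 5% of probe positions and inpaint the rest. Simple scattered-data interpolation stands in for the MBIR inpainting algorithm, and the synthetic image is an assumption for illustration.

    ```python
    # Random 5% sparse scan followed by inpainting to the full pixel grid.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(7)
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    image = np.sin(xx / 5.0) * np.cos(yy / 7.0)      # stand-in specimen image

    mask = rng.random((h, w)) < 0.05                 # 5% random probe positions
    known = np.column_stack([yy[mask], xx[mask]])
    recon = griddata(known, image[mask], (yy, xx), method='cubic')
    # Cubic interpolation is undefined outside the convex hull; patch with nearest.
    filled = griddata(known, image[mask], (yy, xx), method='nearest')
    recon = np.where(np.isnan(recon), filled, recon)
    print(np.sqrt(np.mean((recon - image) ** 2)))    # small RMS error on smooth content
    ```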

  2. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.
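
    A minimal sketch of adaptive least-squares combining in the spirit described above, using a complex LMS update to learn feed weights against a reference signal; the feed gains, noise levels and reference are illustrative assumptions.

    ```python
    # Complex LMS combining of multiple feed outputs toward a reference signal.
    import numpy as np

    rng = np.random.default_rng(8)
    n_feeds, n_samples = 7, 2000
    s = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n_samples))   # reference signal
    gains = rng.standard_normal(n_feeds) + 1j * rng.standard_normal(n_feeds)
    noise = 0.1 * (rng.standard_normal((n_feeds, n_samples))
                   + 1j * rng.standard_normal((n_feeds, n_samples)))
    x = np.outer(gains, s) + noise                   # distorted multi-feed outputs

    w = np.zeros(n_feeds, dtype=complex)
    mu = 0.01                                        # LMS step size
    errors = []
    for k in range(n_samples):
        y = np.vdot(w, x[:, k])                      # combined output w^H x
        e = s[k] - y                                 # error vs. reference
        w += mu * x[:, k] * np.conj(e)               # stochastic-gradient update
        errors.append(abs(e))
    print(np.mean(errors[-100:]))                    # small residual after convergence
    ```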

  3. Sparse inpainting and isotropy

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Marinucci, Domenico; McEwen, Jason D.; Peiris, Hiranya V.; Wandelt, Benjamin D.; Cammarota, Valentina

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  4. Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals

    PubMed Central

    Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo

    2012-01-01

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized
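
    A minimal sketch of pairwise transfer entropy for binarized activity, the statistic underlying the method; the paper's improvements (extended history terms, conditioning on global network activity) are omitted here, and the time series are synthetic.

    ```python
    # Pairwise transfer entropy TE(X -> Y) for binary time series, in bits.
    import numpy as np

    def transfer_entropy(x, y, lag=1):
        past_y, past_x, future_y = y[:-lag], x[:-lag], y[lag:]
        te = 0.0
        for fy in (0, 1):
            for py in (0, 1):
                for px in (0, 1):
                    sel = (past_y == py) & (past_x == px)
                    p_joint = np.mean(sel & (future_y == fy))
                    if p_joint == 0:
                        continue
                    p_full = p_joint / np.mean(sel)              # p(fy | py, px)
                    p_marg = (np.mean((past_y == py) & (future_y == fy))
                              / np.mean(past_y == py))           # p(fy | py)
                    te += p_joint * np.log2(p_full / p_marg)
        return te

    rng = np.random.default_rng(9)
    x = (rng.random(10000) < 0.3).astype(int)
    y = np.roll(x, 1)                                # y follows x with lag 1
    y[rng.random(10000) < 0.1] ^= 1                  # corrupt 10% of samples
    print(transfer_entropy(x, y), transfer_entropy(y, x))  # first is clearly larger
    ```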

  5. Compressive measurement and feature reconstruction method for autonomous star trackers

    NASA Astrophysics Data System (ADS)

    Yin, Hang; Yan, Ye; Song, Xin; Yang, Yueneng

    2016-12-01

    Compressive sensing (CS) theory provides a framework for signal reconstruction using a sub-Nyquist sampling rate. CS theory enables the reconstruction of a signal that is sparse or compressible from a small set of measurements. Current CS applications in the optical field mainly focus on reconstructing the original image using optimization algorithms and conduct data processing on the full-dimensional image, which cannot reduce the data processing rate. This study builds on the spatial sparsity of star images and proposes a new compressive measurement and reconstruction method that extracts the star feature from compressive data and directly reconstructs it in the original image domain for attitude determination. A pixel-based folding model that preserves the star feature and enables feature reconstruction is presented to encode the original pixel locations into the superposed space. A feature reconstruction method is then proposed to extract the star centroid by compensating distortions and to decode the centroid without reconstructing the whole image, which reduces the sampling rate and the data processing rate at the same time. The statistical results investigate the proportions of star distortion and false matching, which verifies the correctness of the proposed method. The results also verify the robustness of the proposed method to a great extent and demonstrate that its performance can be improved by sufficient measurements in noisy cases. Moreover, results on real star images ensure correct star centroid estimation for attitude determination and confirm the feasibility of applying the proposed method in a star tracker.

  6. Evolutionary Metric-Learning-Based Recognition Algorithm for Online Isolated Persian/Arabic Characters, Reconstructed Using Inertial Pen Signals.

    PubMed

    Sepahvand, Majid; Abdali-Mohammadi, Fardin; Mardukhi, Farhad

    2016-12-13

    The development of sensors with microelectromechanical systems technology has expedited the emergence of new tools for human-computer interaction, such as inertial pens. These pens, which are used as writing tools, do not depend on specific embedded hardware and are thus inexpensive. Most of the available inertial pen character recognition approaches use low-level features of the inertial signals. This paper introduces a Persian/Arabic handwriting character recognition system for inertial-sensor-equipped pens. First, the motion trajectory of the inertial pen is reconstructed to estimate the position signals by using the theory of inertial navigation systems. The position signals are then used to extract high-level geometrical features. A new metric learning technique is then adopted to enhance the accuracy of character classification. To this end, a characteristic function is calculated for each character using a genetic programming algorithm. These functions form a metric kernel classifying all the characters. The experimental results show that the performance of the proposed method is superior to that of one of the state-of-the-art works in terms of recognizing Persian/Arabic handwriting characters.

  7. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  8. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  9. Group Sparse Additive Models

    PubMed Central

    Yin, Junming; Chen, Xi; Xing, Eric P.

    2016-01-01

    We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.
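
    A minimal sketch of the group-level thresholding that an ℓ1/ℓ2 penalty induces, shown in the parametric group-lasso setting; GroupSpAM lifts this idea to Hilbert spaces of functions, which is not reproduced here.

    ```python
    # Proximal operator of the l1/l2 (group lasso) penalty: block soft-thresholding.
    import numpy as np

    def group_soft_threshold(x, groups, lam):
        out = np.zeros_like(x)
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > lam:
                out[g] = (1 - lam / norm) * x[g]     # shrink the surviving group
            # else: the whole group is zeroed -- functional sparsity at group level
        return out

    x = np.array([0.1, -0.2, 3.0, 2.5, 0.05, 0.02])
    groups = [[0, 1], [2, 3], [4, 5]]
    print(group_soft_threshold(x, groups, lam=0.5))  # only the middle group survives
    ```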

  10. Annual Temperature Reconstruction by Signal Decomposition and Synthesis from Multi-Proxies in Xinjiang, China, from 1850 to 2001.

    PubMed

    Zheng, Jingyun; Liu, Yang; Hao, Zhixin

    2015-01-01

    We reconstructed the annual temperature anomaly series in Xinjiang during 1850-2001 based on three kinds of proxies, including 17 tree-ring width chronologies, one tree-ring δ13C series and two δ18O series of ice cores, and instrumental observation data. The low- and high-frequency components of the raw temperature proxy data were obtained with a fast Fourier transform filter with a window size of 20 years, and were used to build a relationship between temperature and the proxy data that explained a high fraction of the variance for the reconstruction. The results showed that for 1850-2001, the temperature during most periods prior to the 1920s was lower than the mean temperature of the 20th century. Remarkable warming occurred in the 20th century at a rate of 0.85°C/100a, which was higher than the rate over the past 150 years. Two cold periods occurred before the 1870s and around the 1910s, and a relatively warm interval occurred around the 1940s. In addition, the temperature series showed a warming hiatus of approximately 20 years around the 1970s, and a rapid increase since the 1980s.

  11. Annual Temperature Reconstruction by Signal Decomposition and Synthesis from Multi-Proxies in Xinjiang, China, from 1850 to 2001

    PubMed Central

    Zheng, Jingyun; Liu, Yang; Hao, Zhixin

    2015-01-01

    We reconstructed the annual temperature anomaly series in Xinjiang during 1850–2001 based on three kinds of proxies, including 17 tree-ring width chronologies, one tree-ring δ13C series and two δ18O series of ice cores, and instrumental observation data. The low- and high-frequency components of the raw temperature proxy data were obtained with a fast Fourier transform filter with a window size of 20 years, and were used to build a relationship between temperature and the proxy data that explained a high fraction of the variance for the reconstruction. The results showed that for 1850–2001, the temperature during most periods prior to the 1920s was lower than the mean temperature of the 20th century. Remarkable warming occurred in the 20th century at a rate of 0.85°C/100a, which was higher than the rate over the past 150 years. Two cold periods occurred before the 1870s and around the 1910s, and a relatively warm interval occurred around the 1940s. In addition, the temperature series showed a warming hiatus of approximately 20 years around the 1970s, and a rapid increase since the 1980s. PMID:26632814
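
    A minimal sketch of splitting a proxy series into low- and high-frequency components with an FFT filter at a 20-year cutoff, as described above; the series is synthetic and annual sampling is assumed.

    ```python
    # FFT-based decomposition of an annual series at a 20-year period cutoff.
    import numpy as np

    rng = np.random.default_rng(10)
    years = np.arange(1850, 2002)
    series = (0.5 * np.sin(2 * np.pi * (years - 1850) / 60)   # slow component
              + 0.3 * rng.standard_normal(len(years)))        # interannual noise

    spectrum = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=1.0)      # cycles per year
    low_pass = spectrum.copy()
    low_pass[freqs > 1 / 20] = 0                     # keep periods longer than 20 years
    low_frequency = np.fft.irfft(low_pass, n=len(series))
    high_frequency = series - low_frequency          # residual high-frequency signal
    ```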

  12. Sparse and accurate high resolution SAR imaging

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian

    2012-05-01

    We investigate the use of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent sidelobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the size of the problems is quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. Via the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high sidelobes of the FFT. However, with our approach, clear edges, boundaries, and textures of the vehicles are obtained.
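
    A minimal sketch of the IAA recursion on a 1D spectral-estimation toy problem; the SAR case applies the same update with 2D steering vectors, and the grid sizes here are illustrative assumptions.

    ```python
    # Iterative Adaptive Approach: refine grid powers via weighted least squares.
    import numpy as np

    rng = np.random.default_rng(11)
    N, K = 32, 128                                   # snapshot length, grid size
    grid = np.linspace(0, 1, K, endpoint=False)
    A = np.exp(2j * np.pi * np.outer(np.arange(N), grid))  # steering matrix
    y = (A[:, 20] * 1.0 + A[:, 60] * 0.5
         + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

    p = np.abs(A.conj().T @ y / N) ** 2              # periodogram initialization
    for _ in range(15):
        R = (A * p) @ A.conj().T                     # covariance model A diag(p) A^H
        Ri_y = np.linalg.solve(R, y)
        Ri_A = np.linalg.solve(R, A)
        num = A.conj().T @ Ri_y                      # a_k^H R^{-1} y
        den = np.einsum('nk,nk->k', A.conj(), Ri_A)  # a_k^H R^{-1} a_k
        p = np.abs(num / den) ** 2                   # updated power estimates
    print(np.argsort(p)[-2:])                        # two largest powers: indices 60, 20
    ```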

  13. Reconstruction of cellular signal transduction networks using perturbation assays and linear programming.

    PubMed

    Knapp, Bettina; Kaderali, Lars

    2013-01-01

    Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance in particular on larger networks over state-of-the-art methods. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling the cell cycle progression.
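
    A minimal sketch of casting sparse network inference as a linear program, in the spirit of the approach above: the L1 objective is split into positive and negative parts, and the fit to perturbation responses is enforced by inequality constraints. The paper's exact formulation is not reproduced; sizes and data are illustrative.

    ```python
    # Row-wise LP: minimize ||w||_1 subject to |P^T w - r| <= eps.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(12)
    n_genes, n_experiments = 8, 20
    W_true = np.zeros((n_genes, n_genes)); W_true[0, 1] = 1.5; W_true[2, 0] = -1.0
    P = rng.standard_normal((n_genes, n_experiments))    # perturbation patterns
    R = W_true @ P + 0.01 * rng.standard_normal((n_genes, n_experiments))

    eps, W_hat = 0.05, np.zeros_like(W_true)
    for i in range(n_genes):
        # Variables [w+, w-] with w = w+ - w-; objective sum(w+) + sum(w-).
        c = np.ones(2 * n_genes)
        A_ub = np.vstack([np.hstack([P.T, -P.T]),        #  P^T w <= r + eps
                          np.hstack([-P.T, P.T])])       # -P^T w <= -r + eps
        b_ub = np.concatenate([R[i] + eps, -R[i] + eps])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        W_hat[i] = res.x[:n_genes] - res.x[n_genes:]
    print(np.argwhere(np.abs(W_hat) > 0.1))              # recovers (0,1) and (2,0)
    ```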

  14. Large-region acoustic source mapping using a movable array and sparse covariance fitting.

    PubMed

    Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2017-01-01

    Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
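
    A minimal sketch of vectorized covariance fitting: vec(R) is linear in the source powers through a Khatri-Rao product of steering matrices, so powers can be recovered by nonnegative least squares. The sparsity constraint and movable-array geometry of the paper are omitted; the array and grid are illustrative assumptions.

    ```python
    # Fit vec(R) = KR @ p + sigma^2 vec(I) by nonnegative least squares.
    import numpy as np
    from scipy.linalg import khatri_rao
    from scipy.optimize import nnls

    rng = np.random.default_rng(13)
    M, K = 8, 40                                     # sensors, candidate locations
    A = np.exp(2j * np.pi * np.outer(np.arange(M),
                                     np.linspace(0, 1, K, endpoint=False)))
    p_true = np.zeros(K); p_true[[5, 22]] = [1.0, 0.6]
    sigma2 = 0.1
    R = (A * p_true) @ A.conj().T + sigma2 * np.eye(M)   # model covariance

    # With NumPy's row-major flatten, column k of khatri_rao(A, conj(A)) equals
    # vec(a_k a_k^H); append a column for the sigma^2 I noise term.
    KR = khatri_rao(A, A.conj())
    Phi = np.hstack([KR, np.eye(M).flatten()[:, None]])

    # Stack real and imaginary parts so nnls can handle the complex system.
    Phi_ri = np.vstack([Phi.real, Phi.imag])
    r_ri = np.concatenate([R.flatten().real, R.flatten().imag])
    p_hat, _ = nnls(Phi_ri, r_ri)
    print(np.flatnonzero(p_hat[:-1] > 0.1))          # recovers sources 5 and 22
    ```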

  15. Sparse representations for online-learning-based hyperspectral image compression.

    PubMed

    Ülkü, İrem; Töreyin, Behçet Uğur

    2015-10-10

    Sparse models provide data representations in the fewest possible number of nonzero elements. This inherent characteristic enables sparse models to be utilized for data compression purposes. Hyperspectral data is large in size. In this paper, a framework for sparsity-based hyperspectral image compression methods using online learning is proposed. There are various sparse optimization models. A comparative analysis of sparse representations in terms of their hyperspectral image compression performance is presented. For this purpose, online-learning-based hyperspectral image compression methods are proposed using four different sparse representations. Results indicate that, independent of the sparsity models, online-learning-based hyperspectral data compression schemes yield the best compression performances for data rates of 0.1 and 0.3 bits per sample, compared to other state-of-the-art hyperspectral data compression techniques, in terms of image quality measured as average peak signal-to-noise ratio.

  16. Marker-less reconstruction of dense 4-D surface motion fields using active laser triangulation for respiratory motion management.

    PubMed

    Bauer, Sebastian; Berkels, Benjamin; Ettl, Svenja; Arold, Oliver; Hornegger, Joachim; Rumpf, Martin

    2012-01-01

    To manage respiratory motion in image-guided interventions, a novel sparse-to-dense registration approach is presented. We apply an emerging laser-based active triangulation (AT) sensor that delivers sparse but highly accurate 3-D measurements in real time. These sparse position measurements are registered with a dense reference surface extracted from planning data. Thereby, a dense displacement field is reconstructed which describes the 4-D deformation of the complete patient body surface and recovers a multi-dimensional respiratory signal for application in respiratory motion management. The method is validated on real data from an AT prototype and on synthetic data sampled from dense surface scans acquired with a structured-light scanner. In a study on 16 subjects, the proposed algorithm achieved a mean reconstruction accuracy of ±0.22 mm with respect to ground-truth data.

  17. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in prior work. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.

  18. Input reconstruction for networked control systems subject to deception attacks and data losses on control signals

    NASA Astrophysics Data System (ADS)

    Keller, J. Y.; Chabir, K.; Sauter, D.

    2016-03-01

    State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.

  19. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

  20. Time-frequency scale decomposition of tectonic tremor signals for space-time reconstruction of tectonic tremor sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Satriano, C.; Vilotte, J. P.; Bernard, P.; Obara, K.

    2015-12-01

    Seismic radiation associated with transient deformations along faults and subduction interfaces encompasses a variety of events, i.e., tectonic tremors, low-frequency earthquakes (LFEs), very low-frequency earthquakes (VLFs), and slow-slip events (SSEs), with a wide range of seismic moments and characteristic durations. Characterizing in space and time the complex sources of these slow earthquakes, and their relationship with background seismicity and large-earthquake generation, is of great importance for understanding the physics and mechanics of the processes of active deformation along plate interfaces. We present here first developments towards a methodology for: (1) extracting the different frequency and scale components of the observed tectonic tremor signal, using advanced time-frequency and time-scale signal representations such as Gabor transform schemes based on, e.g., Wilson bases or Modified Discrete Cosine Transform (MDCT) bases; (2) reconstructing their corresponding potential sources in space and time, using the array method of Poiata et al. (2015). The methodology is assessed using a dataset of tectonic tremor episodes from Shikoku, Japan, recorded by the Hi-net seismic network operated by NIED. We illustrate its performance and potential in providing activity maps - associated with different scale components of tectonic tremors - that can be analyzed statistically to improve our understanding of tremor sources and scaling, as well as their relation with the background seismicity.

  1. TASMANIAN Sparse Grids Module

    SciTech Connect

    Munster, Drayton; Stoyanov, Miroslav

    2013-09-20

    Sparse grids are the family of methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command-line interface via text files, and a MATLAB interface via the command-line tool.

  2. Blind spectral unmixing based on sparse nonnegative matrix factorization.

    PubMed

    Yang, Zuyuan; Zhou, Guoxu; Xie, Shengli; Ding, Shuxue; Yang, Jun-Mei; Zhang, Jun

    2011-04-01

    Nonnegative matrix factorization (NMF) is a widely used method for blind spectral unmixing (SU), which aims at obtaining the endmembers and corresponding fractional abundances, knowing only the collected mixing spectral data. It is noted that the abundances may be sparse (i.e., the endmembers may have sparse distributions) and that sparse NMF tends to lead to a unique result, so it is intuitive and meaningful to constrain NMF with sparseness for solving SU. However, due to the abundance sum-to-one constraint in SU, the traditional sparseness measured by the L0/L1-norm is no longer an effective constraint. A novel measure (termed the S-measure) of sparseness using higher-order norms of the signal vector is proposed in this paper. It features a clear physical significance. By using the S-measure constraint (SMC), a gradient-based sparse NMF algorithm (termed NMF-SMC) is proposed for solving the SU problem, where the learning rate is adaptively selected, and the endmembers and abundances are simultaneously estimated. In the proposed NMF-SMC, there is no pure-pixel index assumption and no need to know the exact sparseness degree of the abundance a priori. Moreover, it does not require the preprocessing of dimension reduction, in which some useful information may be lost. Experiments based on synthetic mixtures and real-world images collected by AVIRIS and HYDICE sensors are performed to evaluate the validity of the proposed method.
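
    For reference, a minimal sketch of baseline multiplicative-update NMF for spectral unmixing, V ≈ WH; the S-measure sparseness constraint and adaptive learning rate that define NMF-SMC are not reproduced here, and the synthetic data are illustrative.

    ```python
    # Lee-Seung multiplicative updates for V ~ W H (endmembers x abundances).
    import numpy as np

    rng = np.random.default_rng(14)
    n_bands, n_pixels, n_endmembers = 50, 400, 3
    W_true = rng.random((n_bands, n_endmembers))                 # endmember spectra
    H_true = rng.dirichlet(np.ones(n_endmembers) * 0.3,
                           size=n_pixels).T                      # sparse, sum-to-one
    V = W_true @ H_true

    W = rng.random((n_bands, n_endmembers))
    H = rng.random((n_endmembers, n_pixels))
    for _ in range(500):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)       # abundance update
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)       # endmember update
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
    ```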

  3. A non-iterative method for the electrical impedance tomography based on joint sparse recovery

    NASA Astrophysics Data System (ADS)

    Lee, Ok Kyun; Kang, Hyeonbae; Ye, Jong Chul; Lim, Mikyoung

    2015-07-01

    The purpose of this paper is to propose a non-iterative method for the inverse conductivity problem of recovering multiple small anomalies from the boundary measurements. When small anomalies are buried in a conducting object, the electric potential values inside the object can be expressed by integrals of densities with a common sparse support on the location of anomalies. Based on this integral expression, we formulate the reconstruction problem of small anomalies as a joint sparse recovery and present an efficient non-iterative recovery algorithm of small anomalies. Furthermore, we also provide a slightly modified algorithm to reconstruct an extended anomaly. We validate the effectiveness of the proposed algorithm over the linearized method and the multiple signal classification algorithm by numerical simulations. This work is supported by the Korean Ministry of Education, Sciences and Technology through NRF grant No. NRF-2010-0017532 (to H K), the Korean Ministry of Science, ICT & Future Planning; through NRF grant No. NRF-2013R1A1A3012931 (to M L), the R&D Convergence Program of NST (National Research Council of Science & Technology) of Republic of Korea (Grant CAP-13-3-KERI) (to O K L and J C Y).

  4. Reconstruction of hyperspectral CHRIS/PROBA signal by the Earth Observation Land Data Assimilation System (EO-LDAS)

    NASA Astrophysics Data System (ADS)

    Chernetskiy, Maxim; Gobron, Nadine; Gomez-Dans, Jose; Lewis, Philip

    EO-LDAS is a system that allows one to interpret spectral observations of the land surface to provide an optimal estimate of the state of the Earth. It allows a consistent combination of observations from different sensors despite differences in spatial and spectral resolution and acquisition frequency. The system is based on a variational data assimilation (DA) scheme, and uses physically-based radiative transfer models (RTMs) to map from state to observation. In addition, the system takes into account observational uncertainty, prior information and a model of the spatial/temporal evolution of the state. Such an approach is very useful for future satellite constellations as well as for the reanalysis of historical data. The main purpose of EO-LDAS is the retrieval of biophysical land variables. However, once the state is known after inverting some observations, the system can be used to forward-model and predict other observations. The main aim of this contribution is the validation of EO-LDAS by reconstructing the CHRIS/PROBA hyperspectral signal on the basis of MODIS 500 m, Landsat ETM+ and MISR full-resolution data over the Barrax site during the SPARC 2004 campaign. First, multispectral data were inverted by EO-LDAS in order to obtain a set of biophysical parameters, which were then used in forward mode to obtain full spectra over various fields covering the Barrax area. The reconstruction was performed using the same view/sun geometry as the initial PROBA scene. Single sets of spectra from MODIS, ETM+ and MISR were used, as well as the combinations MODIS-ETM+ and MISR-ETM+. In addition, uncertainties of the output biophysical land parameters were considered for understanding the real accuracy and applicability of combinations of different sensors. Finally, spatial and temporal regularisation models were applied to add extra constraints to the inversion. The proposed contribution demonstrates the capabilities of EO-LDAS for the reconstruction of hyperspectral bands on the basis of different

  5. Digital DC-Reconstruction of AC-Coupled Electrophysiological Signals with a Single Inverting Filter.

    PubMed

    Abächerli, Roger; Isaksen, Jonas; Schmid, Ramun; Leber, Remo; Schmid, Hans-Jakob; Generali, Gianluca

    2016-01-01

    Since the introduction of digital electrocardiographs, high-pass filters have been necessary for successful analog-to-digital conversion with a reasonable amplitude resolution. On the other hand, such high-pass filters may distort the diagnostically significant ST-segment of the ECG, which can result in a misleading diagnosis. We present an inverting filter that successfully undoes the effects of a 0.05 Hz single-pole high-pass filter. The inverting filter has been tested on more than 1600 clinical ECGs of one-minute duration and produces a negligible mean RMS error of 3.1×10⁻⁸ LSB. Alternative, less strong inverting filters have also been tested, as have different applications of the filters with respect to rounding of the signals after filtering. A design scheme for the alternative inverting filters has been suggested, based on the maximum strength of the filter. With the use of the suggested filters, it is possible to recover the original DC-coupled ECGs from AC-coupled ECGs, at least when a 0.05 Hz first-order digital single-pole high-pass filter is used for the AC-coupling.
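
    A minimal sketch of digitally undoing a first-order AC-coupling filter: a single-pole high-pass has transfer function H(z) = a(1 - z⁻¹)/(1 - a z⁻¹), so filtering with its reciprocal restores the DC-coupled signal. The 500 Hz sampling rate and the pole mapping are assumptions for illustration, not the paper's exact design.

    ```python
    # Invert a single-pole high-pass by swapping numerator and denominator.
    import numpy as np
    from scipy.signal import lfilter

    fs, fc = 500.0, 0.05                             # sampling rate, cutoff (assumed)
    a = 1.0 / (1.0 + 2 * np.pi * fc / fs)            # pole location approximation

    b_hp, a_hp = [a, -a], [1.0, -a]                  # the AC-coupling filter
    b_inv, a_inv = a_hp, b_hp                        # its exact digital inverse

    t = np.arange(int(60 * fs)) / fs
    ecg_like = 0.2 + np.sin(2 * np.pi * 1.2 * t)     # toy signal with a DC offset
    ac_coupled = lfilter(b_hp, a_hp, ecg_like)       # DC offset is filtered away
    restored = lfilter(b_inv, a_inv, ac_coupled)     # DC-coupled signal recovered
    print(np.max(np.abs(restored - ecg_like)))       # tiny numerical residual
    ```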

  6. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method which takes the spatial neighborhood information of the image patch and the computational burden into consideration is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  7. Fast Sparse Level Sets on Graphics Hardware.

    PubMed

    Jalba, Andrei C; van der Laan, Wladimir J; Roerdink, Jos B T M

    2013-01-01

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive simulations has been limited due to the high computational demands involved. In this paper, we address this computational challenge by leveraging the increased computing power of graphics processors, to achieve fast simulations based on level sets. Our efficient, sparse GPU level-set method is substantially faster than other state-of-the-art, parallel approaches on both CPU and GPU hardware. We further investigate its performance through a method for surface reconstruction, based on GPU level sets. Our novel multiresolution method for surface reconstruction from unorganized point clouds compares favorably with recent, existing techniques and other parallel implementations. Finally, we point out that both level-set computations and rendering of level-set surfaces can be performed at interactive rates, even on large volumetric grids. Therefore, many applications based on level sets can benefit from our sparse level-set method.

  8. Reconstructing direct and indirect interactions in networked public goods game

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-01

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system.
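
    A minimal sketch of the first step described above: recover a node's interaction strengths by solving a sparse signal reconstruction problem with the Lasso; the payoff-based design matrix of the public goods game is replaced here by generic linear measurements, and all sizes are illustrative.

    ```python
    # Sparse recovery of one node's couplings from under-determined measurements.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(15)
    n_nodes, n_measurements = 50, 30                 # fewer measurements than unknowns
    coupling = np.zeros(n_nodes); coupling[[4, 17, 33]] = [0.8, 0.5, 0.3]

    X = rng.standard_normal((n_measurements, n_nodes))  # stand-in measurement matrix
    y = X @ coupling + 0.01 * rng.standard_normal(n_measurements)

    lasso = Lasso(alpha=0.01).fit(X, y)
    print(np.flatnonzero(np.abs(lasso.coef_) > 0.05))   # recovers nodes 4, 17, 33
    ```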

  9. Blind source separation by sparse decomposition

    NASA Astrophysics Data System (ADS)

    Zibulevsky, Michael; Pearlmutter, Barak A.

    2000-04-01

    The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.

  10. Reconstructing the Fastest Chemical and Electrical Signalling Responses to Microgravity Stress in Plants

    NASA Astrophysics Data System (ADS)

    Mugnai, Sergio; Pandolfi, Camilla; Masi, Elisa; Azzarello, Elisa; Voigt, Boris; Baluska, Frantisek; Volkmann, Dieter; Mancuso, Stefano

    Plants are particularly suited to study the response of a living organism to gravity, as they are extremely sensitive to its changes. Gravity perception is a well-studied phenomenon, but the chain of events related to signal transduction and transmission still suffers from a lack of information. Preliminary results obtained in previous parabolic flight campaigns (PFCs) by our lab show that microgravity (<0.05 g), but not hypergravity (1.8 g), repeatedly induced immediate (less than 1.5 s) oxygen bursts when maize roots experienced the loss of gravity forces. Interestingly, these changes were located exclusively in the apex, but not in the mature zone of the root. Ground experiments have also revealed the onset of strong and rapid electrical responses in maize root apices subjected to stress, which led to the hypothesis of an intrinsic capacity of the root apex to generate functional networks. Experiments during the 49th and 51st ESA PFCs aimed 1) to find out whether the different consumption of oxygen at root level recorded in the previous PFCs can lead to subsequent local emissions of ROS in living root apices; 2) to study the spatio-temporal pattern of the neuronal network generated by roots under gravity-changing conditions; 3) to evaluate the onset of synchronization events under gravity-changing conditions. Concerning the oxygen bursts, results indicate that they probably implicate a strong generation of ROS (such as nitric oxide) matching exactly the microgravity events, suggesting that the sensing mechanism is not only related to a general mechanical stress (i.e. tensegrity model, present also during hypergravity), but can be specific to the microgravity event. To further investigate this hypothesis, we studied the distributed/synchronized electrical activity of cells by the use of a Multi-Electrode Array (MEA). The main results obtained are: the root transition zone (TZ) showed a higher spike-rate activity compared to the mature zone (MZ). Also, microgravity appeared to

  11. Monte Carlo simulation of an optical coherence Doppler tomograph signal: the effect of the concentration of particles in a flow on the reconstructed velocity profile

    SciTech Connect

    Bykov, A V; Kirillin, M Yu; Priezzhev, A V

    2005-02-28

    Model signals of an optical coherence Doppler tomograph (OCDT) are obtained by the Monte Carlo method from a flow of a light-scattering suspension of lipid vesicles (intralipid) at concentrations from 0.7% to 1.5% with an a priori specified parabolic velocity profile. The velocity profile parameters reconstructed from the OCDT signal and scattering orders of the photons contributing to the signal are studied as functions of the suspension concentration. It is shown that the maximum of the reconstructed velocity profile at high concentrations shifts with respect to the symmetry axis of the flow and its value decreases due to a greater contribution from multiply scattered photons.

  12. Percolation on Sparse Networks

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.; Zdeborová, Lenka

    2014-11-01

    We study percolation on networks, which is used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and demonstrate how the resulting equations can be used to calculate, among other things, the size of the percolating cluster and the average cluster size. The calculations are exact for sparse networks when the number of short loops in the network is small, but even on networks with many short loops we find them to be highly accurate when compared with direct numerical simulations. By considering the fixed points of the message passing process, we also show that the percolation threshold on a network with few loops is given by the inverse of the leading eigenvalue of the so-called nonbacktracking matrix.
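
    The closing claim maps directly to a small computation. A minimal sketch (illustrative random graph; dense eigendecomposition, so only sensible for small networks): build the nonbacktracking (Hashimoto) matrix over directed edges and take the inverse of its leading eigenvalue as the percolation threshold:

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(200, 0.02, seed=42)      # illustrative sparse graph
edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(edges)}

B = np.zeros((len(edges), len(edges)))            # nonbacktracking matrix
for (i, j), a in index.items():
    for k in G.neighbors(j):
        if k != i:                                # walk may not hop straight back
            B[a, index[(j, k)]] = 1.0

lam = np.max(np.linalg.eigvals(B).real)           # leading eigenvalue (real, by Perron-Frobenius)
print("estimated percolation threshold: %.3f" % (1.0 / lam))
```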

  13. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that were previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
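
    A minimal sketch of the architecture's read/write cycle (all parameters are illustrative; Kanerva's analysis fixes them more carefully): random hard addresses, activation of locations within a Hamming radius, counter-based storage, and majority-vote readout:

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, radius = 256, 1000, 112        # word length, hard locations, Hamming radius

addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard addresses
counters = np.zeros((M, N), dtype=int)        # storage counters per location

def activated(addr):
    return np.sum(addresses != addr, axis=1) <= radius   # within Hamming radius

def write(addr, word):
    counters[activated(addr)] += np.where(word == 1, 1, -1)

def read(addr):
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)  # majority vote

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                               # autoassociative store
noisy = pattern.copy()
noisy[rng.choice(N, 20, replace=False)] ^= 1          # corrupt 20 of 256 bits
print("bits correctly recalled:", int(np.sum(read(noisy) == pattern)), "/", N)
```

    The partial-match retrieval described in the abstract corresponds to reading from the noisy cue: the overlapping activated locations vote the stored pattern back out.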

  14. Precession missile feature extraction using sparse component analysis of radar measurements

    NASA Astrophysics Data System (ADS)

    Liu, Lihua; Du, Xiaoyong; Ghogho, Mounir; Hu, Weidong; McLernon, Des

    2012-12-01

    According to the working mode of the ballistic missile warning radar (BMWR), the radar return from the BMWR is usually sparse. To recognize and identify the warhead, it is necessary to extract the precession frequency and the locations of the scattering centers of the missile. This article first analyzes the radar signal model of the precessing conical missile during flight and develops a sparse dictionary which is parameterized by the unknown precession frequency. Based on the sparse dictionary, the sparse signal model is then established. A nonlinear least squares estimation is applied to roughly extract the precession frequency in the sparse dictionary. Based on the time-segmented radar signal, a sparse component analysis method using the orthogonal matching pursuit algorithm is then proposed to jointly estimate the precession frequency and the scattering centers of the missile. Simulation results illustrate the validity of the proposed method.

  15. New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging

    DTIC Science & Technology

    2015-10-26

    Demanet, Department of Mathematics, Massachusetts Institute of Technology. Grant title: New Algorithms and Sparse Regularization for Synthetic Aperture... statistical analysis of one such method, the so-called MUSIC algorithm (multiple signal classification). We have a publication that mathematically justifies the scaling of the phase transition

  16. Sub-Nyquist signal-reconstruction-free operational modal analysis and damage detection in the presence of noise

    NASA Astrophysics Data System (ADS)

    Gkoktsi, Kyriaki; Giaralis, Agathoklis; TauSiesakul, Bamrung

    2016-04-01

    Motivated by a need to reduce energy consumption in wireless sensors for vibration-based structural health monitoring (SHM) associated with data acquisition and transmission, this paper puts forth a novel approach for undertaking operational modal analysis (OMA) and damage localization relying on compressed vibration measurements sampled at rates well below the Nyquist rate. Specifically, non-uniform deterministic sub-Nyquist multi-coset sampling of response acceleration signals in white-noise excited linear structures is considered in conjunction with a power spectrum blind sampling/estimation technique which retrieves/samples the power spectral density matrix from arrays of sensors directly from the sub-Nyquist measurements (i.e., in the compressed domain) without signal reconstruction in the time domain and without posing any signal sparsity conditions. The frequency domain decomposition algorithm is then applied to the power spectral density matrix to extract natural frequencies and mode shapes as a standard OMA step. Further, the modal strain energy index (MSEI) is considered for damage localization based on the mode shapes extracted directly from the compressed measurements. The effectiveness and accuracy of the proposed approach is numerically assessed by considering simulated vibration data pertaining to a white-noise excited simply supported beam in healthy and in 3 damaged states, contaminated with Gaussian white noise. Good accuracy is achieved in estimating mode shapes (quantified in terms of the modal assurance criterion) and natural frequencies from an array of 15 multi-coset devices sampling at a rate 70% below the Nyquist rate for SNRs as low as 10 dB. Damage localization of equal level/quality is also achieved by the MSEI applied to mode shapes derived from noisy sub-Nyquist (70% compression) and Nyquist measurements for all damaged states considered. Overall, the furnished numerical results demonstrate that the herein considered sub
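
    The frequency domain decomposition (FDD) step named above is easy to sketch on its own. The toy below works on ordinary Nyquist-rate data with two assumed modes, not on compressed multi-coset samples as in the paper: form the cross-power-spectral-density matrix across sensors and read natural frequencies off the peaks of its first singular value:

```python
import numpy as np
from scipy.signal import csd, find_peaks

fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(10)
m1 = np.sin(2 * np.pi * 3.0 * t + 0.3)          # mode at 3.0 Hz
m2 = np.sin(2 * np.pi * 7.5 * t + 1.1)          # mode at 7.5 Hz
y = np.stack([1.0 * m1 + 0.4 * m2,
              0.6 * m1 - 0.8 * m2])             # two sensor channels
y += 0.5 * rng.normal(size=y.shape)

n_ch = len(y)
f = csd(y[0], y[0], fs=fs, nperseg=1000)[0]
G = np.zeros((len(f), n_ch, n_ch), complex)     # cross-PSD matrix per frequency
for i in range(n_ch):
    for j in range(n_ch):
        G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=1000)[1]

s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
pk, _ = find_peaks(s1, height=0.2 * s1.max())
print("identified natural frequencies (Hz):", np.round(f[pk], 2))
```

    The singular vectors at each peak give the corresponding mode-shape estimates.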

  17. Computation of the ensemble channelized Hotelling observer signal-to-noise ratio for ordered-subset image reconstruction using noisy data

    NASA Astrophysics Data System (ADS)

    Soares, Edward J.; Gifford, Howard C.; Glick, Stephen J.

    2003-05-01

    We investigated the estimation of the ensemble channelized Hotelling observer (CHO) signal-to-noise ratio (SNR) for ordered-subset (OS) image reconstruction using noisy projection data. Previously, we computed the ensemble CHO SNR using a method for approximating the channelized covariance of OS reconstruction, which requires knowledge of the noise-free projection data. Here, we use a "plug-in" approach, in which noisy data is used in place of the noise-free data in the aforementioned channelized covariance approximation. Additionally, we evaluated the use of smoothing of the noisy projections before use in the covariance calculation. The task was detection of a 10% contrast Gaussian signal within a slice of the MCAT phantom. Simulated projections of the MCAT phantom were scaled and Poisson noise was added to create 100 noisy signal-absent data sets. Simulated projections of the scaled signal were then added to the noisy background projections to create 100 noisy signal-present data sets. These noisy data sets were then used to generate 100 estimates of the ensemble CHO SNR for reconstructions at various iterates. For comparison purposes, the same calculation was repeated with the noise-free data. The results, reported as plots of the average CHO SNR generated in this fashion, along with 95% confidence intervals, demonstrate that this approach works very well, and would allow optimization of imaging systems and reconstruction methods using a more accurate object model (i.e., real patient data).
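
    For readers unfamiliar with the observer model, a minimal sketch of a CHO SNR computation (toy square-passband frequency channels, white-noise backgrounds, and a synthetic Gaussian signal; channel design, image size, and sample counts are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_img = 64, 100

# toy rotationally symmetric frequency channels (square passbands)
fx = np.fft.fftfreq(n)
R = np.hypot(*np.meshgrid(fx, fx))
bands = [(0.0, 0.06), (0.06, 0.12), (0.12, 0.25), (0.25, 0.5)]
templates = [np.real(np.fft.ifft2(((R >= lo) & (R < hi)).astype(float)))
             for lo, hi in bands]

def channelize(img):
    return np.array([np.sum(t * img) for t in templates])

yy, xx = np.mgrid[:n, :n]
signal = 0.5 * np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 18.0)

v0 = np.array([channelize(rng.normal(size=(n, n))) for _ in range(n_img)])
v1 = np.array([channelize(signal + rng.normal(size=(n, n))) for _ in range(n_img)])

dv = v1.mean(0) - v0.mean(0)                 # mean channel-output difference
K = 0.5 * (np.cov(v0.T) + np.cov(v1.T))      # pooled channel covariance
snr = np.sqrt(dv @ np.linalg.solve(K, dv))   # Hotelling SNR in channel space
print("CHO SNR estimate: %.2f" % snr)
```

    The paper's contribution is how K is approximated when only noisy projections are available; the quadratic form itself is standard.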

  18. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    SciTech Connect

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  19. Spectrotemporal CT data acquisition and reconstruction at low dose

    SciTech Connect

    Clark, Darin P.; Badea, Cristian T.; Lee, Chang-Lung; Kirsch, David G.

    2015-11-15

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction

  20. Sparse representation of HCP grayordinate data reveals novel functional architecture of cerebral cortex.

    PubMed

    Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Tuo; Zhang, Shu; Guo, Lei; Liu, Tianming

    2015-12-01

    The recently publicly released Human Connectome Project (HCP) grayordinate-based fMRI data not only has high spatial and temporal resolution, but also offers group-corresponding fMRI signals across a large population for the first time in the brain imaging field, thus significantly facilitating mapping the functional brain architecture with much higher resolution and in a group-wise fashion. In this article, we adopt the HCP grayordinate task-based fMRI (tfMRI) data to systematically identify and characterize task-based heterogeneous functional regions (THFRs) on the cortical surface, i.e., regions that are activated during multiple task conditions and contribute to multiple task-evoked systems during a specific task performance, and to assess the spatial patterns of identified THFRs on cortical gyri and sulci by applying a computational framework of sparse representations of grayordinate brain tfMRI signals. Experimental results demonstrate that both consistent task-evoked networks and intrinsic connectivity networks across all subjects and tasks in HCP grayordinate data are effectively and robustly reconstructed via the proposed sparse representation framework. Moreover, it is found that there are relatively consistent THFRs located in the bilateral parietal lobe, frontal lobe, and visual association cortices across all subjects and tasks. Particularly, the identified THFRs are located significantly more on gyral regions than on sulcal regions. These results based on sparse representation of HCP grayordinate data reveal a novel functional architecture of cortical gyri and sulci, and might provide a foundation to better understand functional mechanisms of the human cerebral cortex in the future.

  1. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic $1/\sqrt{N_{\rm sim}}$ rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the $1/\sqrt{N_{\rm sim}}$ limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
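
    A hedged sketch of the general idea with a related sparsity-exploiting estimator (scikit-learn's graphical lasso, not the specific estimator of the paper): compare a sparse precision estimate against the plain inverse of the sample covariance when the true precision is sparse:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 30
prec = (np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1)
        + np.diag(0.4 * np.ones(p - 1), -1))       # sparse (tridiagonal) precision
cov = np.linalg.inv(prec)

X = rng.multivariate_normal(np.zeros(p), cov, size=200)
model = GraphicalLasso(alpha=0.05, max_iter=200).fit(X)

err_sparse = np.linalg.norm(model.precision_ - prec) / np.linalg.norm(prec)
err_sample = np.linalg.norm(np.linalg.inv(np.cov(X.T)) - prec) / np.linalg.norm(prec)
print("relative error, sparse estimate: %.3f" % err_sparse)
print("relative error, inverse sample covariance: %.3f" % err_sample)
```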

  2. Sparse-based multispectral image encryption via ptychography

    NASA Astrophysics Data System (ADS)

    Rawat, Nitin; Shi, Yishi; Kim, Byoungho; Lee, Byung-Geun

    2015-12-01

    Recently, we proposed a model of securing a ptychography-based monochromatic image encryption system via the classical Photon-counting imaging (PCI) technique. In this study, we examine a single-channel multispectral sparse-based photon-counting ptychography imaging (SMPI)-based cryptosystem. A ptychography-based cryptosystem creates a complex object wave field, which can be reconstructed by a series of diffraction intensity patterns through an aperture movement. The PCI sensor records only a few complex Bayer patterned samples that have been utilized in the decryption process. Sparse sensing and nonlinear properties of the classical PCI system, together with the scanning probes, enlarge the key space, and such a combination therefore enhances the system's security. We demonstrate that the sparse samples have adequate information for image decryption, as well as information authentication by means of optical correlation.

  3. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  4. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. The signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]

  5. Automatic target recognition via sparse representations

    NASA Astrophysics Data System (ADS)

    Estabridis, Katia

    2010-04-01

    Automatic target recognition (ATR) based on the emerging technology of Compressed Sensing (CS) can considerably improve accuracy, speed and cost associated with these types of systems. An image based ATR algorithm has been built upon this new theory, which can perform target detection and recognition in a low dimensional space. Compressed dictionaries (A) are formed to include rotational information for a scale of interest. The algorithm seeks to identify y (test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) (n < m) and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1 minimization techniques. Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was utilized to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges up to 3 km for both image modalities.
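
    A minimal sketch of the y = Ax classification idea (a random dictionary stands in for the rotational target templates; sizes and the 3-atom test signal are illustrative): solve the underdetermined system with OMP and pick the class whose atoms carry the most coefficient energy:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
n, per_class, n_classes = 32, 10, 4
A = rng.normal(size=(n, per_class * n_classes))
A /= np.linalg.norm(A, axis=0)                 # column-normalized dictionary

true_class = 2                                 # a "valid target" built from class-2 atoms
idx = true_class * per_class + rng.choice(per_class, 3, replace=False)
y = A[:, idx] @ np.array([1.0, -0.8, 0.6]) + 0.01 * rng.normal(size=n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False).fit(A, y)
energy = [np.sum(omp.coef_[c * per_class:(c + 1) * per_class] ** 2)
          for c in range(n_classes)]
print("predicted class:", int(np.argmax(energy)))
```

    Clutter rejection in this framework corresponds to a large residual ||y - Ax|| for inputs that no class explains sparsely.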

  6. Autocorrelation noise removal for optical coherence tomography by sparse filter design

    NASA Astrophysics Data System (ADS)

    Seck, Hon Luen; Zhang, Ying; Soh, Yeng Chai

    2012-07-01

    We present a reconstruction method to eliminate the autocorrelation noise (ACN) in optical coherence tomography (OCT). In this method, the optical fields scattered from the sample features are regarded as the response of a sparse finite impulse response (FIR) filter. Then the OCT reconstruction is formulated as one of identifying the parameters of a sparse FIR filter, which are obtained via an ℓ1 optimization with soft thresholding. The experimental results show that the proposed method can obtain OCT reconstruction results with effective attenuation of ACN.
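
    A minimal sketch of the formulation (synthetic pulse and reflectors; the ℓ1 solver here is plain iterative soft thresholding, which may differ from the authors' solver): identify sparse FIR taps from the known probe pulse by ℓ1-regularized deconvolution:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
L_sig, n_taps = 200, 120
k = np.arange(40)
pulse = np.sin(2 * np.pi * 0.2 * k) * np.hanning(40)    # probe pulse

h_true = np.zeros(n_taps)
h_true[[15, 48, 90]] = [1.0, -0.6, 0.4]                 # sparse reflector taps

col = np.zeros(L_sig)
col[:40] = pulse
C = toeplitz(col, np.zeros(n_taps))                     # convolution matrix
y = C @ h_true + 0.02 * rng.normal(size=L_sig)          # measured response

lam = 0.1
step = 1.0 / np.linalg.norm(C, 2) ** 2
h = np.zeros(n_taps)
for _ in range(500):                                    # ISTA iterations
    g = h - step * (C.T @ (C @ h - y))
    h = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)   # soft threshold
print("recovered tap locations:", np.flatnonzero(np.abs(h) > 0.1))
```

    Autocorrelation clutter that does not fit the sparse-filter-driven-by-the-pulse model is suppressed by the thresholding.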

  7. Sparse Matrix for ECG Identification with Two-Lead Features

    PubMed Central

    Tseng, Kuo-Kun; Luo, Jiao; Wang, Wenmin; Haiting, Dong

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods. PMID:25961074

  8. Asynchronous signal-dependent non-uniform sampler

    NASA Astrophysics Data System (ADS)

    Can-Cimino, Azime; Chaparro, Luis F.; Sejdić, Ervin

    2014-05-01

    Analog sparse signals resulting from biomedical and sensing network applications are typically non-stationary with frequency-varying spectra. By ignoring that the maximum frequency of their spectra is changing, uniform sampling of sparse signals collects unnecessary samples in quiescent segments of the signal. A more appropriate sampling approach would be signal-dependent. Moreover, in many of these applications power consumption and analog processing are issues of great importance that need to be considered. In this paper we present a signal-dependent non-uniform sampler that uses a Modified Asynchronous Sigma Delta Modulator, which consumes low power and can be processed using analog procedures. Interpolation of the original signal is performed using Prolate Spheroidal Wave Functions (PSWFs), thus giving an asynchronous analog-to-digital and digital-to-analog conversion. Stable solutions are obtained by using modulated PSWFs. The advantage of the adapted asynchronous sampler is that the range of frequencies of the sparse signal is taken into account, avoiding aliasing. Moreover, it requires saving only the zero-crossing times of the non-uniform samples, or their differences, and the reconstruction can be done using their quantized values and a PSWF-based interpolation. The range of frequencies analyzed can be changed and the sampler can be implemented as a bank of filters for an unknown range of frequencies. The performance of the proposed algorithm is illustrated with an electroencephalogram (EEG) signal.

  9. Optimized Color Filter Arrays for Sparse Representation Based Demosaicking.

    PubMed

    Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian

    2017-03-08

    Demosaicking is the problem of reconstructing a color image from the raw image captured by a digital color camera that covers its only imaging sensor with a color filter array (CFA). Sparse representation based demosaicking has been shown to produce superior reconstruction quality. However, almost all existing algorithms in this category use the CFAs which are not specifically optimized for the algorithms. In this paper, we consider optimally designing CFAs for sparse representation based demosaicking, where the dictionary is well-chosen. The fact that CFAs correspond to the projection matrices used in compressed sensing inspires us to optimize CFAs via minimizing the mutual coherence. This is more challenging than that for traditional projection matrices because CFAs have physical realizability constraints. However, most of the existing methods for minimizing the mutual coherence require that the projection matrices should be unconstrained, making them inapplicable for designing CFAs. We consider directly minimizing the mutual coherence with the CFA's physical realizability constraints as a generalized fractional programming problem, which needs to find sufficiently accurate solutions to a sequence of nonconvex nonsmooth minimization problems. We adapt the redistributed proximal bundle method to address this issue. Experiments on benchmark images testify to the superiority of the proposed method. In particular, we show that a simple sparse representation based demosaicking algorithm with our specifically optimized CFA can outperform LSSC [1]. To the best of our knowledge, it is the first sparse representation based demosaicking algorithm that beats LSSC in terms of CPSNR.
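
    The design criterion named above is easy to state concretely. A minimal sketch (a random nonnegative matrix stands in for a physically constrained CFA projection; this computes the coherence rather than minimizing it):

```python
import numpy as np

def mutual_coherence(A):
    An = A / np.linalg.norm(A, axis=0)     # normalize columns
    G = np.abs(An.T @ An)                  # absolute Gram matrix
    np.fill_diagonal(G, 0.0)               # ignore self-correlations
    return G.max()                         # largest cross-column correlation

rng = np.random.default_rng(0)
A = rng.random((16, 48))                   # nonnegative, CFA-like constraint
print("mutual coherence: %.3f" % mutual_coherence(A))
```

    Nonnegativity keeps columns strongly correlated, which is exactly why minimizing coherence under physical realizability constraints is the hard part the paper addresses.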

  10. Algebraic reconstruction combined with the signal space separation method for the inverse magnetoencephalography problem with a dipole-quadrupole source

    NASA Astrophysics Data System (ADS)

    Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.

    2014-05-01

    This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to the conventional methods with the equivalent current dipoles source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on both sides of the longitudinal fissure of cerebrum are stably estimated. The method is verified using a quadrupolar source phantom, which is composed of two isosceles-triangle coils with parallel bases.

  11. Bacterial community reconstruction using compressed sensing.

    PubMed

    Amir, Amnon; Zuk, Or

    2011-11-01

    Bacteria are the unseen majority on our planet, with millions of species and comprising most of the living protoplasm. We propose a novel approach for reconstruction of the composition of an unknown mixture of bacteria using a single Sanger-sequencing reaction of the mixture. Our method is based on compressive sensing theory, which deals with reconstruction of a sparse signal using a small number of measurements. Utilizing the fact that in many cases each bacterial community is comprised of a small subset of all known bacterial species, we show the feasibility of this approach for determining the composition of a bacterial mixture. Using simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA gene sequence may provide enough information for reconstruction of mixtures containing tens of species, out of tens of thousands, even in the presence of realistic measurement noise. Finally, we show initial promising results when applying our method for the reconstruction of a toy experimental mixture with five species. Our approach may have a potential for a simple and efficient way for identifying bacterial species compositions in biological samples. All supplementary data and the MATLAB code are available at www.broadinstitute.org/~orzuk/publications/BCS/.
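
    A minimal sketch of the recovery step (a random dictionary stands in for the species' 16S sequence profiles; sizes, sparsity, and the regularization weight are illustrative): model the mixed chromatogram as a nonnegative sparse combination of known profiles and recover it with a positivity-constrained Lasso:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_positions, n_species = 300, 2000
D = rng.normal(size=(n_positions, n_species))
D /= np.linalg.norm(D, axis=0)                    # unit-norm "species profiles"

x_true = np.zeros(n_species)                      # true mixture proportions
x_true[rng.choice(n_species, 8, replace=False)] = rng.random(8) + 0.5
y = D @ x_true + 0.01 * rng.normal(size=n_positions)

fit = Lasso(alpha=0.001, positive=True, fit_intercept=False,
            max_iter=10000).fit(D, y)
print("true species:     ", np.flatnonzero(x_true))
print("recovered species:", np.flatnonzero(fit.coef_ > 0.05))
```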

  12. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual sensor systems to effect a false target detection capability, thus significantly reducing system complexity and cost.

  13. Biomarker reconstructions of marine and terrestrial climate signals from marginal marine environments: new results from high-resolution archives

    NASA Astrophysics Data System (ADS)

    Bendle, J. A.; Moossen, H.; Jamieson, R.; Das, S. K.; Quillmann, U.; Jennings, A. E.; Andrews, J. T.; Howe, J.; Cage, A.; Austin, W. E.

    2010-12-01

    One of the key questions facing climate scientists, policy makers and the public today is how important natural variability is in explaining global warming. Sedimentary archives from marginal marine environments, such as fjordic (or sea-loch) environments, typically have higher sediment accumulation rates than deeper ocean sites and thus provide suitably expanded archives of the Holocene against which the 20th Century changes can be compared. Moreover, with suitable temporal resolution, the impact of Holocene rapid climate change episodes, such as the 8.2 kyr event, can be constrained. Since fjords bridge the land-ocean interface, palaeo-environmental records from fjordic environments provide a unique opportunity to study the link between marine and terrestrial climate. Here we present millennial- to centennial-scale, independent records of marine and terrestrial change in two fjordic cores: from Ísafjarðardjúp, northwest Iceland (core MD99-2266; location: 66° 13' 77'' N, 23° 15' 93'' W; 106 m water depth) and from Loch Sunart, northwest Scotland (core MD-04 2832; location: 56° 40.19' N, 05° 52.21' W; 50 m water depth). The cores are extremely high resolution, with 1 cm of sediment representing <10 years of accumulation, and come from sites influenced by disparate branches of the North Atlantic Drift (i.e. the distal Gulf Stream), the Irminger and Shetland Currents. We reconstruct sea surface temperature (SST) and terrestrial mean annual air temperatures (MAT) derived from alkenone and tetraether biomarkers (using the UK37' and MBT/CBT-MAT indices respectively). Additional insights into terrestrial environmental change are derived from proxy records for soil pH (from the tetraether CBT proxy) and, in the case of MD99-2266, from higher plant wax distributions. The timing of the millennial-scale SST variability in the cores should give insights into the degree of phasing of millennial scale climate variability between the western (Irminger Current) and eastern (SC

  14. High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures

    PubMed Central

    Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando

    2011-01-01

    Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017

  15. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
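
    A minimal sketch of the two building blocks named above, the covariance descriptor and the log-Euclidean Gaussian kernel (toy epochs; the kernel width is an assumed parameter):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(6)

def covariance_descriptor(epoch):            # epoch: channels x samples
    c = np.cov(epoch)
    return c + 1e-6 * np.eye(c.shape[0])     # regularize to stay SPD

def log_euclidean_gaussian(X, Y, sigma=1.0):
    # k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2))
    d = np.linalg.norm(np.real(logm(X)) - np.real(logm(Y)), "fro")
    return np.exp(-d ** 2 / (2 * sigma ** 2))

e1 = rng.normal(size=(8, 256))               # two toy 8-channel EEG epochs
e2 = rng.normal(size=(8, 256))
K = log_euclidean_gaussian(covariance_descriptor(e1), covariance_descriptor(e2))
print("kernel value: %.4f" % K)
```

    The matrix logarithm flattens the SPD manifold so that the ordinary Gaussian kernel, and hence kernelized sparse coding, can be applied.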

  16. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    PubMed Central

    2011-01-01

    Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503

  17. Precipitation reconstruction for the northwestern Chinese Altay since 1760 indicates the drought signals of the northern part of inner Asia.

    PubMed

    Chen, Feng; Yuan, Yujiang; Zhang, Tongwen; Shang, Huaming

    2016-03-01

    Based on the significant positive correlations between the regional tree-ring width chronology and local climate data, the total precipitation of the previous July to the current June was reconstructed since AD 1760 for the northwestern Chinese Altay. The reconstruction model accounts for 40.7% of the actual precipitation variance during the calibration period from 1959 to 2013. Wet conditions prevailed during the periods 1764-1777, 1784-1791, 1795-1805, 1829-1835, 1838-1846, 1850-1862, 1867-1872, 1907-1916, 1926-1931, 1935-1943, 1956-1961, 1968-1973, 1984-1997, and 2002-2006. Dry episodes occurred during 1760-1763, 1778-1783, 1792-1794, 1806-1828, 1836-1837, 1847-1849, 1863-1866, 1873-1906, 1917-1925, 1932-1934, 1944-1955, 1962-1967, 1974-1983, 1998-2001, and 2007-2012. The spectral analysis of the precipitation reconstruction shows the existence of several cycles (15.3, 4.5, 3.1, 2.7, and 2.1 years). The significant correlations with the gridded precipitation dataset revealed that the precipitation reconstruction represents the precipitation variation for a large area of the northern part of inner Asia. A comparison with the precipitation reconstruction from the southern Chinese Altay shows a high level of confidence for the precipitation reconstruction for the northwestern Chinese Altay. Precipitation variation of the northwestern Chinese Altay is positively correlated with sea surface temperatures in tropical oceans, suggesting a possible linkage of the precipitation variation of the northwestern Chinese Altay to the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). The synoptic climatology analysis reveals that there is a relationship between anomalous atmospheric circulation and extreme climate events in the northwestern Chinese Altay.

  18. A Novel Time-Varying Spectral Filtering Algorithm for Reconstruction of Motion Artifact Corrupted Heart Rate Signals During Intense Physical Activities Using a Wearable Photoplethysmogram Sensor

    PubMed Central

    Salehizadeh, Seyed M. A.; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H.

    2015-01-01

    Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be
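
    A hedged sketch of the core SpaMA idea on one synthetic window (illustrative sampling rate, window length, and masking width; the published algorithm tracks peaks across successive windows rather than treating one in isolation): estimate the PPG spectral peak after masking frequencies that also peak in the accelerometer spectrum:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(8)
fs = 50.0
t = np.arange(0, 8, 1 / fs)                    # one 8 s analysis window
hr_hz, motion_hz = 1.8, 2.6                    # 108 bpm pulse; cadence artifact
ppg = np.sin(2 * np.pi * hr_hz * t) + 1.5 * np.sin(2 * np.pi * motion_hz * t) \
      + 0.3 * rng.normal(size=t.size)
accel = np.sin(2 * np.pi * motion_hz * t) + 0.1 * rng.normal(size=t.size)

f, P_ppg = welch(ppg, fs=fs, nperseg=256)      # time-varying spectra come from
_, P_acc = welch(accel, fs=fs, nperseg=256)    # repeating this per window shift

motion_peak = f[np.argmax(P_acc)]              # dominant accelerometer frequency
keep = (np.abs(f - motion_peak) > 0.25) & (f > 0.7) & (f < 3.5)
hr_est = f[keep][np.argmax(P_ppg[keep])]       # strongest remaining PPG peak
print("estimated HR: %.0f bpm" % (hr_est * 60))
```

    Even though the motion tone dominates the raw PPG spectrum here, masking the accelerometer's peak recovers a rate near the simulated pulse.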

  19. A Novel Time-Varying Spectral Filtering Algorithm for Reconstruction of Motion Artifact Corrupted Heart Rate Signals During Intense Physical Activities Using a Wearable Photoplethysmogram Sensor.

    PubMed

    Salehizadeh, Seyed M A; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H

    2015-12-23

    Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be

  20. Sparse and powerful cortical spikes.

    PubMed

    Wolfe, Jason; Houweling, Arthur R; Brecht, Michael

    2010-06-01

    Activity in cortical networks is heterogeneous, sparse and often precisely timed. The functional significance of sparseness and precise spike timing is debated, but our understanding of the developmental and synaptic mechanisms that shape neuronal discharge patterns has improved. Evidence for highly specialized, selective and abstract cortical response properties is accumulating. Single-cell stimulation experiments demonstrate a high sensitivity of cortical networks to the action potentials of some, but not all, single neurons. It is unclear how this sensitivity of cortical networks to small perturbations comes about and whether it is a generic property of cortex. The unforeseen sensitivity to cortical spikes puts serious constraints on the nature of neural coding schemes.

  1. Partially sparse imaging of stationary indoor scenes

    NASA Astrophysics Data System (ADS)

    Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian

    2014-12-01

    In this paper, we exploit the notion of partial sparsity for scene reconstruction associated with through-the-wall radar imaging of stationary targets under reduced data volume. Partial sparsity implies that the scene being imaged consists of a sparse part and a dense part, with the support of the latter assumed to be known. For the problem at hand, sparsity is represented by a few stationary indoor targets, whereas the high scene density is defined by exterior and interior walls. Prior knowledge of wall positions and extent may be available either through building blueprints or from prior surveillance operations. The contributions of the exterior and interior walls are removed from the data through the use of projection matrices, which are determined from wall- and corner-specific dictionaries. The projected data, with enhanced sparsity, is then processed using ℓ1-norm reconstruction techniques. Numerical electromagnetic data is used to demonstrate the effectiveness of the proposed approach for imaging stationary indoor scenes using a reduced set of measurements.

  2. Photoplethysmograph signal reconstruction based on a novel motion artifact detection-reduction approach. Part II: Motion and noise artifact removal.

    PubMed

    Salehizadeh, S M A; Dao, Duy K; Chong, Jo Woon; McManus, David; Darling, Chad; Mendelson, Yitzhak; Chon, Ki H

    2014-11-01

    We introduce a new method to reconstruct motion and noise artifact (MNA) contaminated photoplethysmogram (PPG) data. A method to detect MNA corrupted data is provided in a companion paper. Our reconstruction algorithm is based on an iterative motion artifact removal (IMAR) approach, which utilizes the singular spectral analysis algorithm to remove MNA artifacts so that the most accurate estimates of uncorrupted heart rates (HRs) and arterial oxygen saturation (SpO2) values recorded by a pulse oximeter can be derived. Using both computer simulations and three different experimental data sets, we show that the proposed IMAR approach can reliably reconstruct MNA corrupted data segments, as the estimated HR and SpO2 values do not significantly deviate from the uncorrupted reference measurements. Comparison of the accuracy of reconstruction of the MNA corrupted data segments between our IMAR approach and the time-domain independent component analysis (TD-ICA) is made for all data sets, as the latter method has been shown to provide good performance. For simulated data, there were no significant differences in the reconstructed HR and SpO2 values starting from 10 dB down to -15 dB for both white and colored noise contaminated PPG data using IMAR; for TD-ICA, significant differences were observed starting at 10 dB. Two experimental PPG data sets, created with contrived MNA by having subjects perform random forehead and rapid side-to-side finger movements, show that the performance of the IMAR approach on these data sets was quite accurate, as non-significant differences in the reconstructed HR and SpO2 were found compared to non-contaminated reference values in most subjects. In comparison, the accuracy of the TD-ICA was poor, as there were significant differences in reconstructed HR and SpO2 values in most subjects. For non-contrived MNA corrupted PPG data, which were collected with subjects performing walking and stair climbing tasks, the IMAR significantly

  3. SAR moving target imaging using sparse and low-rank decomposition

    NASA Astrophysics Data System (ADS)

    Ni, Kang-Yu; Rao, Shankar

    2014-05-01

    We propose a method to image a complex scene with spotlight synthetic aperture radar (SAR) despite the presence of multiple moving targets. Many recent methods use sparsity-based reconstruction coupled with phase error corrections of moving targets to reconstruct stationary scenes. However, these methods rely on the assumption that the scene itself is sparse and thus unfortunately cannot handle realistic SAR scenarios with complex backgrounds consisting of more than just a few point targets. Our method makes use of sparse and low-rank (SLR) matrix decomposition, an efficient method for decomposing a low-rank matrix and sparse matrix from their sum. For detecting the moving targets and reconstructing the stationary background, SLR uses a convex optimization model that penalizes the nuclear norm of the low rank background structure and the L1 norm of the sparse moving targets. We propose an L1-norm regularization reconstruction method to form the input data matrix, which is grossly corrupted by the moving targets. Each column of the input matrix is a reconstructed SAR image with measurements from a small number of azimuth angles. The use of the L1-norm regularization and a sparse transform permits us to reconstruct the scene with significantly fewer measurements so that moving targets are approximately stationary. We demonstrate our SLR-based approach using simulations adapted from the GOTCHA Volumetric SAR data set. These simulations show that SLR can accurately image multiple moving targets with different individual motions in complex scenes where methods that assume a sparse scene would fail.
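
    A minimal sketch of the sparse-plus-low-rank separation at the heart of this approach (generic principal component pursuit via inexact augmented Lagrangian iterations on synthetic data, not the SAR pipeline itself): L collects the low-rank background, S the sparse movers:

```python
import numpy as np

rng = np.random.default_rng(9)
m, n, r = 60, 40, 2
L_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # low-rank background
S_true = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05
S_true[mask] = 10 * rng.normal(size=mask.sum())              # sparse "moving targets"
M = L_true + S_true

lam = 1.0 / np.sqrt(max(m, n))                  # standard PCP weight
mu = 0.25 * m * n / np.abs(M).sum()
S = np.zeros_like(M)
Y = np.zeros_like(M)

def svt(X, tau):                                # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

for _ in range(200):                            # inexact augmented Lagrangian loop
    L = svt(M - S + Y / mu, 1.0 / mu)           # nuclear-norm step
    G = M - L + Y / mu
    S = np.sign(G) * np.maximum(np.abs(G) - lam / mu, 0)   # L1 step
    Y += mu * (M - L - S)                       # dual update

print("leading singular values of L:",
      np.round(np.linalg.svd(L, compute_uv=False)[:4], 2))
print("fraction of entries flagged sparse:",
      np.round(np.mean(np.abs(S) > 1e-3), 3))
```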

  4. Genetic algorithms for minimal source reconstructions

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.

    1993-12-01

    Under-determined linear inverse problems arise in applications in which signals must be estimated from insufficient data. In these problems the number of potentially active sources is greater than the number of observations. In many situations, it is desirable to find a minimal source solution. This can be accomplished by minimizing a cost function that accounts both for the compatibility of the solution with the observations and for its "sparseness". Minimizing functions of this form can be a difficult optimization problem. Genetic algorithms are a relatively new and robust approach to the solution of difficult optimization problems, providing a global framework that is not dependent on local continuity or on explicit starting values. In this paper, the authors describe the use of genetic algorithms to find minimal source solutions, using as an example a simulation inspired by the reconstruction of neural currents in the human brain from magnetoencephalographic (MEG) measurements.
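
    A hedged sketch of the idea (toy dimensions and a bare-bones GA with truncation selection, one-point crossover, and bit-flip mutation; the authors' operators and penalty weights may well differ): evolve binary support vectors scored by least-squares data fit plus a sparseness penalty:

```python
import numpy as np

rng = np.random.default_rng(11)
n_obs, n_src = 10, 30
A = rng.normal(size=(n_obs, n_src))            # lead-field-like matrix
x_true = np.zeros(n_src)
x_true[[4, 17]] = [1.0, -2.0]                  # two true sources
y = A @ x_true

def cost(support):
    k = support.sum()
    if k == 0 or k > n_obs:
        return 1e9                             # reject degenerate supports
    cols = np.flatnonzero(support)
    coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    return np.sum((A[:, cols] @ coef - y) ** 2) + 0.1 * k   # fit + sparseness

pop = (rng.random((60, n_src)) < 0.1).astype(int)
for _ in range(100):
    order = np.argsort([cost(p) for p in pop])
    parents = pop[order[:20]]                  # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, n_src)
        child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
        flip = rng.random(n_src) < 0.02                     # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([cost(p) for p in pop])]
print("recovered support:", np.flatnonzero(best))
```

    No gradient or starting guess is needed; the population search is what gives the method its robustness on this nonconvex cost.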

  5. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering

    PubMed Central

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650

  6. Exact reconstruction with directional wavelets on the sphere

    NASA Astrophysics Data System (ADS)

    Wiaux, Y.; McEwen, J. D.; Vandergheynst, P.; Blanc, O.

    2008-08-01

    A new formalism is derived for the analysis and exact reconstruction of band-limited signals on the sphere with directional wavelets. It represents an evolution of the wavelet formalism previously developed by Antoine & Vandergheynst and Wiaux et al. The translations of the wavelets at any point on the sphere and their proper rotations are still defined through the continuous three-dimensional rotations. The dilations of the wavelets are directly defined in harmonic space through a new kernel dilation, which is a modification of an existing harmonic dilation. A family of factorized steerable functions with compact harmonic support which are suitable for this kernel dilation are first identified. A scale-discretized wavelet formalism is then derived, relying on this dilation. The discrete nature of the analysis scales allows the exact reconstruction of band-limited signals. A corresponding exact multi-resolution algorithm is finally described and an implementation is tested. The formalism is of interest notably for the denoising or the deconvolution of signals on the sphere with a sparse expansion in wavelets. In astrophysics, it finds a particular application in the identification of localized directional features in the cosmic microwave background data, such as the imprint of topological defects, in particular cosmic strings, and in their reconstruction after separation from the other signal components.

  7. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.

  8. Combining sparseness and smoothness improves classification accuracy and interpretability.

    PubMed

    de Brecht, Matthew; Yamagishi, Noriko

    2012-04-02

    Sparse logistic regression (SLR) has been shown to be a useful method for decoding high-dimensional fMRI and MEG data by automatically selecting relevant feature dimensions. However, when applied to signals with high spatio-temporal correlations, SLR often over-prunes the feature space, which can result in overfitting and weight vectors that are difficult to interpret. To overcome this problem, we investigate a modification of ℓ₁-normed sparse logistic regression, called smooth sparse logistic regression (SSLR), which has a spatio-temporal "smoothing" prior that encourages weights that are close in time and space to have similar values. This causes the classifier to select spatio-temporally continuous groups of features, whereas SLR classifiers often select a scattered collection of independent features. We applied the method to both simulation data and real MEG data. We found that SSLR consistently increases classification accuracy, and produces weight vectors that are more meaningful from a neuroscientific perspective.

  9. Input reconstruction of chaos sensors.

    PubMed

    Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin

    2008-06-01

    Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics due to its extremely sensitive dependence on initial conditions and parameters, how to reconstruct the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested measurement signal reconstruction method.

  10. Learning doubly sparse transforms for images.

    PubMed

    Ravishankar, Saiprasad; Bresler, Yoram

    2013-12-01

    The sparsity of images in a transform domain or dictionary has been exploited in many applications in image processing. For example, analytical sparsifying transforms, such as wavelets and discrete cosine transform (DCT), have been extensively used in compression standards. Recently, synthesis sparsifying dictionaries that are directly adapted to the data have become popular especially in applications such as image denoising. Following up on our recent research, where we introduced the idea of learning square sparsifying transforms, we propose here novel problem formulations for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learnt, stored, and implemented efficiently. We show the superior promise of our learnt transforms as compared with analytical sparsifying transforms such as the DCT for image representation. We also show promising performance in image denoising that compares favorably with approaches involving learnt synthesis dictionaries such as the K-SVD algorithm. The proposed approach is also much faster than K-SVD denoising.
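
    A toy alternation conveys the flavour of the doubly sparse model W = B Phi, with Phi a fixed orthonormal DCT matrix and B a sparse square factor; the update rules, sparsity levels, and initialization below are illustrative assumptions rather than the authors' algorithm:

        import numpy as np
        from scipy.fft import dct

        def hard_threshold(M, k):
            """Keep the k largest-magnitude entries of M, zero the rest."""
            out = np.zeros(M.size)
            idx = np.argsort(np.abs(M).ravel())[-k:]
            out[idx] = M.ravel()[idx]
            return out.reshape(M.shape)

        def learn_doubly_sparse_transform(X, k_code=4, k_B=256, n_iter=25):
            """X: (d, N) vectorized image patches. Alternates between sparse
            coding Z = threshold(W X) and a least-squares update of the
            sparse factor B in W = B @ Phi."""
            d, _ = X.shape
            Phi = dct(np.eye(d), norm='ortho')    # fixed analytic transform
            B = np.eye(d)                         # start from the identity
            Y = Phi @ X                           # DCT-domain data (fast in practice)
            for _ in range(n_iter):
                WX = B @ Y
                Z = np.stack([hard_threshold(c, k_code) for c in WX.T], axis=1)
                B = hard_threshold(Z @ np.linalg.pinv(Y), k_B)
            return B @ Phi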

  11. Inferring sparse networks for noisy transient processes

    NASA Astrophysics Data System (ADS)

    Tran, Hoang M.; Bukkapatnam, Satish T. S.

    2016-02-01

    Inferring the causal structure of real-world complex networks from measured time series signals remains an open issue. Current approaches are inadequate to discern direct from indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, and the nonlinear and transient dynamics of real-world processes. We report a sparse regression (referred to as the l1-min) approach with theoretical bounds on the allowable perturbation to recover the network structure, guaranteeing sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors) and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory networks and Michaelis-Menten dynamics, as well as real-world data sets from the DREAM5 challenge. These investigations suggest that our approach can significantly improve, oftentimes by five orders of magnitude, over previously reported methods for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing, and modular response analysis methods, by optimizing for sparsity, transients, noise, and high-dimensionality issues.
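
    In the same spirit, a per-node l1-regularized regression over observed transients yields a sparse estimate of the directed structure; the increment model and regularization level below are illustrative assumptions, not the paper's exact l1-min formulation:

        import numpy as np
        from sklearn.linear_model import Lasso

        def infer_network(X, alpha=0.05):
            """Toy l1-regularized network inference. X: (T, n) array of T
            time samples of n nodes. Each node's increment is regressed on
            the previous state of all nodes; nonzero coefficients are read
            as directed arcs j -> i."""
            T, n = X.shape
            dX = np.diff(X, axis=0)              # transient increments
            A = np.zeros((n, n))
            for i in range(n):
                lasso = Lasso(alpha=alpha, max_iter=10000)
                lasso.fit(X[:-1], dX[:, i])
                A[i] = lasso.coef_
            np.fill_diagonal(A, 0.0)             # drop self-loops for readability
            return A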

  12. Sparse spike coding : applications of neuroscience to the processing of natural images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.

    2008-04-01

    If modern computers are sometimes superior to cognition in some specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We illustrate this by showing that, in a signal matching framework, an L/NL (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, i.e., the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
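
    For reference, the baseline Matching Pursuit loop that COMP modifies looks as follows (unit-norm dictionary columns assumed; the competition-optimized ArgMax itself is not reproduced here):

        import numpy as np

        def matching_pursuit(signal, D, n_atoms=10):
            """Plain Matching Pursuit over an over-complete dictionary D
            whose columns are unit-norm atoms. COMP replaces the plain
            ArgMax selection below with a competition-optimized one."""
            residual = signal.astype(float).copy()
            coeffs = np.zeros(D.shape[1])
            for _ in range(n_atoms):
                corr = D.T @ residual
                k = np.argmax(np.abs(corr))      # greedy ArgMax selection
                coeffs[k] += corr[k]
                residual -= corr[k] * D[:, k]    # cancel the explained part
            return coeffs, residual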

  13. Sparse Regression by Projection and Sparse Discriminant Analysis.

    PubMed

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J; Zhao, Hongyu

    2015-04-01

    Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared to the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplemental materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  14. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation-maximization algorithm that seeks the maximum-likelihood solution, leads to increased noise at higher iterations; the maximum a posteriori (MAP) estimate removes this divergence. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  15. Sparsity-constrained PET image reconstruction with learned dictionaries.

    PubMed

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-07

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation-maximization algorithm that seeks the maximum-likelihood solution, leads to increased noise at higher iterations; the maximum a posteriori (MAP) estimate removes this divergence. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to that of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  16. The bias and signal attenuation present in conventional pollen-based climate reconstructions as assessed by early climate data from Minnesota, USA.

    PubMed

    St Jacques, Jeannine-Marie; Cumming, Brian F; Sauchyn, David J; Smol, John P

    2015-01-01

    The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independent of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which is different from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation and hence the pollen rain since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by using the conventional method of inferring temperature from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th century instrumental record, with less bias, than the modern set does. When both modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any past pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired to the earliest instrumental climate records. It remains to be explored how widespread this problem is

  17. The Bias and Signal Attenuation Present in Conventional Pollen-Based Climate Reconstructions as Assessed by Early Climate Data from Minnesota, USA

    PubMed Central

    St. Jacques, Jeannine-Marie; Cumming, Brian F.; Sauchyn, David J.; Smol, John P.

    2015-01-01

    The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independent of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which is different from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation and hence the pollen rain since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by using the conventional method of inferring temperature from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th century instrumental record, with less bias, than the modern set does. When both modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and positive bias of 0.8-1.7°C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any past pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired to the earliest instrumental climate records. It remains to be explored how widespread this problem is

  18. Sparse Matrix Software Catalog, Sparse Matrix Symposium 1982, Fairfield Glade, Tennessee, October 24-27, 1982,

    DTIC Science & Technology

    1982-10-27

    sparse matrices as well as other areas. Contents: 1. Operations on Sparse Matrices; 1.1 Multi…; 2.1.1 Nonsymmetric systems; 2.1.1.1 General sparse matrices; 2.1.2.1 General sparse matrices; 2.1.2.2 Band or profile forms.

  19. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
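
    For orientation, the classical Higher Criticism statistic that the extended test builds on can be computed from z-scores in a few lines; the restriction to the lower half of the ordered p-values and the small stabilizing constant are common conventions assumed here, not details from the paper:

        import numpy as np
        from scipy.stats import norm

        def higher_criticism(z):
            """Classical Higher Criticism statistic from z-scores. Large
            values indicate a sparse set of non-null signals."""
            p = np.sort(2 * norm.sf(np.abs(z)))        # sorted two-sided p-values
            n = len(p)
            i = np.arange(1, n + 1)
            hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
            return np.max(hc[: n // 2])                # maximize over the lower half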

  20. Double shrinking sparse dimension reduction.

    PubMed

    Zhou, Tianyi; Tao, Dacheng

    2013-01-01

    Learning tasks such as classification and clustering usually perform better and cost less (time and space) on compressed representations than on the original data. Previous works mainly compress data via dimension reduction. In this paper, we propose "double shrinking" to compress image data in both dimensionality and cardinality, via building either sparse low-dimensional representations or a sparse projection matrix for dimension reduction. We formulate the double shrinking model (DSM) as an l1-regularized variance maximization with the constraint ||x||2 = 1, and develop a double shrinking algorithm (DSA) to optimize the DSM. DSA is a path-following algorithm that can build the whole solution path of locally optimal solutions of different sparsity levels. Each solution on the path is a "warm start" for searching the next, sparser one. In each iteration of DSA, the direction, the step size, and the Lagrangian multiplier are deduced from the Karush-Kuhn-Tucker conditions. The magnitudes of trivial variables are shrunk and the importance of critical variables is simultaneously augmented along the selected direction with the determined step length. Double shrinking can be applied to manifold learning and feature selection for better interpretation of features, and can be combined with classification and clustering to boost their performance. The experimental results suggest that double shrinking produces efficient and effective data compression.

  1. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  2. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.

  3. Continental European Eemian and early Würmian climate evolution: comparing signals using different quantitative reconstruction approaches based on pollen

    NASA Astrophysics Data System (ADS)

    Klotz, Stefan; Guiot, Joel; Mosbrugger, Volker

    2003-05-01

    Analyses of Eemian climate dynamics based on different reconstruction methods were conducted for several pollen sequences in the northern alpine foreland. The modern analogue and mutual climate sphere techniques used, which are briefly presented, complement one another and yield comparable results. The reconstructions reveal the occurrence of at least two similar thermal periods, representing temperate oceanic conditions warmer and more humid than today. Intense changes in climate processes become obvious with a shift in winter temperatures of about 15 °C from the late Rissian to the first thermal optimum of the Eemian. The transition shows a pattern of summer temperatures and precipitation increasing more rapidly than winter temperatures. With the first optimum during the Pinus-Quercetum mixtum-Corylus phase (PQC) at an early stage of the Eemian and a second optimum period at a later stage, which is characterised by widespread Carpinus, climate gradients across the study area were less intense than today. Average winter temperatures vary between -1.9 and 0.4 °C (present-day -3.6 to 1.4 °C), summer temperatures between 17.8 and 19.6 °C (present-day 14 to 18.9 °C). The timberline expanded about 350 m upward compared to the present-day limit represented by Pinus mugo. Whereas the maximum of the temperature parameters is related to the first optimum, precipitation above 1100 mm is higher during the second warm period, concomitant with somewhat reduced temperatures. Intermediate, smaller climate oscillations and a cooling become obvious, which represent moderate deteriorations but not extreme chills. During the boreal semicontinental Eemian Pinus-Picea-Abies phase, another less distinct fluctuation occurs, initiating the oscillating shift from temperate to cold conditions.

  4. Sparse-view ultrasound diffraction tomography using compressed sensing with nonuniform FFT.

    PubMed

    Hua, Shaoyan; Ding, Mingyue; Yuchi, Ming

    2014-01-01

    Accurate reconstruction of the object from sparse-view sampling data is an appealing goal for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomical structures, total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively resolved by conjugate gradient with the nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly represented with only 16 views. Compared to the interpolation and multiband methods, the proposed method provides higher resolution and lower artifacts with the same number of views. The robustness to noise and the computational complexity are also discussed.

  5. Adaptive modelling of gene regulatory network using Bayesian information criterion-guided sparse regression approach.

    PubMed

    Shi, Ming; Shen, Weiming; Wang, Hong-Qiang; Chong, Yanwen

    2016-12-01

    Inferring gene regulatory networks (GRNs) from microarray expression data is an important but challenging issue in systems biology. In this study, the authors propose a Bayesian information criterion (BIC)-guided sparse regression approach for GRN reconstruction. This approach can adaptively model GRNs by optimising the l1-norm regularisation of sparse regression based on a modified version of the BIC. The regularisation strategy ensures that the inferred GRNs are as sparse as natural ones, while the modified BIC allows prior knowledge on expression regulation to be incorporated, thus avoiding the usual overestimation of expression regulators. In particular, the proposed method provides a clear interpretation of combinatorial regulation of gene expression by optimally extracting regulation coordination for a given target gene. Experimental results on both simulation data and real-world microarray data demonstrate the competent performance of the approach in discovering regulatory relationships for GRN reconstruction.
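
    A minimal sketch of the per-target-gene step, using scikit-learn's standard BIC-driven lasso in place of the authors' modified BIC (the data layout and regression setup are illustrative assumptions):

        import numpy as np
        from sklearn.linear_model import LassoLarsIC

        def infer_grn_bic(expr, target_idx):
            """BIC-guided sparse regression for one target gene.
            expr: (samples, genes) expression matrix. The target gene is
            regressed on all other genes, with the l1 penalty level chosen
            by BIC; nonzero coefficients are read as inferred regulators."""
            y = expr[:, target_idx]
            X = np.delete(expr, target_idx, axis=1)
            model = LassoLarsIC(criterion='bic').fit(X, y)
            return model.coef_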

  6. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
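
    As a point of reference, the FastICA separation step itself is compact with scikit-learn; the toy mixture below, with synthetic tones standing in for fetal and maternal cardiac traces over a 19-sensor cluster, is entirely illustrative:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 4000)
        sources = np.stack([np.sin(2 * np.pi * 2.3 * t),   # fetal-like tone
                            np.sin(2 * np.pi * 1.2 * t),   # maternal-like tone
                            rng.normal(size=t.size)])      # ambient noise
        mixing = rng.normal(size=(19, 3))                  # 19-channel cluster
        X = (mixing @ sources).T                           # (samples, channels)

        ica = FastICA(n_components=3, random_state=0)
        S_est = ica.fit_transform(X)    # estimated independent components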

  7. Multi-frame blind deconvolution using sparse priors

    NASA Astrophysics Data System (ADS)

    Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-05-01

    In this paper, we propose a method for multi-frame blind deconvolution. Two sparse priors, i.e., the natural image gradient prior and an l1-norm based prior, are used to regularize the latent image and the point spread functions (PSFs), respectively. An alternating minimization approach is adopted to solve the resulting optimization problem. We use both gray-scale blurred frames from a data set and color frames captured by a digital camera to verify the robustness of our approach. Experimental results show that the proposed method can accurately reconstruct PSFs with complex structures and that the restored images are of high quality.

  8. Universal data-based method for reconstructing complex networks with binary-state dynamics

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng

    2017-03-01

    To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics from using binary data contaminated by noise and missing data. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.

  9. Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction

    PubMed Central

    Wang, Bigong; Li, Liang

    2015-01-01

    As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method provides an ideal way to restore signals using two related dictionaries and sparse representation. It has been proven that this method performs well in medical image reconstruction with highly undersampled data, especially for multimodality imaging like CT-MRI hybrid reconstruction. Because of its outstanding strengths, short signal acquisition time, and low radiation dose, DDL has attracted broad interest in both academic and industrial fields. In this review article, we summarize DDL's development history, review the latest advances, and discuss its future directions and potential applications in medical imaging. Meanwhile, this paper points out that DDL is still at an initial stage, and further studies are necessary to improve the method, especially in dictionary training. PMID:26089956

  10. An energy-based sparse representation of ultrasonic guided-waves for online damage detection of pipelines under varying environmental and operational conditions

    NASA Astrophysics Data System (ADS)

    Eybpoosh, Matineh; Berges, Mario; Noh, Hae Young

    2017-01-01

    This work addresses the main challenges in the real-world application of guided waves for damage detection of pipelines, namely their complex nature and sensitivity to environmental and operational conditions (EOCs). The different propagation characteristics of the wave modes, and their distinctive sensitivities to different types and ranges of EOCs and to different damage scenarios, make the interpretation of diffuse-field guided-wave signals a challenging task. This paper proposes an unsupervised feature-extraction method for online damage detection of pipelines under varying EOCs. The objective is to simplify diffuse-field guided-wave signals to a sparse subset of the arrivals that contains the majority of the energy carried by the signal. We show that such a subset is less affected by EOCs than the complete time-traces of the signals. Moreover, it is shown that the effects of damage on the energy of this subset suppress those of EOCs. A set of signals from the undamaged state of a pipe is used as reference records. The reference dataset is used to extract the aforementioned sparse representation. During the monitoring stage, the sparse subset, representing the undamaged pipe, will not accurately reconstruct the energy of a signal from a damaged pipe. In other words, such a sparse representation of guided waves is sensitive to the occurrence of damage. Therefore, the energy estimation errors are used as damage-sensitive features for damage detection purposes. A diverse set of experimental analyses is conducted to verify the hypotheses of the proposed feature-extraction approach and to validate the detection performance of the damage-sensitive features. The empirical validation of the proposed method includes (1) detecting a structural abnormality in an aluminum pipe, under varying temperature at different ranges, (2) detecting multiple small damages of different types, at different locations, in a steel pipe, under varying temperature, (3) detecting a structural

  11. Sparse coding based feature representation method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender

    In this dissertation, we study sparse coding based feature representation methods for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifier to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further

  12. Sparse Coding and Dictionary Learning Based on the MDL Principle

    DTIC Science & Technology

    2010-10-01

    dependencies, in a natural way. We demonstrate the performance of the proposed framework with results for image denoising and classification tasks... The idea of using MDL for sparse signal coding was explored in the context of wavelet-based image denoising [6, 7]. These pioneering works were... restricted to denoising using a fixed orthonormal basis (wavelets). In addition, the underlying probabilistic models used to describe the transform

  13. Dim moving target tracking algorithm based on particle discriminative sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Zhengzhou; Li, Jianing; Ge, Fengzeng; Shao, Wanxing; Liu, Bing; Jin, Gang

    2016-03-01

    A small dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms at low signal-to-noise ratio (SNR). A target tracking algorithm based on particle filter and discriminative sparse representation is proposed in this paper to cope with the uncertainty of dim moving target tracking. For a particle filter (PF), which can achieve excellent performance even for non-linear and non-Gaussian motion, the weight of every particle is the crucial factor in ensuring the accuracy of dim target tracking. In a discriminative over-complete dictionary constructed from the image sequence, the target dictionary describes the target signal and the background dictionary embeds the background clutter. The difference between target particles and background particles is enhanced to a great extent, and the weight of every particle is then measured by means of the residual after reconstruction using the prescribed number of target atoms and their corresponding coefficients. The movement state of the dim moving target is then estimated and finally tracked by these weighted particles. Meanwhile, the subspace of the over-complete dictionary is updated online by a stochastic estimation algorithm. Some experiments are conducted, and the experimental results show that the proposed algorithm can improve the performance of moving target tracking by enhancing the consistency between the posterior probability distribution and the moving target state.

  14. Note: simultaneous measurements of magnetization and electrical transport signal by a reconstructed superconducting quantum interference device magnetometer.

    PubMed

    Wang, H L; Yu, X Z; Wang, S L; Chen, L; Zhao, J H

    2013-08-01

    We have developed a sample rod which makes the conventional superconducting quantum interference device magnetometer capable of performing magnetization and electrical transport measurements simultaneously. The sample holder attached to the end of a 140 cm long sample rod is a nonmagnetic drinking straw or a 1.5 mm wide silicon strip with a small magnetic background signal. Ferromagnetic semiconductor (Ga,Mn)As films are used to test the new sample rod, and the results are in good agreement with previous reports.

  15. Note: Simultaneous measurements of magnetization and electrical transport signal by a reconstructed superconducting quantum interference device magnetometer

    NASA Astrophysics Data System (ADS)

    Wang, H. L.; Yu, X. Z.; Wang, S. L.; Chen, L.; Zhao, J. H.

    2013-08-01

    We have developed a sample rod which makes the conventional superconducting quantum interference device magnetometer capable of performing magnetization and electrical transport measurements simultaneously. The sample holder attached to the end of a 140 cm long sample rod is a nonmagnetic drinking straw or a 1.5 mm wide silicon strip with a small magnetic background signal. Ferromagnetic semiconductor (Ga,Mn)As films are used to test the new sample rod, and the results are in good agreement with previous reports.

  16. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees

    PubMed Central

    2013-01-01

    Background: Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. The drawback of these selections is their exclusive reliance on data coverage, without consideration of actual signal in the data, and they might thus not deliver optimal data matrices in terms of potential phylogenetic signal. To circumvent this problem, we have developed a heuristic, implemented in a software package called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage, and (2) reduces supermatrices with a simple hill-climbing procedure to submatrices with high total information content. Results: With simulated matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selecting a data subset with the herein proposed approach increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill-climbing procedure performed well whether considering the information content or just simple presence/absence information of genes. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content and supporting a tree with high average bootstrap support was most successful when the information content of genes was considered. Conclusions: Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis, outperforming the

  17. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study.

    PubMed

    Jones, Ryan M; O'Reilly, Meaghan A; Hynynen, Kullervo

    2013-07-21

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337-43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source's emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system's resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring techniques currently exist.
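
    In outline, the underlying delay-and-sum passive mapping, before any skull-specific corrections, can be sketched as follows; the sound speed, the circular shifting, and all array sizes are toy assumptions, and the aberration corrections would enter as per-receiver delay adjustments:

        import numpy as np

        def passive_map(signals, fs, sensor_pos, grid, c=1485.0):
            """Toy passive delay-and-sum energy map. signals: (n_rx, n_t)
            receiver traces sampled at fs; sensor_pos: (n_rx, 3) positions;
            grid: iterable of 3-D candidate source points."""
            n_rx, n_t = signals.shape
            image = np.zeros(len(grid))
            for g, r in enumerate(grid):
                delays = np.linalg.norm(sensor_pos - r, axis=1) / c  # seconds
                shifts = np.round(delays * fs).astype(int)
                summed = np.zeros(n_t)
                for i in range(n_rx):
                    # circular shift: a toy simplification of time alignment
                    summed += np.roll(signals[i], -shifts[i])
                image[g] = np.sum(summed ** 2)                       # energy
            return image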

  18. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    PubMed Central

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring technique currently exists. PMID:23807573

  19. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; O'Reilly, Meaghan A.; Hynynen, Kullervo

    2013-07-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337-43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring techniques currently exist.

  20. Sparse-view proton computed tomography using modulated proton beams

    SciTech Connect

    Lee, Jiseoc; Kim, Changhwan; Cho, Seungryong; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong

    2015-02-15

    Purpose: Proton imaging that uses a modulated proton beam and an intensity detector allows a relatively fast image acquisition compared to the imaging approach based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The model of a geometric straight ray assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Methods: Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged objects and scanned at 40 views equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. To improve the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). The performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Results: Objects of higher electron density have been reconstructed more accurately than those of lower density. The bone, for example, has been reconstructed

  1. Detail-preserving controllable deformation from sparse examples.

    PubMed

    Huang, Haoda; Yin, KangKang; Zhao, Ling; Qi, Yue; Yu, Yizhou; Tong, Xin

    2012-08-01

    Recent advances in laser scanning technology have made it possible to faithfully scan a real object with tiny geometric details, such as pores and wrinkles. However, a faithful digital model should not only capture static details of the real counterpart but also be able to reproduce the deformed versions of such details. In this paper, we develop a data-driven model that has two components: the first accommodates smooth large-scale deformations and the second captures high-resolution details. Large-scale deformations are based on a nonlinear mapping between sparse control points and bone transformations. A global mapping, however, would fail to synthesize realistic geometries from sparse examples for highly deformable models with a large range of motion. The key is to train a collection of mappings defined over regions locally in both the geometry and the pose space. Deformable fine-scale details are generated from a second nonlinear mapping between the control points and per-vertex displacements. We apply our modeling scheme to scanned human hand models, scanned face models, face models reconstructed from multiview video sequences, and manually constructed dinosaur models. Experiments show that our deformation models, learned from extremely sparse training data, are effective and robust in synthesizing highly deformable models with rich fine features, for keyframe animation as well as performance-driven animation. We also compare our results with those obtained by alternative techniques.

  2. Blind deconvolution using an improved L0 sparse representation

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Li, Qi; Xu, Zhihai; Chen, Yueting

    2014-09-01

    In this paper, we present a method for single-image blind deconvolution. Many common blind deconvolution methods need to previously generate a salient image, while this paper presents a novel L0 sparse expression to directly solve the ill-posed problem. It has no need to filter the blurred image as a restoration step and can use the gradient information as a fidelity term during optimization. The key to the blind deconvolution problem is to estimate an accurate kernel. First, based on an L2 sparse expression using the gradient operator as a prior, the kernel can be estimated roughly and efficiently in the frequency domain. We adopt a multi-scale scheme that estimates the blur kernel from coarser to finer levels. After the estimation of each level's kernel, the L0 sparse representation is employed as the fidelity term during restoration. After derivation, the L0 norm can be approximately converted to a sum term and an L1-norm term, which can be addressed by the Split-Bregman method. By using the estimated blur kernel and the TV deconvolution model, the final restored image is obtained. Experimental results show that the proposed method is fast and can accurately reconstruct the kernel, especially when the blur is motion blur, defocus blur, or a superposition of the two. The restored image is of higher quality than that of some state-of-the-art algorithms.

  3. Sequential time interleaved random equivalent sampling for repetitive signal

    NASA Astrophysics Data System (ADS)

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve its efficiency. However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time interleaved. A prototype realization of the proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
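
    The block measurement matrix construction follows directly from the Whittaker-Shannon formula; a minimal sketch, in which the grid size, equivalent-time spacing, and variable names are illustrative assumptions:

        import numpy as np

        def res_block_matrix(sample_times, n_grid, T_eq):
            """Block measurement matrix for one RES acquisition run: row m
            maps the length-n_grid equivalent-time signal (grid spacing
            T_eq) to the sample taken at sample_times[m], via sinc
            interpolation (np.sinc is the normalized sinc)."""
            sample_times = np.asarray(sample_times, dtype=float)
            n = np.arange(n_grid)
            return np.sinc((sample_times[:, None] - n[None, :] * T_eq) / T_eq)

        # Stacking one block per acquisition run gives the equivalent
        # measurement matrix for the reconstruction stage, e.g. (sizes
        # hypothetical):
        # A = np.vstack([res_block_matrix(t_r, 1024, 25e-12)
        #                for t_r in run_sample_times])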

  4. Adaptive dictionary learning in sparse gradient domain for image recovery.

    PubMed

    Liu, Qiegen; Wang, Shanshan; Ying, Leslie; Peng, Xi; Zhu, Yanjie; Liang, Dong

    2013-12-01

    Image recovery from undersampled data has always been challenging due to its implicit ill-posed nature but becomes fascinating with the emerging compressed sensing (CS) theory. This paper proposes a novel gradient based dictionary learning method for image recovery, which effectively integrates the popular total variation (TV) and dictionary learning technique into the same framework. Specifically, we first train dictionaries from the horizontal and vertical gradients of the image and then reconstruct the desired image using the sparse representations of both derivatives. The proposed method enables local features in the gradient images to be captured effectively, and can be viewed as an adaptive extension of the TV regularization. The results of various experiments on MR images consistently demonstrate that the proposed algorithm efficiently recovers images and presents advantages over the current leading CS reconstruction approaches.

  5. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

    There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.
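
    The core idea of down-weighting the reconstruction error of noisy inputs can be expressed as a small objective; the sketch below uses per-dimension weights and a KL sparsity penalty, and all names and constants are illustrative assumptions rather than the paper's exact formulation:

        import numpy as np

        def weighted_recon_loss(x, x_hat, w, h, rho=0.05, beta=3.0):
            """Weighted sparse-autoencoder objective. x, x_hat: (batch, d)
            inputs and reconstructions; w: (d,) per-dimension weights that
            down-weight noisy inputs; h: (batch, k) hidden activations in
            (0, 1); rho: target mean activation; beta: sparsity weight."""
            recon = np.mean(w * (x - x_hat) ** 2)          # weighted error
            rho_hat = np.clip(h.mean(axis=0), 1e-6, 1 - 1e-6)
            kl = np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
            return recon + beta * kl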

  6. Sparse Coding for Alpha Matting.

    PubMed

    Johnson, Jubin; Varnousfaderani, Ehsan Shahrian; Cholakkal, Hisham; Rajan, Deepu

    2016-07-01

    Existing color sampling-based alpha matting methods use the compositing equation to estimate alpha at a pixel from the pairs of foreground (F) and background (B) samples. The quality of the matte depends on the selected (F,B) pairs. In this paper, the matting problem is reinterpreted as a sparse coding of pixel features, wherein the sum of the codes gives the estimate of the alpha matte from a set of unpaired F and B samples. A non-parametric probabilistic segmentation provides a certainty measure on the pixel belonging to foreground or background, based on which a dictionary is formed for use in sparse coding. By removing the restriction to conform to (F,B) pairs, this method allows for better alpha estimation from multiple F and B samples. The same framework is extended to videos, where the requirement of temporal coherence is handled effectively. Here, the dictionary is formed by samples from multiple frames. A multi-frame graph model, as opposed to a single image as for image matting, is proposed that can be solved efficiently in closed form. Quantitative and qualitative evaluations on a benchmark dataset are provided to show that the proposed method outperforms the current state-of-the-art in image and video matting.

  7. Sparse Coding for Alpha Matting.

    PubMed

    Johnson, Jubin; Varnousfaderani, Ehsan; Cholakkal, Hisham; Rajan, Deepu

    2016-04-21

    Existing color sampling based alpha matting methods use the compositing equation to estimate alpha at a pixel from pairs of foreground (F) and background (B) samples. The quality of the matte depends on the selected (F,B) pairs. In this paper, the matting problem is reinterpreted as a sparse coding of pixel features, wherein the sum of the codes gives the estimate of the alpha matte from a set of unpaired F and B samples. A non-parametric probabilistic segmentation provides a certainty measure on the pixel belonging to foreground or background, based on which a dictionary is formed for use in sparse coding. By removing the restriction to conform to (F,B) pairs, this method allows for better alpha estimation from multiple F and B samples. The same framework is extended to videos, where the requirement of temporal coherence is handled effectively. Here, the dictionary is formed by samples from multiple frames. A multi-frame graph model, as opposed to a single image as for image matting, is proposed that can be solved efficiently in closed form. Quantitative and qualitative evaluations on a benchmark dataset are provided to show that the proposed method outperforms current state-of-the-art in image and video matting.

  8. Universal Priors for Sparse Modeling(PREPRINT)

    DTIC Science & Technology

    2009-08-01

    Universal Priors for Sparse Modeling (Invited Paper), by Ignacio Ramírez, Federico Lecumberry, and Guillermo Sapiro. IMA Preprint Series #2276, August 2009. The record also cites: I. Ramírez, F. Lecumberry, and G. Sapiro, "Sparse modeling with universal priors and learned incoherent dictionaries," submitted to NIPS, 2009.

  9. Discriminative Sparse Representations in Hyperspectral Imagery

    DTIC Science & Technology

    2010-03-01

    Discriminative Sparse Representations in Hyperspectral Imagery, by Alexey Castrodad, Zhengming Xing, John Greer, Edward Bosch, and Lawrence Carin, 2010. The surviving fragments of the record indicate that the report concerns sparse modeling for hyperspectral imagery, covering classification and unsupervised labeling (clustering), including a recent non-parametric (Bayesian) approach to sparse modeling.

  10. Array signal recovery algorithm for a single-RF-channel DBF array

    NASA Astrophysics Data System (ADS)

    Zhang, Duo; Wu, Wen; Fang, Da Gang

    2016-12-01

    An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.

  11. Dictionary learning method for joint sparse representation-based image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals as sparse linear combinations of prototype signal atoms that form a dictionary. JSR assumes that different signals from the various sensors of the same scene form an ensemble: these signals share a common sparse component, and each individual signal additionally owns an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), we propose a novel dictionary learning method for JSR (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure once with a singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm, which is often used in previous JSR-based fusion algorithms. To capture image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries; MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.
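
    For reference, the closed-form MOD update that motivates MODJSR can be sketched as below; the JSR-specific structure and the SVD step of MODJSR are omitted, so this is plain MOD under the usual least-squares formulation.

    ```python
    import numpy as np

    def mod_dictionary_update(X, A, eps=1e-8):
        """Method-of-optimal-directions update for fixed sparse codes.

        X: (n_features, n_samples) training signals.
        A: (n_atoms, n_samples) sparse coefficient matrix.
        Solves min_D ||X - D A||_F^2 in closed form, then renormalises
        the atoms to unit l2 norm.
        """
        gram = A @ A.T
        D = X @ A.T @ np.linalg.inv(gram + eps * np.eye(gram.shape[0]))
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), eps)
        return D
    ```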

  12. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  13. Mathematical strategies for filtering complex systems: Regularly spaced sparse observations

    SciTech Connect

    Harlim, J. Majda, A.J.

    2008-05-01

    Real-time filtering of noisy turbulent signals through sparse observations on a regularly spaced mesh is a notoriously difficult and important prototype filtering problem. Simpler off-line test criteria are proposed here as guidelines for filter performance for these stiff multi-scale filtering problems in the context of linear stochastic partial differential equations with turbulent solutions. Filtering turbulent solutions of the stochastically forced dissipative advection equation through sparse observations is developed as a stringent test bed for filter performance with sparse regular observations. The standard ensemble transform Kalman filter (ETKF) has poor skill on the test bed and, surprisingly, even suffers from filter divergence at observable times with resonant mean forcing and a decaying energy spectrum in the partially observed signal. Systematic alternative filtering strategies are developed here, including the Fourier Domain Kalman Filter (FDKF) and various reduced filters called the Strongly Damped Approximate Filter (SDAF), Variance Strongly Damped Approximate Filter (VSDAF), and Reduced Fourier Domain Kalman Filter (RFDKF), which operate only on the primary Fourier modes associated with the sparse observation mesh while nevertheless incorporating into the approximate filter various features of the interaction with the remaining modes. It is shown that these much cheaper alternative filters have significant skill on the test bed of turbulent solutions, exceeding ETKF and, in various regimes, often exceeding FDKF, provided that the approximate filters are guided by the off-line test criteria. The skill of the various approximate filters depends on the energy spectrum of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a precise fashion elucidated here.
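
    A minimal sketch of the Fourier-domain filtering idea: when the dynamics are diagonal in Fourier space, each mode can be filtered with independent scalar Kalman recursions. The damped-mode dynamics and noise statistics below are generic assumptions, not the specific stochastic PDE test-bed models of the paper.

    ```python
    import numpy as np

    def fourier_domain_kalman(obs, a, sigma2, r, dt):
        """Scalar Kalman recursions applied independently per Fourier mode.

        obs:    (n_steps, n_modes) noisy observations of mode amplitudes.
        a:      (n_modes,) damping rate per mode (assumed dynamics).
        sigma2: (n_modes,) stationary variance per mode.
        r:      observation noise variance; dt: time between observations.
        """
        a = np.asarray(a, dtype=float)
        sigma2 = np.asarray(sigma2, dtype=float)
        F = np.exp(-a * dt)              # per-mode propagator
        Q = sigma2 * (1.0 - F ** 2)      # per-mode process noise variance
        m = np.zeros(obs.shape[1], dtype=obs.dtype)
        P = sigma2.copy()
        filtered = np.empty_like(obs)
        for n, y in enumerate(obs):
            m, P = F * m, F ** 2 * P + Q     # predict
            K = P / (P + r)                  # Kalman gain
            m = m + K * (y - m)              # update
            P = (1.0 - K) * P
            filtered[n] = m
        return filtered
    ```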

  14. Effects of sparse sampling schemes on image quality in low-dose CT

    SciTech Connect

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-11-15

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce a health risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on the image quality. Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases showed promising results. These sampling schemes produced images with similar image quality compared to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators.

  15. Optimization of the signal selection of exclusively reconstructed decays of B0 and B/s mesons at CDF-II

    SciTech Connect

    Doerr, Christian

    2006-06-23

    The work presented in this thesis is mainly focused on the application in a Δms measurement. Chapter 1 starts with a general theoretical introduction on the unitarity triangle with a focus on the impact of a Δms measurement. Chapter 2 then describes the experimental setup, consisting of the Tevatron collider and the CDF II detector, that was used to collect the data. In chapter 3 the concept of parameter estimation using binned and unbinned maximum likelihood fits is laid out. In addition, an introduction to the NeuroBayes® neural network package is given. Chapter 4 outlines the analysis steps, walking the path from the trigger-level selection to fully reconstructed B meson candidates. In chapter 5 the concepts and formulas that form the ingredients of an unbinned maximum likelihood fit of Δms (Δmd) from a sample of reconstructed B mesons are discussed. Chapter 6 then introduces the novel method of using neural networks to achieve an improved signal selection. First the method is developed, tested, and validated using the decay B0 → Dπ, D → Kππ, and then applied to the kinematically very similar decay Bs → Dsπ, Ds → Φπ, Φ → KK. Chapter 7 uses events selected by the neural network selection as input to an unbinned maximum likelihood fit and extracts the B0 lifetime and Δmd. In addition, an amplitude scan and an unbinned maximum likelihood fit of Δms are performed, applying the neural network selection developed for the decay channel Bs → Dsπ, Ds → Φπ, Φ → KK. Finally, chapter 8 summarizes and gives an outlook.

  16. Consistent sparse representations of EEG ERP and ICA components based on wavelet and chirplet dictionaries.

    PubMed

    Qiu, Jun-Wei; Zao, John K; Wang, Peng-Hua; Chou, Yu-Hsiang

    2010-01-01

    A randomized search algorithm for sparse representations of EEG event-related potentials (ERPs) and their statistically independent components is presented. This algorithm combines the greedy matching pursuit (MP) technique with the covariance matrix adaptation evolution strategy (CMA-ES) to select a small number of signal atoms from over-complete wavelet and chirplet dictionaries that offer the best approximations of quasi-sparse ERP signals. During the search process, adaptive pruning of signal parameters is used to eliminate redundant or degenerate atoms. As a result, the CMA-ES/MP algorithm is capable of producing accurate, efficient, and consistent sparse representations of ERP signals and their ICA components. This paper explains the working principles of the algorithm and presents preliminary results of its use.
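
    The greedy matching pursuit core of the algorithm can be sketched as follows over a fixed, column-normalised dictionary; the CMA-ES search over wavelet and chirplet atom parameters and the adaptive pruning step are omitted.

    ```python
    import numpy as np

    def matching_pursuit(x, D, n_atoms):
        """Greedy MP: repeatedly select the atom most correlated with the
        residual and subtract its contribution.

        x: (n,) real signal; D: (n, n_total_atoms) dictionary with
        l2-normalised columns.
        """
        residual = x.astype(float).copy()
        picked, coeffs = [], []
        for _ in range(n_atoms):
            corr = D.T @ residual
            k = int(np.argmax(np.abs(corr)))
            picked.append(k)
            coeffs.append(corr[k])
            residual -= corr[k] * D[:, k]
        return picked, coeffs, residual
    ```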

  17. Sparse and stable Markowitz portfolios.

    PubMed

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-07-28

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only a few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolios as special cases, but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by the Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
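
    A rough sketch of the penalized-regression reformulation using scikit-learn's lasso; the budget and target-return constraints of the actual formulation are simplified here to a post-hoc normalisation, and `lam` is an illustrative choice rather than the paper's calibration.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_markowitz(R, target_return, lam=1e-3):
        """l1-penalised regression of a constant target return on the
        asset return matrix R (n_days, n_assets); weights are then
        normalised to sum to one (assumes at least one active position).
        """
        y = np.full(R.shape[0], target_return)
        fit = Lasso(alpha=lam, fit_intercept=False, max_iter=50000).fit(R, y)
        w = fit.coef_
        return w / w.sum()
    ```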

  18. Effect of asymmetrical eddy currents on magnetic diagnosis signals for equilibrium reconstruction in the Sino-UNIted Spherical Tokamak.

    PubMed

    Jiang, Y Z; Tan, Y; Gao, Z; Wang, L

    2014-11-01

    The vacuum vessel of the Sino-UNIted Spherical Tokamak was split into two insulated hemispheres, both of which were insulated from the central cylinder. The eddy currents flowing in the vacuum vessel would become asymmetrical due to this discontinuity. A 3D finite element model was applied in order to study the eddy currents. The modeling results indicated that when the poloidal field (PF) was applied, the induced eddy currents would flow in the toroidal direction in the center of the hemispheres and would be forced to turn to the poloidal and radial directions at the insulated slit. Since the eddy currents converged on the top and bottom of the vessel, the current densities there tended to be much higher than those in the equatorial plane. Moreover, the eddy currents on the top and bottom of the vacuum vessel had the same direction when current flowed in the PF coils. These features resulted in the phases of the signals on the top and bottom flux loops leading the PF waveforms.

  19. Energy-based scheme for reconstruction of piecewise constant signals observed in the movement of molecular machines

    NASA Astrophysics Data System (ADS)

    Rosskopf, Joachim; Paul-Yuan, Korbinian; Plenio, Martin B.; Michaelis, Jens

    2016-08-01

    Analyzing the physical and chemical properties of single DNA-based molecular machines such as polymerases and helicases requires tracking stepping motion on the length scale of base pairs. Although high-resolution instruments have been developed that are capable of reaching that limit, individual steps are oftentimes hidden by experimental noise, which complicates data processing. Here we present an effective two-step algorithm which detects steps in a high-bandwidth signal by minimizing an energy-based model (energy-based step finder, EBS). First, an efficient convex denoising scheme is applied which allows compression to tuples of amplitudes and plateau lengths. Second, a combinatorial clustering algorithm formulated on a graph is used to assign steps to the tuple data while accounting for prior information. Performance of the algorithm was tested on Poissonian stepping data simulated based on published kinetics data of RNA polymerase II (pol II). Comparison to existing step-finding methods shows that EBS is superior in speed while providing competitive step-detection results, especially in challenging situations. Moreover, the capability to detect backtracked intervals in experimental data of pol II, as well as to detect stepping behavior of the Phi29 DNA packaging motor, is demonstrated.

  20. Poisson-gap sampling and forward maximum entropy reconstruction for enhancing the resolution and sensitivity of protein NMR data.

    PubMed

    Hyberts, Sven G; Takeuchi, Koh; Wagner, Gerhard

    2010-02-24

    The Fourier transform has been the gold standard for transforming data from the time domain to the frequency domain in many spectroscopic methods, including NMR spectroscopy. While reliable, it has the drawback that it requires a grid of uniformly sampled data points, which is not efficient for decaying signals, and it also suffers from artifacts when dealing with nondecaying signals. Over several decades, many alternative sampling and transformation schemes have been proposed. Their common problem is that relative signal amplitudes are not well preserved. Here we demonstrate the superior performance of a sine-weighted Poisson-gap distribution sparse-sampling scheme combined with forward maximum entropy (FM) reconstruction. While the relative signal amplitudes are well preserved, we also find that the signal-to-noise ratio is enhanced up to 4-fold per unit of data acquisition time relative to traditional linear sampling.
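
    A sketch of a sine-weighted Poisson-gap schedule in the spirit of the scheme described above: gaps between sampled points are Poisson random variables whose mean follows a sine ramp, so sampling stays dense near both ends of the acquisition. The tuning loop and the exact ramp shape are assumptions, not the published implementation.

    ```python
    import numpy as np

    def poisson_gap_schedule(n_total, n_keep, seed=0):
        """Pick ~n_keep of n_total grid points with sine-weighted
        Poisson-distributed gaps (small gaps near both schedule ends).
        """
        rng = np.random.default_rng(seed)
        lam = n_total / n_keep - 1.0          # initial mean gap size
        for _ in range(200):                  # tune lam toward n_keep points
            points, i = [], 0
            while i < n_total:
                points.append(i)
                frac = (i + 0.5) / n_total
                i += 1 + rng.poisson(lam * np.sin(np.pi * frac))
            if len(points) == n_keep:
                break
            lam *= len(points) / n_keep       # too many points -> widen gaps
        return np.array(points)
    ```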

  1. Wide field of view multifocal scanning microscopy with sparse sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Wu, Jigang

    2016-02-01

    We propose to use sparsely sampled line scans with a sparsity-based reconstruction method to obtain images in a wide field of view (WFOV) multifocal scanning microscope. In the WFOV microscope, we used a holographically generated irregular focus grid to scan the sample in one dimension and then reconstructed the sample image from line scans by measuring the transmission of the foci through the sample during scanning. The line scans were randomly spaced, with average spacing larger than the Nyquist sampling requirement, and the image was recovered with sparsity-based reconstruction techniques. With this scheme, the acquisition data can be significantly reduced and the restriction to equally spaced foci positions can be removed, indicating simpler experimental requirements. We built a prototype system and demonstrated the effectiveness of the reconstruction by recovering microscopic images of a U.S. Air Force target and an onion skin cell microscope slide with 40, 60, and 80% missing data with respect to the Nyquist sampling requirement.

  2. Rank-One and Transformed Sparse Decomposition for Dynamic Cardiac MRI

    PubMed Central

    Xiu, Xianchao; Kong, Lingchen

    2015-01-01

    It is challenging and inspiring to achieve high spatiotemporal resolution in dynamic cardiac magnetic resonance imaging (MRI). In this paper, we introduce two novel models and algorithms to reconstruct dynamic cardiac MRI data from under-sampled k-t space data. In contrast to the classical low-rank and sparse model, we use a rank-one and transformed sparse model to exploit the correlations in the dataset. In addition, we propose a projected alternative direction method (PADM) and an alternative hard thresholding method (AHTM) to solve our proposed models. Numerical experiments on cardiac perfusion and cardiac cine MRI data demonstrate improvement in performance. PMID:26247010

  3. Finding One Community in a Sparse Graph

    NASA Astrophysics Data System (ADS)

    Montanari, Andrea

    2015-10-01

    We consider a random sparse graph with bounded average degree, in which a subset of vertices has higher connectivity than the background. In particular, the average degree inside this subset of vertices is larger than outside (but still bounded). Given a realization of such a graph, we aim at identifying the hidden subset of vertices. This can be regarded as a model for the problem of finding a tightly knit community in a social network, or a cluster in a relational dataset. In this paper we present two sets of contributions: (i) We use the cavity method from spin glass theory to derive an exact phase diagram for the reconstruction problem. In particular, as the difference in edge probability increases, the problem undergoes two phase transitions, a static phase transition and a dynamic one. (ii) We establish rigorous bounds on the dynamic phase transition and prove that, above a certain threshold, a local algorithm (belief propagation) correctly identifies most of the hidden set. Below the same threshold, no local algorithm can achieve this goal. However, in this regime the subset can be identified by exhaustive search. For small hidden sets and large average degree, the phase transition for local algorithms takes an intriguingly simple form. Local algorithms succeed with high probability for deg_in − deg_out > √(deg_out/e) and fail for deg_in − deg_out < √(deg_out/e) (with deg_in, deg_out the average degrees inside and outside the community). We argue that spectral algorithms are also ineffective in the latter regime. It is an open problem whether any polynomial-time algorithm might succeed for deg_in − deg_out < √(deg_out/e).
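
    As a worked instance of the threshold: with an average outside degree of 10, local algorithms succeed once the degree gap exceeds roughly 1.92.

    ```python
    import math

    # Threshold from the abstract: local algorithms succeed when
    # deg_in - deg_out > sqrt(deg_out / e).
    deg_out = 10
    threshold = math.sqrt(deg_out / math.e)
    print(f"required degree gap: {threshold:.2f}")  # -> 1.92
    ```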

  4. Switching Markov decoders for asynchronous trajectory reconstruction from ECoG signals in monkeys for BCI applications.

    PubMed

    Schaeffer, Marie-Caroline; Aksenova, Tetiana

    2017-03-10

    Brain-Computer Interfaces (BCIs) are systems which translate brain neural activity into commands for external devices. BCI users generally alternate between No-Control (NC) and Intentional Control (IC) periods. NC/IC discrimination is crucial for clinical BCIs, particularly when they provide neural control over complex effectors such as exoskeletons. Numerous BCI decoders focus on the estimation of continuously valued limb trajectories from neural signals. The integration of NC support into continuous decoders is investigated in the present article. Most discrete/continuous BCI hybrid decoders rely on static state models which do not exploit the dynamics of NC/IC state succession. A hybrid decoder, referred to as the Markov Switching Linear Model (MSLM), is proposed in the present article. The MSLM assumes that the NC/IC state sequence is generated by a first-order Markov chain and performs dynamic NC/IC state detection. Linear continuous movement models are probabilistically combined using the NC and IC state posterior probabilities yielded by the state decoder. The proposed decoder is evaluated on the task of asynchronous wrist position decoding from high-dimensional space-time-frequency ElectroCorticoGraphic (ECoG) features in monkeys. The MSLM is compared with another dynamic hybrid decoder proposed in the literature, namely a Switching Kalman Filter (SKF). A comparison is additionally drawn with a Wiener filter decoder which infers NC states by thresholding trajectory estimates. The MSLM decoder is found to outperform both the SKF and the thresholded Wiener filter decoder in terms of False Positive Ratio and NC/IC state detection error. It additionally surpasses the SKF with respect to the Pearson Correlation Coefficient and Root Mean Squared Error between true and estimated continuous trajectories.
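
    The probabilistic combination step of the MSLM can be sketched as follows. The Markov state decoder that produces the NC/IC posteriors and the exact form of the linear readouts are omitted, so the shapes and names here are assumptions for illustration.

    ```python
    import numpy as np

    def mslm_combine(features, w_nc, w_ic, posteriors):
        """Mix two per-state linear trajectory models with NC/IC posteriors.

        features:   (n_steps, n_features) neural features.
        w_nc, w_ic: (n_features,) linear readouts for the NC and IC states.
        posteriors: (n_steps, 2) columns [p(NC), p(IC)] from the state decoder.
        """
        y_nc = features @ w_nc   # NC model: output expected near zero
        y_ic = features @ w_ic   # IC model: movement decoder
        return posteriors[:, 0] * y_nc + posteriors[:, 1] * y_ic
    ```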

  5. Approximate Orthogonal Sparse Embedding for Dimensionality Reduction.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Yang, Jian; Zhang, David

    2016-04-01

    Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing the sparsity or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in the case of small sample sizes.

  6. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for the implementations are the CRAY-2S/4-128 and the Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which the dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
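
    On a modern stack, the same computation (a few of the largest singular triplets of a large sparse matrix via a Lanczos-type method) is readily available; this sketch uses SciPy on a synthetic random sparse matrix rather than the term-document or Jacobian matrices discussed above.

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    # Six largest singular triplets of a large, very sparse matrix.
    A = sparse_random(10000, 2000, density=1e-3, random_state=0, format="csr")
    U, s, Vt = svds(A, k=6)            # ARPACK (implicitly restarted Lanczos)
    order = np.argsort(s)[::-1]        # svds returns ascending singular values
    U, s, Vt = U[:, order], s[order], Vt[order]
    print(s)
    ```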

  7. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.

  8. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000-bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, in adaptive control of automated equipment, and, in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
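
    A toy sketch of the read/write mechanics described above: hard locations with random binary addresses are activated when their Hamming distance to the probe address falls within an access radius, writes accumulate ±1 counters, and reads threshold the summed counters. The sizes below are illustrative and much smaller than the prototype's.

    ```python
    import numpy as np

    class SparseDistributedMemory:
        """Minimal Kanerva-style SDM with counter storage."""

        def __init__(self, n_locations=2000, n_bits=256, radius=112, seed=0):
            rng = np.random.default_rng(seed)
            self.addresses = rng.integers(0, 2, (n_locations, n_bits))
            self.counters = np.zeros((n_locations, n_bits), dtype=np.int32)
            self.radius = radius

        def _active(self, address):
            # Locations within Hamming radius of the probe address.
            return np.sum(self.addresses != address, axis=1) <= self.radius

        def write(self, address, word):
            # word is a 0/1 vector; counters move by +1/-1 per bit.
            self.counters[self._active(address)] += 2 * word - 1

        def read(self, address):
            sums = self.counters[self._active(address)].sum(axis=0)
            return (sums > 0).astype(int)
    ```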

  9. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.

  10. Generalization of spectral fidelity with flexible measures for the sparse representation classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhu, Yong; Huang, Xin; Li, Jiayi

    2016-10-01

    Sparse representation classification (SRC) is becoming a promising tool for hyperspectral image (HSI) classification, where the Euclidean spectral distance (ESD) is widely used to reflect the fidelity between the original and reconstructed signals. In this paper, a generalized model is proposed to extend SRC by characterizing the spectral fidelity with flexible similarity measures. To validate the flexibility, several typical similarity measures are included in the generalized model: the spectral angle similarity (SAS), the spectral information divergence (SID), the structural similarity index measure (SSIM), and the ESD. Furthermore, a general solution based on a gradient descent technique is used to solve the nonlinear optimization problem formulated with the flexible similarity measures. To test the generalized model, two actual HSIs were used, and the experimental results confirm the ability of the proposed model to accommodate the various spectral similarity measures. Performance comparisons with the ESD, SAS, SID, and SSIM criteria were also conducted, and the results consistently show the advantages of the generalized model for HSI classification in terms of overall accuracy and kappa coefficient.
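
    As an example of swapping in a different fidelity term, the spectral angle between an original and a reconstructed pixel spectrum (the SAS measure named above) can be computed as:

    ```python
    import numpy as np

    def spectral_angle(x, x_hat):
        """Angle (radians) between original and reconstructed spectra;
        smaller values indicate higher spectral fidelity."""
        cos = np.dot(x, x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    ```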

  11. A sparse Bayesian representation for super-resolution of cardiac MR images.

    PubMed

    Velasco, Nelson F; Rueda, Andrea; Santa Marta, Cristina; Romero, Eduardo

    2017-02-01

    High-quality cardiac magnetic resonance (CMR) images can hardly be obtained when intrinsic noise sources are present, namely heart and breathing movements. Although heart images may be acquired in real time, the image quality is very limited, and most sequences use ECG gating to capture images at each stage of the cardiac cycle over several heart beats. This paper presents a novel super-resolution algorithm that improves cardiac image quality using a sparse Bayesian approach. The high-resolution version of the cardiac image is constructed by combining the information of the low-resolution series (observations from different non-orthogonal series composed of anisotropic voxels) with a prior distribution of the high-resolution local coefficients that enforces sparsity. In addition, a global prior, extracted from the observed data, regularizes the solution. Quantitative and qualitative validations were performed on synthetic and real images with respect to a baseline, showing an average increase of 2.8 to 3.2 dB in Peak Signal-to-Noise Ratio (PSNR), 1.8% to 2.6% in the Structural Similarity Index (SSIM), and 2% to 4% in quality assessment (IL-NIQE). The obtained results demonstrate that the proposed method is able to accurately reconstruct a cardiac image, recovering the original shape with fewer artifacts and low noise.

  12. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    PubMed

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility of approximating it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.

  13. Sparse recovery of the multimodal and dispersive characteristics of Lamb waves.

    PubMed

    Harley, Joel B; Moura, José M F

    2013-05-01

    Guided waves in plates, known as Lamb waves, are characterized by complex, multimodal, and frequency dispersive wave propagation, which distort signals and make their analysis difficult. Estimating these multimodal and dispersive characteristics from experimental data becomes a difficult, underdetermined inverse problem. To accurately and robustly recover these multimodal and dispersive properties, this paper presents a methodology referred to as sparse wavenumber analysis based on sparse recovery methods. By utilizing a general model for Lamb waves, waves propagating in a plate structure, and robust l1 optimization strategies, sparse wavenumber analysis accurately recovers the Lamb wave's frequency-wavenumber representation with a limited number of surface mounted transducers. This is demonstrated with both simulated and experimental data in the presence of multipath reflections. With accurate frequency-wavenumber representations, sparse wavenumber synthesis is then used to accurately remove multipath interference in each measurement and predict the responses between arbitrary points on a plate.

  14. Sparse-aperture adaptive optics

    NASA Astrophysics Data System (ADS)

    Tuthill, Peter; Lloyd, James; Ireland, Michael; Martinache, Frantz; Monnier, John; Woodruff, Henry; ten Brummelaar, Theo; Turner, Nils; Townes, Charles

    2006-06-01

    Aperture masking interferometry and Adaptive Optics (AO) are two of the competing technologies attempting to recover diffraction-limited performance from ground-based telescopes. However, there are good arguments that these techniques should be viewed as complementary, not competitive. Masking has been shown to deliver superior PSF calibration, rejection of atmospheric noise and robust recovery of phase information through the use of closure phases. However, this comes at the penalty of loss of flux at the mask, restricting the technique to bright targets. Adaptive optics, on the other hand, can reach a fainter class of objects but suffers from the difficulty of calibration of the PSF which can vary with observational parameters such as seeing, airmass and source brightness. Here we present results from a fusion of these two techniques: placing an aperture mask downstream of an AO system. The precision characterization of the PSF enabled by sparse-aperture interferometry can now be applied to deconvolution of AO images, recovering structure from the traditionally-difficult regime within the core of the AO-corrected transfer function. Results of this program from the Palomar and Keck adaptive optical systems are presented.

  15. Resistant multiple sparse canonical correlation.

    PubMed

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca.

  16. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  17. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization step yields a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.

  18. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that the processing of ISAR imaging can be expressed mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into a 1D one by a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for the reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
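
    The equivalence behind the Kronecker discussion is the identity vec(A S Bᵀ) = (B ⊗ A) vec(S), which the snippet below checks numerically; it also makes clear why 2D algorithms such as 2D-SL0 avoid ever forming the much larger Kronecker matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 16))    # range-dimension measurement matrix
    S = rng.normal(size=(16, 12))   # 2D scene (toy, dense here)
    B = rng.normal(size=(10, 12))   # azimuth-dimension measurement matrix

    X = A @ S @ B.T
    lhs = X.flatten(order="F")                  # vec(X), column stacking
    rhs = np.kron(B, A) @ S.flatten(order="F")  # (B kron A) vec(S)
    print(np.allclose(lhs, rhs))                # True
    ```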

  19. Reconstruction of Undersampled Periodic Signals.

    DTIC Science & Technology

    1986-01-01

    By A. J. Silva. Technical Report TR-514, January 1986. This work was supported in part by the Advanced Research Projects Agency, monitored by ONR.

  20. Robust feature point matching with sparse model.

    PubMed

    Jiang, Bo; Tang, Jin; Luo, Bin; Lin, Liang

    2014-12-01

    Feature point matching that incorporates pairwise constraints can be cast as an integer quadratic programming (IQP) problem. Since it is NP-hard, approximate methods are required. The optimal solution for the IQP matching problem is discrete, binary, and thus sparse in nature. This motivates us to use a sparse model for the feature point matching problem. The main advantage of the proposed sparse feature point matching (SPM) method is that it generates a sparse solution and thus naturally imposes the discrete mapping constraints approximately in the optimization process. Therefore, it can optimize the IQP matching problem in an approximately discrete domain. In addition, an efficient algorithm can be derived to solve the SPM problem. Promising experimental results on both synthetic point set matching and real-world image feature set matching tasks show the effectiveness of the proposed feature point matching method.

  1. LASER APPLICATIONS IN MEDICINE: Analysis of distortions in the velocity profiles of suspension flows inside a light-scattering medium upon their reconstruction from the optical coherence Doppler tomograph signal

    NASA Astrophysics Data System (ADS)

    Bykov, A. V.; Kirillin, M. Yu; Priezzhev, A. V.

    2005-11-01

    Model signals from one and two plane flows of a particle suspension are obtained for an optical coherence Doppler tomograph (OCDT) by the Monte Carlo method. The optical properties of the particles mimic those of non-aggregating erythrocytes. The flows are considered in a stationary scattering medium with optical properties close to those of skin. It is shown that, as the flow position depth increases, the flow velocity determined from the OCDT signal becomes smaller than the specified velocity and the reconstructed profile extends in the direction of the distant boundary, accompanied by a shift of its maximum. In the case of two flows, an increase in the velocity of the near-surface flow leads to overestimated velocities in the reconstructed profile of the second flow. Numerical simulations were performed using a multiprocessor parallel-architecture computer.

  2. Sparse and compositionally robust inference of microbial ecological networks.

    PubMed

    Kurtz, Zachary D; Müller, Christian L; Miraldi, Emily R; Littman, Dan R; Blaser, Martin J; Bonneau, Richard A

    2015-05-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) is required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies.
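
    A rough sketch of the pipeline's two main ingredients, a centered log-ratio transform for compositionality followed by sparse inverse-covariance selection; the neighborhood-selection variant and the model selection used by the published method are omitted, and `alpha` is an illustrative value.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    def spiec_easi_sketch(counts, alpha=0.05, pseudo=1.0):
        """counts: (n_samples, n_otus) OTU count matrix."""
        logx = np.log(counts + pseudo)
        clr = logx - logx.mean(axis=1, keepdims=True)  # centered log-ratio
        model = GraphicalLasso(alpha=alpha).fit(clr)
        precision = model.precision_
        off_diag = ~np.eye(precision.shape[0], dtype=bool)
        network = (np.abs(precision) > 1e-6) & off_diag
        return network, precision
    ```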

  3. Sparse and Compositionally Robust Inference of Microbial Ecological Networks

    PubMed Central

    Kurtz, Zachary D.; Müller, Christian L.; Miraldi, Emily R.; Littman, Dan R.; Blaser, Martin J.; Bonneau, Richard A.

    2015-01-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) is required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies.

  4. Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing.

    PubMed

    Fang, Jun; Zhang, Lizao; Li, Hohgbin

    2016-04-20

    We consider the problem of recovering two-dimensional (2-D) block-sparse signals with unknown cluster patterns. Two-dimensional block-sparse patterns arise naturally in many practical applications, such as foreground detection and inverse synthetic aperture radar imaging. To exploit the underlying block-sparse structure, we propose a 2-D pattern-coupled hierarchical Gaussian prior model. The proposed pattern-coupled hierarchical Gaussian prior model imposes a soft coupling mechanism among neighboring coefficients through their shared hyperparameters. This coupling mechanism enables effective and automatic learning of the underlying irregular cluster patterns, without requiring any a priori knowledge of the block partition of the sparse signals. We develop a computationally efficient Bayesian inference method which integrates the generalized approximate message passing (GAMP) technique with the proposed prior model. Simulation results show that the proposed method offers competitive recovery performance for a range of 2-D sparse signal recovery and image processing applications over existing methods, while achieving a significant reduction in computational complexity.

  5. Single-Trial Sparse Representation-Based Approach for VEP Extraction

    PubMed Central

    Yu, Nannan; Hu, Funian; Zou, Dexuan; Ding, Qisheng

    2016-01-01

    Sparse representation is a powerful tool in signal denoising, and visual evoked potentials (VEPs) have been proven to have strong sparsity over an appropriate dictionary. Inspired by this idea, we present in this paper a novel sparse representation-based approach to solving the VEP extraction problem. The extraction process is performed in three stages. First, instead of using the mixed signals containing the electroencephalogram (EEG) and VEPs, we utilise an EEG from a previous trial, which did not contain VEPs, to identify the parameters of the EEG autoregressive (AR) model. Second, instead of the moving average (MA) model, sparse representation is used to model the VEPs in the autoregressive-moving average (ARMA) model. Finally, we calculate the sparse coefficients and derive the VEPs using the AR model. We then tested the performance of the proposed algorithm with synthetic and real data, comparing the results with those of an AR model with exogenous input modelling and a mixed overcomplete dictionary-based sparse component decomposition method. Utilising the synthetic data, the algorithms are employed to estimate the latencies of the P100 of VEPs corrupted by added simulated EEG at different signal-to-noise ratio (SNR) values. The validations demonstrate that our method can well preserve the details of the VEPs for latency estimation, even in low-SNR environments. PMID:27807541

  6. Online Dictionary Learning for Sparse Coding

    DTIC Science & Technology

    2009-04-01

    The record's abstract is not preserved; surviving fragments cite sparse coding applications such as denoising (Elad & Aharon, 2006) and classification (Raina et al., 2007; Mairal et al., 2008a), along with references including Aharon, Elad, and Bruckstein (2006), "The K-SVD: an algorithm for designing overcomplete dictionaries for sparse representations," IEEE Trans. SP; "Least angle regression," Ann. Statist. (2004); and Elad and Aharon (2006), "Image denoising via sparse and redundant representations."

  7. Sparse CSEM inversion driven by seismic coherence

    NASA Astrophysics Data System (ADS)

    Guo, Zhenwei; Dong, Hefeng; Kristensen, Åge

    2016-12-01

    Marine controlled source electromagnetic (CSEM) data inversion for hydrocarbon exploration is often challenging due to high computational cost, physical memory requirements, and the low resolution of the obtained resistivity map. This paper aims to enhance both the speed and resolution of CSEM inversion by introducing structural geological information into the inversion algorithm. A coarse mesh is generated for Occam's inversion, with fewer parameters than in the fine regular mesh. This sparse mesh is defined as a coherence-based irregular (IC) sparse mesh, which is based on vertices extracted from available geological information. Inversion results on synthetic data illustrate that the IC sparse mesh has a smaller inversion computational cost compared to the regular dense (RD) mesh. It also has a higher resolution than a regular sparse (RS) mesh for the same number of estimated parameters. In order to study how the IC sparse mesh reduces the computational time, four different meshes are generated for Occam's inversion. As a result, the IC sparse mesh reduces the computational cost while keeping the resolution as good as that of a fine regular mesh. The IC sparse mesh reduces the computational cost of the matrix operations for model updates, and when the number of estimated parameters is reduced to a limited value, the computational cost becomes independent of the number of parameters. For a test model with two resistive layers, the inversion result using the IC sparse mesh has higher resolution in both the horizontal and vertical directions. Overall, representing significant geological information in the IC mesh can improve the resolution of the resistivity models obtained from inversion of CSEM data.

  8. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual-pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with the number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing four resonant species (N2, O2, H2, and CO2), both with and without noise, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least-squares fitting of the signal, as opposed to unweighted least-squares fitting of the signal or of its square root, was shown to produce the least random error and to minimize bias error in the fitted parameters.
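
    The weighted least-squares comparison above amounts to dividing the residuals by the per-point noise level before fitting. A minimal sketch with SciPy, where `model` stands in for the application-specific synthetic CARS spectrum generator:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_spectrum(measured, model, theta0, noise_std):
        """Weighted least-squares fit of a measured spectrum.

        model:     callable mapping a parameter vector (temperature,
                   species mole fractions, ...) to a synthetic spectrum.
        noise_std: per-point noise standard deviation (the weights).
        """
        def residuals(theta):
            return (measured - model(theta)) / noise_std

        return least_squares(residuals, theta0)
    ```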

  9. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.

  10. Sparse Extreme Learning Machine for Classification

    PubMed Central

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M. Brandon

    2016-01-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least square support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which is solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM. PMID:25222727

  11. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

    Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far from the peak performance of a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d}, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which had not previously been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in time linear in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However, we show that our results can be generalized to arbitrarily sized dense blocks, and to many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.
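
    As a toy illustration of the problem statement, the sketch below greedily collects nonoverlapping, fully dense 2x2 blocks from a boolean sparsity pattern. This naive row-major scan is only a hypothetical baseline, not the paper's linear-time 2/3-approximation algorithm.

      import numpy as np

      def greedy_2x2_blocks(mask):
          # Collect nonoverlapping fully dense 2x2 blocks from a boolean
          # sparsity pattern by a single row-major scan.
          used = np.zeros_like(mask, dtype=bool)
          blocks = []
          m, n = mask.shape
          for i in range(m - 1):
              for j in range(n - 1):
                  if mask[i:i+2, j:j+2].all() and not used[i:i+2, j:j+2].any():
                      blocks.append((i, j))           # top-left corner of the block
                      used[i:i+2, j:j+2] = True
          return blocks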

  12. Sparse extreme learning machine for classification.

    PubMed

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M Brandon

    2014-10-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least squares support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which is solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM.

  13. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    PubMed

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation has proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, the many nonlocal patches similar to a given patch can provide a nonlocal constraint on the local structure. In this paper, we incorporate image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  14. Compressive Fresnel digital holography using Fresnelet based sparse representation

    NASA Astrophysics Data System (ADS)

    Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith

    2015-04-01

    Compressive sensing (CS) in digital holography requires only a small number of pixel-level detections in the hologram plane for accurate image reconstruction, which is achieved by exploiting the sparsity of the object wave. When the input object fields are non-sparse in the spatial domain, CS demands a suitable sparsification method such as wavelet decomposition. The Fresnelet, a wavelet basis well suited to processing Fresnel digital holograms, is an efficient sparsifier for the complex Fresnel field obtained by the Fresnel transform of the object field and minimizes the mutual coherence between the sensing and sparsifying matrices involved in CS. The paper demonstrates the merits of Fresnelet-based sparsification in compressive digital Fresnel holography over the conventional method of sparsifying the input object field. Phase-shifting digital Fresnel holography (PSDH) is used to retrieve the complex Fresnel field for the chosen problem. Results from a numerical experiment are presented as a proof of concept.

  15. The application of a sparse, distributed memory to the detection, identification and manipulation of physical objects

    NASA Technical Reports Server (NTRS)

    Kanerva, P.

    1986-01-01

    To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because it works with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.

  16. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
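
    The HO figure of merit used in such studies is SNR^2 = ds^T K^{-1} ds, where ds is the mean difference between signal-present and signal-absent images and K is the pooled covariance. A minimal sample-based sketch follows; it assumes enough sample images for K to be invertible, whereas practical studies often use channelized approximations instead.

      import numpy as np

      def hotelling_snr(imgs_present, imgs_absent):
          # Sample-based Hotelling observer SNR (rows = images, columns = pixels):
          # SNR^2 = ds^T K^{-1} ds, with ds the mean image difference and K the
          # pooled covariance of the two hypotheses.
          ds = imgs_present.mean(axis=0) - imgs_absent.mean(axis=0)
          K = 0.5 * (np.cov(imgs_present, rowvar=False) + np.cov(imgs_absent, rowvar=False))
          template = np.linalg.solve(K, ds)       # Hotelling template w = K^{-1} ds
          return np.sqrt(ds @ template)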

  17. Efficient, sparse biological network determination

    PubMed Central

    August, Elias; Papachristodoulou, Antonis

    2009-01-01

    Background Determining the interaction topology of biological systems is a topic that currently attracts significant research interest. Typical models for such systems take the form of differential equations that involve polynomial and rational functions. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data much harder. The use of linear dynamics and linearization techniques that have been proposed in the past can circumvent this, but the general problem of developing efficient algorithms for models that provide more accurate system descriptions remains open. Results We present a network determination algorithm that can treat model descriptions with polynomial and rational functions and which does not make use of linearization. For this purpose, we make use of the observation that biochemical networks are in general 'sparse' and minimize the 1-norm of the decision variables (sum of weighted network connections) while constraints keep the error between the data and the network dynamics small. The emphasis of our methodology is on determining the interconnection topology rather than the specific reaction constants, and it takes into account the necessary properties that a chemical reaction network should have – something that techniques based on linearization cannot. The problem can be formulated as a Linear Program, a convex optimization problem, for which efficient algorithms are available that can treat large data sets efficiently and handle uncertainties in data or model parameters. Conclusion The presented methodology is able to predict with accuracy and efficiency the connectivity structure of a chemical reaction network with mass action kinetics and of a gene regulatory network from simulation data, even if the dynamics of these systems are non-polynomial (rational) and uncertainties in the data are taken into account. It also produces a network structure that can explain the real experimental data.
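
    The optimization described here, minimizing the 1-norm of the connection weights while keeping the data-dynamics mismatch small, can be posed as a linear program by splitting each weight into positive and negative parts. A generic scipy sketch is below; A, b, and the tolerance eps stand in for the regression system built from the time-series data and are not the paper's exact construction.

      import numpy as np
      from scipy.optimize import linprog

      def sparse_fit_lp(A, b, eps):
          # min ||w||_1  subject to  |A w - b| <= eps, posed as a linear program
          # by splitting w = u - v with u, v >= 0 (objective: sum(u) + sum(v)).
          m, n = A.shape
          c = np.ones(2 * n)
          A_ub = np.vstack([np.hstack([A, -A]),      #  A(u - v) <= b + eps
                            np.hstack([-A, A])])     # -A(u - v) <= eps - b
          b_ub = np.concatenate([b + eps, eps - b])
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
          return res.x[:n] - res.x[n:]               # sparse connection weights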

  18. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-07

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  19. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  20. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
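
    The computational kernel behind group sparse coding of this kind is the group soft-thresholding (proximal) operator, which shrinks each cluster of coefficients as a unit so that surviving nonzeros occur in groups. A minimal sketch of that generic operator (not the full DL-GSGR alternation between coding and dictionary updating):

      import numpy as np

      def group_soft_threshold(x, groups, lam):
          # Proximal operator of lam * sum_g ||x_g||_2: each group of coefficients
          # is shrunk toward zero as a unit, so nonzeros appear in clusters.
          out = np.zeros_like(x)
          for g in groups:                            # g: index array for one group
              norm = np.linalg.norm(x[g])
              if norm > lam:
                  out[g] = x[g] * (1.0 - lam / norm)  # shrink, keep direction
          return out

      # A weak group is zeroed entirely; a strong group is only shrunk.
      x = np.array([0.05, -0.02, 0.03, 2.0, -1.5, 0.7])
      print(group_soft_threshold(x, [np.arange(3), np.arange(3, 6)], lam=0.5))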

  1. Velocity analysis using high-resolution semblance based on sparse hyperbolic Radon transform

    NASA Astrophysics Data System (ADS)

    Gong, Xiangbo; Wang, Shengchao; Zhang, Tianze

    2016-11-01

    Semblance measures the lateral coherency of seismic events in a common mid-point gather, and it has been widely used for normal-moveout-based velocity estimation. In this paper, we propose a new velocity analysis method using high-resolution semblance based on the sparse hyperbolic Radon transform (SHRT). Conventional semblance can be defined as the ratio of signal energy to total energy in a time gate. We replace the signal energy with the square of the sparse Radon panel and replace the total energy with the sparse Radon panel of the squared data. Because of the sparsity-constrained inversion of SHRT, the new approach can produce higher-resolution semblance spectra than conventional semblance. We test the new semblance on synthetic and field data to demonstrate the improvements in velocity analysis.
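
    For reference, the conventional semblance that serves as the starting point can be written, over a sliding time gate, as S = sum_t (sum_i a[t,i])^2 / (N sum_t sum_i a[t,i]^2); the SHRT version replaces the numerator and denominator with sparse Radon panels. A sketch of the conventional quantity, with gather layout and gate length as assumptions:

      import numpy as np

      def semblance(gather, gate=5):
          # Conventional semblance for a moveout-corrected gather (rows = time
          # samples, columns = traces), computed over a sliding time gate.
          window = np.ones(gate)
          num = np.convolve(gather.sum(axis=1) ** 2, window, mode="same")
          den = gather.shape[1] * np.convolve((gather ** 2).sum(axis=1), window, mode="same")
          return num / np.maximum(den, 1e-12)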

  2. Coherence analysis using canonical coordinate decomposition with applications to sparse processing and optimal array deployment

    NASA Astrophysics Data System (ADS)

    Azimi-Sadjadi, Mahmood R.; Pezeshki, Ali; Wade, Robert L.

    2004-09-01

    Sparse array processing methods are typically used to improve the spatial resolution of sensor arrays for the estimation of direction of arrival (DOA). The fundamental assumption behind these methods is that the signals received by the sparse sensors (or a group of sensors) are coherent. However, coherence may vary significantly with changes in environmental, terrain, and operating conditions. In this paper, canonical correlation analysis is used to study the variations in coherence between pairs of sub-arrays in a sparse array problem. The data set for this study is a subset of an acoustic signature data set acquired from the US Army TACOM-ARDEC, Picatinny Arsenal, NJ. This data set is collected using three wagon-wheel type arrays with five microphones. The results show that in nominal operating conditions, i.e., no extreme wind noise or masking effects from trees, buildings, etc., the signals collected at different sensor arrays are indeed coherent even at distant node separation.
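
    Canonical correlation analysis of two sub-array recordings reduces, after whitening, to the singular values of the product of their orthonormal bases; values near one indicate coherent signals across the sub-arrays. A minimal QR-based sketch of generic CCA, not tied to the actual acoustic data layout:

      import numpy as np

      def canonical_correlations(X, Y):
          # Canonical correlations between two data blocks (rows = snapshots,
          # columns = sensors of each sub-array). After orthonormalizing each
          # centered block with a QR factorization, the singular values of
          # Qx^T Qy are the canonical correlations.
          Qx, _ = np.linalg.qr(X - X.mean(axis=0))
          Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
          return np.clip(np.linalg.svd(Qx.T @ Qy, compute_uv=False), 0.0, 1.0)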

  3. Accelerating k-t sparse using k-space aliasing for dynamic MRI imaging.

    PubMed

    Pawar, Kamlesh; Egan, Gary F; Zhang, Jingxin

    2013-01-01

    Dynamic imaging is challenging in MRI, and acceleration techniques are usually needed to acquire the dynamic scene. K-t sparse is an acceleration technique based on compressed sensing; it acquires a reduced amount of data in k-t space by pseudo-random ordering of phase encodes and reconstructs the dynamic scene by exploiting the sparsity of k-t space in a transform domain. Another recently introduced technique accelerates dynamic MRI scans by acquiring k-space data in aliased form. The k-space aliasing technique uses multiple RF excitation pulses to deliberately acquire aliased k-space data. During reconstruction, a simple Fourier transformation along time frames can unalias the acquired data. This paper presents a novel method that combines k-t sparse and k-space aliasing to achieve higher acceleration than either technique alone. In this particular combination, a very critical factor of compressed sensing, the ratio of the number of acquired phase encodes to the total number of phase encodes (n/N), increases; therefore the compressed sensing component of the reconstruction performs exceptionally well. A comparison of k-t sparse and the proposed technique for acceleration factors of 4, 6 and 8 is demonstrated in simulation on cardiac data.

  4. Detect signals of interdecadal climate variations from an enhanced suite of reconstructed precipitation products since 1850 using the historical station data from Global Historical Climatology Network and the dynamical patterns derived from Global Precipitation Climatology Project

    NASA Astrophysics Data System (ADS)

    Shen, S. S.

    2015-12-01

    This presentation describes the detection of interdecadal climate signals in newly reconstructed precipitation data from 1850 to the present. Examples are the precipitation signatures of the East Asian Monsoon (EAM), the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). The new reconstruction dataset is an enhanced edition of a suite of global precipitation products reconstructed by Spectral Optimal Gridding of Precipitation Version 1.0 (SOGP 1.0). The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75S, 75N). This enhanced version has three different temporal resolutions (5-day, monthly, and annual) and two different spatial resolutions (2.5 deg and 5.0 deg). It also has a friendly Graphical User Interface (GUI). SOGP uses a multivariate regression method based on an empirical orthogonal function (EOF) expansion. The Global Precipitation Climatology Project (GPCP) precipitation data from 1981-2010 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed according to the number of EOF modes used in the reconstruction. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of the CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction has been validated against GPCP data after 1979. Our reconstruction successfully displays the 1877 El Nino (see the attached figure), which is considered a validation before 1900. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual. This work is a joint effort of San Diego State University (Sam Shen, Gregori Clarke, Christian Junjinger, Nancy Tafolla, Barbara Sperberg, and

  5. Dictionary construction in sparse methods for image restoration

    SciTech Connect

    Wohlberg, Brendt

    2010-01-01

    Sparsity-based methods have achieved very good performance in a wide variety of image restoration problems, including denoising, inpainting, super-resolution, and source separation. These methods are based on the assumption that the image to be reconstructed may be represented as a superposition of a few known components, and the appropriate linear combination of components is estimated by solving an optimization problem such as Basis Pursuit De-Noising (BPDN). Considering that the K-SVD constructs a dictionary optimised for mean performance over a training set, it is not too surprising that better performance can be achieved by selecting a custom dictionary for each individual block to be reconstructed. The nearest-neighbor dictionary construction can be understood geometrically as a method for estimating the local projection onto the manifold of image blocks, whereas the K-SVD dictionary makes more sense within a source-coding framework (it is presented as a generalization of the k-means algorithm for constructing a VQ codebook) and is therefore, it could be argued, less appropriate in principle for reconstruction problems. One can, of course, motivate the use of the K-SVD in reconstruction applications on practical grounds, as it avoids the computational expense of constructing a different dictionary for each block to be denoised. Since the performance of the nearest-neighbor dictionary decreases when the dictionary becomes sufficiently large, this method is also superior to the approach of utilizing the entire training set as a dictionary (and this can also be understood within the image-block manifold model). In practical terms, the tradeoff is between the computational cost of a nearest-neighbor search (which can be performed very efficiently) and the increased cost of the sparse optimization.

  6. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which poses a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in the feature extraction of fault information, the theory also confronts inevitable performance degradation, because relatively weak fault information does not have a sufficiently prominent and sparse representation. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of the feature information. This work exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved by the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art methods in the field is demonstrated on the vibration signals of an experimental rig of aircraft engine bearings.

  7. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, thus the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far away from the peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d} producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2x2 blocks that runs in linear time in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks such as diagonal blocks and cross blocks and present complexity analysis and approximation algorithms.

  8. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  9. Contrast adaptive total p-norm variation minimization approach to CT reconstruction for artifact reduction in reduced-view brain perfusion CT

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Won; Kim, Jong-Hyo

    2011-03-01

    Perfusion CT (PCT) examinations are increasingly used for the diagnosis of acute brain diseases such as hemorrhage and infarction, because the functional map images they produce, such as regional cerebral blood flow (rCBF), regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency work-up of patient care. However, a typical PCT scans the same slices several tens of times after injection of contrast agent, which leads to a much increased radiation dose and inevitably raises concern about radiation-induced cancer risk. Reducing the number of projection views in combination with TV minimization reconstruction is regarded as an option for radiation reduction. However, reconstruction artifacts due to an insufficient number of X-ray projections become problematic, especially when high contrast enhancement signals are present or patient motion occurs. In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that can reduce reconstruction artifacts effectively by using different p-norms for high contrast and low contrast objects. In the proposed method, high contrast components are first reconstructed using thresholded projection data and low p-norm total variation to reflect sparseness in both the projection and reconstruction spaces. Next, the projection data are modified to contain only low contrast objects by creating projection data of the reconstructed high contrast components and subtracting them from the original projection data. Then, the low contrast projection data are reconstructed by using a relatively high p-norm TV minimization technique and are combined with the reconstructed high contrast component images to produce the final reconstructed images. The proposed algorithm was applied to a numerical phantom and a clinical data set of a brain PCT exam, and the resultant images were compared with those using filtered back projection (FBP) and conventional TV minimization.

  10. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but it is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense that they are not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  11. Recovery of Clustered Sparse Signals from Compressive Measurements

    DTIC Science & Technology

    2009-12-21

    microarrays, MIMO channel equalization, source localization in sensor networks, and magnetoencephalography [3, 5, 6, 10–14]. It has been shown that...Packard Fellowship and by MADALGO (Center for Massive Data Algorithmics, funded by the Danish National Research Association) and by NSF grant CCF

  12. Protein family classification using sparse Markov transducers.

    PubMed

    Eskin, E; Grundy, W N; Singer, Y

    2000-01-01

    In this paper we present a method for classifying proteins into families using sparse Markov transducers (SMTs). Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wildcards in the conditioning sequences. Because substitutions of amino acids are common in protein families, incorporating wildcards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. We also present efficient data structures to improve the memory usage of the models. We evaluate SMTs by building protein family classifiers using the Pfam database and compare our results to previously published results.

  13. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  14. Sparse coding joint decision rule for ear print recognition

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Melaab, Djamel; Mekhalfi, Mohamed Lamine

    2016-09-01

    Human ear recognition has been promoted as a profitable biometric over the past few years. Compared with other modalities, such as the face and iris, that have undergone significant investigation in the literature, the ear pattern is still relatively under-studied. We put forth a sparse coding-induced decision-making rule for ear recognition. It jointly involves the reconstruction residuals and the respective reconstruction coefficients pertaining to the input features (co-occurrence of adjacent local binary patterns) for a further fusion. We show in particular that combining both components (i.e., the residuals as well as the coefficients) yields better outcomes than when either of them is considered singly. The proposed method has been evaluated on two benchmark datasets, namely IITD1 (125 subjects) and IITD2 (221 subjects). The recognition rates of the suggested scheme amount to 99.5% and 98.95% on the two datasets, respectively, which suggests that our method compares favorably with reference state-of-the-art methodologies. Furthermore, experiments show that the presented scheme manifests promising robustness under large-scale occlusion scenarios.

  15. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback. However, KSRC requires a predetermined kernel function, and the selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual, while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  16. Reconstruction of Neural Activity from EEG Data Using Dynamic Spatiotemporal Constraints.

    PubMed

    Giraldo-Suarez, E; Martinez-Vargas, J D; Castellanos-Dominguez, G

    2016-11-01

    We present a novel iterative regularized algorithm (IRA) for neural activity reconstruction that explicitly includes spatiotemporal constraints, performing a trade-off between space and time resolutions. For improving the spatial accuracy provided by electroencephalography (EEG) signals, we explore a basis set that describes the smooth, localized areas of potentially active brain regions. In turn, we enhance the time resolution by adding the Markovian assumption for brain activity estimation at each time period. Moreover, to deal with applications that have either distributed or localized neural activity, the spatiotemporal constraints are expressed through ℓ2 and ℓ1 norms, respectively. For the purpose of validation, we estimate the neural reconstruction performance in time and space separately. Experimental testing is carried out on artificial data, simulating stationary and non-stationary EEG signals. Also, validation is accomplished on two real-world databases, one holding Evoked Potentials and another with EEG data of focal epilepsy. Moreover, responses of functional magnetic resonance imaging for the former EEG data had been measured in advance, allowing us to contrast our findings. The obtained results show that the ℓ1-based IRA produces a spatial resolution that is comparable to the one achieved by some widely used sparse-based estimators of brain activity. At the same time, the ℓ2-based IRA outperforms other similar smooth solutions, providing a spatial resolution that is lower than that of the sparse ℓ1-based solution. As a result, the proposed IRA is a promising method for improving the accuracy of brain activity reconstruction.

  17. Penile Reconstruction

    PubMed Central

    Salgado, Christopher J.; Chim, Harvey; Tang, Jennifer C.; Monstrey, Stan J.; Mardini, Samir

    2011-01-01

    A variety of surgical options exists for penile reconstruction. The key to success of therapy is holistic management of the patient, with attention to the psychological aspects of treatment. In this article, we review reconstructive modalities for various types of penile defects inclusive of partial and total defects as well as the buried penis, and also describe recent basic science advances, which may promise new options for penile reconstruction. PMID:22851914

  18. A high resolution spectrum reconstruction algorithm using compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing

    2015-07-01

    This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing in a composite structure. The strain field of a corrugated structure is simulated by finite element analysis. Then the reflected spectrum is calculated using an improved transfer matrix algorithm. A K-means singular value decomposition (K-SVD) sparse dictionary is trained. In the test, a spectrum with a limited number of sample points can be obtained, and the high-resolution spectrum is reconstructed by solving a sparse representation equation. Compared with other conventional bases, this method performs better. The match rate of the recovered spectrum with the original spectrum is over 95%.
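
    Solving the sparse representation equation against a learned K-SVD dictionary is typically done with a greedy pursuit. Below is a minimal orthogonal matching pursuit (OMP) sketch; the paper does not specify its solver, so OMP, the dictionary D, the measurement y, and the sparsity level k are all assumptions for illustration.

      import numpy as np

      def omp(D, y, k):
          # Orthogonal matching pursuit: greedily pick the dictionary atom most
          # correlated with the residual, then refit all selected atoms by least
          # squares. Columns of D are assumed l2-normalized.
          support, residual = [], y.copy()
          coeffs = np.zeros(0)
          for _ in range(k):
              j = int(np.argmax(np.abs(D.T @ residual)))
              if j not in support:
                  support.append(j)
              coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coeffs
          x = np.zeros(D.shape[1])
          x[support] = coeffs
          return x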

  19. Sparse coding-based correlaton model for land-use scene classification in high-resolution remote-sensing images

    NASA Astrophysics Data System (ADS)

    Kunlun, Qi; Xiaochun, Zhang; Baiyan, Wu; Huayi, Wu

    2016-10-01

    High-resolution remote-sensing images are increasingly applied in land-use classification problems. Land-use scenes are often very complex and difficult to represent. Consequently, the recognition of multiple land-cover classes is a continuing research question. We propose a classification framework based on a sparse coding-based correlaton (termed sparse correlaton) model to solve this challenge. Specifically, a general mapping strategy is presented to label visual words and generate sparse coding-based correlograms, which can exploit the spatial co-occurrences of visual words. A compact spatial representation without loss of discrimination is achieved through adaptive vector quantization of the correlogram in land-use scene classification. Moreover, instead of using K-means for visual word encoding in the original correlaton model, our proposed sparse correlaton model uses sparse coding to achieve a lower reconstruction error. Experiments on a public ground truth image dataset of 21 land-use classes demonstrate that our sparse coding-based correlaton method can improve the performance of land-use scene classification and outperform many existing bag-of-visual-words-based methods.

  20. Improved statistical power with a sparse shape model in detecting an aging effect in the hippocampus and amygdala

    NASA Astrophysics Data System (ADS)

    Chung, Moo K.; Kim, Seung-Goo; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matthew J.; Davidson, Richard J.

    2014-03-01

    The sparse regression framework has been widely used in medical image processing and analysis. However, it has rarely been used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show its improvement of statistical power. Traditionally, the LB eigenfunctions are used as a basis for intrinsically representing surface shapes, as a form of Fourier descriptors. To reduce high-frequency noise, only the first few terms are used in the expansion and the higher-frequency terms are simply thrown away. However, some lower-frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we present an LB-based method that filters out only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which reduces the error of incorrectly detecting false negatives; hence the statistical power improves. The sparse shape model is then applied to investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.

  1. Social biases determine spatiotemporal sparseness of ciliate mating heuristics

    PubMed Central

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate’s initial subjective bias, responsiveness, or preparedness, as defined by Stevens’ Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The

  2. Second SIAM conference on sparse matrices: Abstracts. Final technical report

    SciTech Connect

    1996-12-31

    This report contains abstracts on the following topics: invited and long presentations (IP1 & LP1); sparse matrix reordering & graph theory I; sparse matrix tools & environments I; eigenvalue computations I; iterative methods & acceleration techniques I; applications I; parallel algorithms I; sparse matrix reordering & graph theory II; sparse matrix tools & environments II; least squares & optimization I; iterative methods & acceleration techniques II; applications II; eigenvalue computations II; least squares & optimization II; parallel algorithms II; sparse direct methods; iterative methods & acceleration techniques III; eigenvalue computations III; and sparse matrix reordering & graph theory III.

  3. Adaptive Nonlocal Sparse Representation for Dual-Camera Compressive Hyperspectral Imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Shi, Guangming; Wu, Feng; Zeng, Wenjun

    2016-10-25

    Leveraging the compressive sensing (CS) theory, coded aperture snapshot spectral imaging (CASSI) provides an efficient solution to recover 3D hyperspectral data from a 2D measurement. The dual-camera design of CASSI, by adding an uncoded panchromatic measurement, enhances the reconstruction fidelity while maintaining the snapshot advantage. In this paper, we propose an adaptive nonlocal sparse representation (ANSR) model to boost the performance of dual-camera compressive hyperspectral imaging (DCCHI). Specifically, the CS reconstruction problem is formulated as a 3D-cube-based sparse representation to make full use of the nonlocal similarity in both the spatial and spectral domains. Our key observation is that the panchromatic image, besides playing the role of a direct measurement, can be further exploited to help the nonlocal similarity estimation. Therefore, we design a joint similarity metric by adaptively combining the internal similarity within the reconstructed hyperspectral image and the external similarity within the panchromatic image. In this way, the fidelity of CS reconstruction is greatly enhanced. Both simulation and hardware experimental results show significant improvement of the proposed method over the state-of-the-art.

  4. Self-adaptive image reconstruction inspired by insect compound eye mechanism.

    PubMed

    Zhang, Jiahua; Shi, Aiye; Wang, Xin; Bian, Linjie; Huang, Fengchen; Xu, Lizhong

    2012-01-01

    Inspired by the mechanism of imaging and adaptation to luminosity in insect compound eyes (ICE), we propose an ICE-based adaptive reconstruction method (ARM-ICE), which can adjust the sampling vision field of the image according to the ambient light intensity. With ARM-ICE, the target scene can be compressively sampled through multiple independent channels. Meanwhile, ARM-ICE can regulate the visual field of sampling to control imaging according to the ambient light intensity. Based on the compressed sensing joint sparse model (JSM-1), we establish an information processing system for ARM-ICE. The simulation of a four-channel ARM-ICE system shows that the new method improves the peak signal-to-noise ratio (PSNR) and resolution of the reconstructed target scene under two different cases of light intensity. Furthermore, there is no distinct block effect in the result, and the edge of the reconstructed image is smoother than that obtained by the other two reconstruction methods in this work.

  5. Structured Sparse Method for Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhu, Feiyun; Wang, Ying; Xiang, Shiming; Fan, Bin; Pan, Chunhong

    2014-02-01

    Hyperspectral Unmixing (HU) has received increasing attention in the past decades due to its ability to unveil information latent in hyperspectral data. Unfortunately, most existing methods fail to take advantage of the spatial information in the data. To overcome this limitation, we propose a Structured Sparse regularized Nonnegative Matrix Factorization (SS-NMF) method based on the following two aspects. First, we incorporate a graph Laplacian to encode the manifold structures embedded in the hyperspectral data space. In this way, highly similar neighboring pixels can be grouped together. Second, the lasso penalty is employed in SS-NMF because pixels in the same manifold structure are sparsely mixed by a common set of relevant bases. These two factors act as a new structured sparse constraint. With this constraint, our method can learn a compact space, where highly similar pixels are grouped to share correlated sparse representations. Experiments on real hyperspectral data sets with different noise levels demonstrate that our method outperforms the state-of-the-art methods significantly.
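
    A sketch in the spirit of SS-NMF: multiplicative updates for NMF augmented with a graph-Laplacian term and an l1 (lasso) penalty on the abundances. The update form, parameter names, and weights lam and mu are assumptions patterned on standard graph-regularized NMF, not the authors' exact algorithm.

      import numpy as np

      def ss_nmf_step(V, W, H, A, lam=0.1, mu=0.01, eps=1e-12):
          # One multiplicative update for
          #   min ||V - W H||_F^2 + lam * Tr(H L H^T) + mu * ||H||_1,
          # where V (bands x pixels) holds the hyperspectral data, W the basis,
          # H the abundances, and L = D - A the graph Laplacian over pixels.
          Dg = np.diag(A.sum(axis=1))
          W *= (V @ H.T) / np.maximum(W @ (H @ H.T), eps)
          H *= (W.T @ V + lam * H @ A) / np.maximum(W.T @ W @ H + lam * H @ Dg + mu, eps)
          return W, H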

  6. Sparse matrix orderings for factorized inverse preconditioners

    SciTech Connect

    Benzi, M.; Tuama, M.

    1998-09-01

    The effect of reorderings on the performance of factorized sparse approximate inverse preconditioners is considered. It is shown that certain reorderings can be very beneficial both in the preconditioner construction phase and in terms of the rate of convergence of the preconditioned iteration.

  7. Learning Stable Multilevel Dictionaries for Sparse Representations.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2015-09-01

    Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.

  8. SAR Image Despeckling Via Structural Sparse Representation

    NASA Astrophysics Data System (ADS)

    Lu, Ting; Li, Shutao; Fang, Leyuan; Benediktsson, Jón Atli

    2016-12-01

    A novel synthetic aperture radar (SAR) image despeckling method based on structural sparse representation is introduced. The proposed method utilizes the fact that different regions in SAR images correspond to varying terrain reflectivity. Therefore, SAR images can be split into a heterogeneous class (with varied terrain reflectivity) and a homogeneous class (with constant terrain reflectivity). In the proposed method, different sparse representation based despeckling schemes are designed by combining the different region characteristics in SAR images. For heterogeneous regions with rich structure and texture information, structural dictionaries are learned to appropriately represent the varied structural characteristics. Specifically, each patch in these regions is sparsely coded with the best-fitted structural dictionary, so that good structure preservation is obtained. For homogeneous regions without rich structure and texture information, the highly redundant photometric self-similarity is exploited to suppress speckle noise without introducing artifacts. This is achieved by first learning the sub-dictionary and then simultaneously sparse coding each group of photometrically similar image patches. Visual and objective experimental results demonstrate the superiority of the proposed method over the state-of-the-art methods.

  9. Maxdenominator Reweighted Sparse Representation for Tumor Classification

    PubMed Central

    Li, Weibiao; Liao, Bo; Zhu, Wen; Chen, Min; Peng, Li; Wei, Xiaohui; Gu, Changlong; Li, Keqin

    2017-01-01

    The classification of tumors is crucial for the proper treatment of cancer. The sparse representation-based classifier (SRC) exhibits good classification performance and has been successfully used to classify tumors using gene expression profile data. In this study, we propose a three-step maxdenominator reweighted sparse representation classification (MRSRC) method to classify tumors. First, we extract a set of metagenes from the training samples. These metagenes can capture the structures inherent to the data and are more effective for classification than the original gene expression data. Second, we use a reweighted regularization method to obtain the sparse representation coefficients. Reweighted regularization can enhance sparsity and obtain better sparse representation coefficients. Third, we classify the data by utilizing a maxdenominator residual error function. The maxdenominator strategy can reduce the residual error and improve the accuracy of the final classification. Extensive experiments using publicly available gene expression profile data sets show that the performance of MRSRC is comparable with or better than that of many existing representative methods. PMID:28393883

  10. Application of sparse array and MIMO in near-range microwave imaging

    NASA Astrophysics Data System (ADS)

    Qi, Yaolong; Wang, Yanping; Tan, Weixian; Hong, Wen

    2011-11-01

    Near-range microwave imaging systems have broad application prospects in the fields of concealed weapon detection, biomedical imaging, nondestructive testing, etc. In this paper, the techniques of MIMO and sparse line arrays are applied to near-range microwave imaging, which can greatly reduce the complexity of imaging systems. In detail, the paper establishes a two-dimensional near-range MIMO imaging geometry and the corresponding echo model, where the imaging geometry is formed by arranging a sparse antenna array in the azimuth direction and transmitting broadband signals in the range direction; then, by analyzing the relationship between MIMO and the convolution principle, the paper develops a method of arranging a sparse line array that is equivalent to a full array; the paper also derives the backprojection algorithm for the near-range MIMO imaging geometry. Finally, the imaging geometry and the corresponding imaging algorithm proposed in this paper are investigated and verified by means of theoretical analysis and numerical simulations.
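
    The MIMO/convolution equivalence exploited here says that the two-way pattern of sparse transmit and receive arrays matches a full virtual array whose element positions are all pairwise sums of transmit and receive positions. A tiny numeric illustration with hypothetical positions, not the paper's geometry:

      import numpy as np

      # Element positions in units of half-wavelength (illustrative values).
      tx = np.array([0.0, 1.0, 2.0])                 # dense transmit triplet
      rx = np.array([0.0, 3.0, 6.0])                 # sparse receive array
      virtual = np.unique(np.add.outer(tx, rx).ravel())
      print(virtual)                                 # [0. 1. ... 8.]: a filled 9-element aperture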

  11. Low-dose computed tomography image denoising based on joint wavelet and sparse representation.

    PubMed

    Ghadrdan, Samira; Alirezaie, Javad; Dillenseger, Jean-Louis; Babyn, Paul

    2014-01-01

    Image denoising and signal enhancement are among the most challenging issues in low-dose computed tomography (CT) imaging, and sparse representation methods have shown initial promise for these applications. In this work we present a wavelet-based sparse representation denoising technique utilizing dictionary learning and clustering. Wavelets are used to extract the most suitable features in the images, yielding accurate dictionary atoms for the denoising algorithm. To improve the results we also lower the number of clusters, which reduces computational complexity. In addition, a single-image noise-level estimation is developed to update the cluster centers at higher PSNRs. Our results, along with the computational efficiency of the proposed algorithm, clearly demonstrate its improvement over other clustering-based sparse representation (CSR) and K-SVD methods.
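
    A minimal sketch of the pipeline follows, assuming 8x8 patches, one-level Haar wavelet features, four clusters, and sklearn's MiniBatchDictionaryLearning standing in for the authors' dictionary learning; all sizes are illustrative.

```python
# Cluster patches on wavelet features, then learn a dictionary per cluster.
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)
patches = rng.standard_normal((500, 8, 8))       # stand-in for CT image patches

def wavelet_features(p):
    cA, (cH, cV, cD) = pywt.dwt2(p, "haar")      # one-level 2-D Haar transform
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

feats = np.array([wavelet_features(p) for p in patches])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)

for k in range(4):                               # one dictionary per cluster
    X = patches[labels == k].reshape(-1, 64)
    dl = MiniBatchDictionaryLearning(n_components=32,
                                     transform_algorithm="omp",
                                     transform_n_nonzero_coefs=4,
                                     random_state=0)
    code = dl.fit_transform(X)                   # sparse codes of cluster members
    denoised = code @ dl.components_             # sparse approximation = denoised patches
```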

  12. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

    Sparse learning enables dimension reduction and efficient modeling of high-dimensional signals and images, but it may need to be tailored to specific applications and datasets. Here we use sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR) that identifies the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure, in which additional criteria make the learned atoms more consistent across subjects. We identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population and offers a new, efficient fMRI analysis method.
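
    One simplified reading of the group-wise consistency idea is sketched below: learn one dictionary per subject, then keep atoms that have a highly correlated counterpart in every other subject's dictionary. This is an interpretation for illustration, not the authors' exact ESR procedure, and the data shapes are toy values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(2)
subjects = [rng.standard_normal((200, 50)) for _ in range(3)]  # toy fMRI: time x voxels

dicts = []
for X in subjects:
    dl = MiniBatchDictionaryLearning(n_components=10, random_state=0)
    dl.fit(X)
    D = dl.components_
    dicts.append(D / np.linalg.norm(D, axis=1, keepdims=True))  # unit-norm atoms

def consistent_atoms(dicts, rho=0.8):
    """Atoms of subject 0 with a correlated match (|cos| > rho) in all others."""
    keep = []
    for i, atom in enumerate(dicts[0]):
        if all(np.max(np.abs(D @ atom)) > rho for D in dicts[1:]):
            keep.append(i)
    return keep

print(consistent_atoms(dicts))  # likely empty for random data; real fMRI differs
```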

  13. Efficient holoscopy image reconstruction.

    PubMed

    Hillmann, Dierck; Franke, Gesa; Lührs, Christian; Koch, Peter; Hüttmann, Gereon

    2012-09-10

    Holoscopy is a tomographic imaging technique that combines digital holography and Fourier-domain optical coherence tomography (OCT) to obtain tomograms with diffraction-limited resolution and uniform sensitivity over several Rayleigh lengths. The lateral image information is calculated from the spatial interference pattern formed by light scattered from the sample and a reference beam; the depth information is obtained from the spectral dependence of the recorded digital holograms. Numerous digital holograms are acquired at different wavelengths and reconstructed for a common plane in the sample, after which standard Fourier-domain OCT signal processing achieves depth discrimination. Here we describe and demonstrate an optimized data reconstruction algorithm for holoscopy, related to the inverse-scattering reconstruction of wavelength-scanned full-field optical coherence tomography data. Instead of calculating a regularized pseudoinverse of the forward operator, the recorded optical fields are propagated back into the sample volume. In one processing step the high-frequency components of the scattering potential are reconstructed on a non-equidistant grid in three-dimensional spatial frequency space; a Fourier transform then yields an OCT-equivalent image of the object structure. In contrast to the original holoscopy reconstruction with backpropagation and Fourier transform with respect to the wavenumber, the required processing time depends neither on the confocal parameter nor on the depth of the volume. For an imaging NA of 0.14, the processing time decreased by a factor of 15; at higher NA the gain in reconstruction speed may reach two orders of magnitude.
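
    The backpropagation step can be illustrated with a standard angular-spectrum propagator. This is a generic sketch rather than the authors' optimized algorithm, and the wavelength, pixel pitch, and propagation distance are illustrative values.

```python
# Propagate a recorded 2-D complex field back to a plane inside the sample.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Angular-spectrum propagation of a square complex field by distance z (m)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                   # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1.0 / wavelength) ** 2 - FX**2 - FY**2     # squared longitudinal frequency
    kz = 2j * np.pi * np.sqrt(np.maximum(kz2, 0.0))   # evanescent components dropped
    H = np.exp(kz * z)                                # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

hologram = np.exp(1j * np.random.default_rng(3).uniform(0, 2 * np.pi, (256, 256)))
in_sample = angular_spectrum_propagate(hologram, 850e-9, 5e-6, -100e-6)  # z < 0: backpropagate
```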

  14. Iterative CT reconstruction using shearlet-based regularization

    NASA Astrophysics Data System (ADS)

    Vandeghinste, Bert; Goossens, Bart; Van Holen, Roel; Vanhove, Christian; Pizurica, Aleksandra; Vandenberghe, Stefaan; Staelens, Steven

    2012-03-01

    In computed tomography, it is important to reduce image noise without increasing the acquisition dose. Extensive research has been done on total variation (TV) minimization for image denoising and sparse-view reconstruction. However, TV minimization denoises simple images (with little texture) well but loses texture information when applied to more complex images. Since medical imaging often involves textured images, TV may not be the best choice. Our objective is to find a regularization term that outperforms TV for sparse-view reconstruction and image denoising in general. A recently developed efficient solver for convex problems, based on a split-Bregman approach, can incorporate regularization terms other than TV. In this work, a proof-of-concept study demonstrates the use of the discrete shearlet transform as a sparsifying transform within this solver for CT reconstruction; in particular, the regularization term is the 1-norm of the shearlet coefficients. We compared the shearlet approach to traditional TV on sparse-view and low-count simulated and measured preclinical data. Shearlet-based regularization does not outperform TV-based regularization for all datasets: reconstructed images exhibit small aliasing artifacts in sparse-view reconstruction problems but show no staircasing effect, resulting in a slightly higher resolution than with TV-based regularization.
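
    A minimal split-Bregman sketch for the denoising case follows, with an orthonormal DCT standing in for the shearlet transform (a stated assumption: the closed-form x-update below relies on the transform being orthonormal, and the real method also includes the CT forward operator).

```python
# Split-Bregman for  min_x 0.5*||x - b||^2 + lam*||Wx||_1  with orthonormal W.
import numpy as np
from scipy.fft import dct, idct

W  = lambda x: dct(x, norm="ortho")    # analysis transform (stand-in for shearlets)
Wt = lambda c: idct(c, norm="ortho")   # synthesis (inverse) transform

def shrink(c, t):
    """Soft-thresholding: the proximal map of the l1 norm."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def split_bregman_denoise(b, lam=0.1, mu=1.0, n_iter=50):
    x = b.copy()
    d = np.zeros_like(b)               # auxiliary variable for Wx
    v = np.zeros_like(b)               # Bregman (dual) variable
    for _ in range(n_iter):
        # x-update: (I + mu*W^T W) x = b + mu*W^T(d - v); orthonormal W => closed form
        x = (b + mu * Wt(d - v)) / (1.0 + mu)
        d = shrink(W(x) + v, lam / mu) # d-update: elementwise shrinkage
        v = v + W(x) - d               # Bregman update
    return x

rng = np.random.default_rng(4)
clean = np.sin(np.linspace(0, 8 * np.pi, 512))
denoised = split_bregman_denoise(clean + 0.3 * rng.standard_normal(512))
```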

  15. Enhancement of snow cover change detection with sparse representation and dictionary learning

    NASA Astrophysics Data System (ADS)

    Varade, D.; Dikshit, O.

    2014-11-01

    Sparse representation and decoding are often used for denoising images and compressing images with respect to their inherent features. In this paper, we adopt a methodology that enhances a snow cover change map through sparse representation with a K-SVD-trained dictionary and sparse decoding; pixels falsely characterized as "changes" are eliminated by this approach. The preliminary change map was generated from differenced NDSI or S3 maps for Resourcesat-2 and Landsat-8 OLI imagery, respectively. These maps are extracted into patches, and the Discrete Cosine Transform (DCT) is used to generate an initial dictionary, which is then trained with the K-SVD approach. The trained dictionary is used for sparse coding of the change map via the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map is considerably smoother and represents the snow cover changes more accurately. The enhanced change map is segmented using k-means to discriminate between changed and unchanged pixels. The segmented map is compared, first with the difference of Support Vector Machine (SVM)-classified NDSI maps, and second with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8, and the k-hat statistic is computed to determine the accuracy of the proposed approach.
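
    A sketch of the enhancement pipeline follows, with sklearn's MiniBatchDictionaryLearning standing in for K-SVD (an assumption for portability) and all sizes illustrative.

```python
# Overcomplete DCT init -> dictionary training -> OMP coding -> k-means segmentation.
import numpy as np
from scipy.fft import idct
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def overcomplete_dct_dictionary(patch=8, atoms_1d=12):
    """Separable overcomplete 2-D DCT dictionary: atoms_1d**2 atoms as rows."""
    basis = idct(np.eye(atoms_1d), norm="ortho", axis=0)[:patch]  # patch x atoms_1d
    D = np.kron(basis, basis).T                                   # atoms x patch**2
    return D / np.linalg.norm(D, axis=1, keepdims=True)

rng = np.random.default_rng(5)
change_map = rng.random((64, 64))                   # stand-in differenced NDSI map
patches = np.array([change_map[i:i + 8, j:j + 8].ravel()
                    for i in range(0, 57, 4) for j in range(0, 57, 4)])

dl = MiniBatchDictionaryLearning(n_components=144,
                                 dict_init=overcomplete_dct_dictionary(),
                                 random_state=0)
dl.fit(patches)                                     # training (K-SVD in the paper)
codes = sparse_encode(patches, dl.components_, algorithm="omp", n_nonzero_coefs=4)
enhanced = codes @ dl.components_                   # smoothed change-map patches

# Pixel-wise k-means into change / no-change classes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    enhanced.ravel().reshape(-1, 1))
```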

  16. Low-rank approximations with sparse factors II: Penalized methods with discrete Newton-like iterations

    SciTech Connect

    Zhang, Zhenyue; Zha, Hongyuan; Simon, Horst

    2006-07-31

    In this paper, we develop numerical algorithms for computing sparse low-rank approximations of matrices and provide a detailed error analysis of the proposed algorithms together with numerical experiments. The low-rank approximations are constructed in a factored form, with the degree of sparsity of the factors controlled by user-specified parameters. We cast the sparse low-rank approximation problem in the framework of penalized optimization problems and discuss approximation schemes for the penalized problem that are more amenable to numerical computation, including an analysis of the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by the new methods with those obtained using the methods in the earlier paper and several other existing methods. Numerical examples show that the penalized methods are more robust and produce approximations whose factors have fewer columns and are sparser.
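
    The factored objective can be made concrete with a simple proximal alternating sketch of min over (X, Y) of 0.5*||A - X*Y^T||_F^2 + lam*(||X||_1 + ||Y||_1). This is not the paper's Newton-like iteration, only an assumption-level illustration of the penalized sparse low-rank form.

```python
import numpy as np

def shrink(M, t):
    """Soft-thresholding: proximal map of the (entrywise) l1 penalty."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def sparse_low_rank(A, rank=5, lam=0.05, n_iter=200):
    m, n = A.shape
    rng = np.random.default_rng(0)
    X, Y = rng.standard_normal((m, rank)), rng.standard_normal((n, rank))
    for _ in range(n_iter):
        R = X @ Y.T - A                                   # residual
        sx = 1.0 / (np.linalg.norm(Y.T @ Y, 2) + 1e-12)   # step from Lipschitz constant
        X = shrink(X - sx * (R @ Y), lam * sx)            # proximal gradient step on X
        R = X @ Y.T - A
        sy = 1.0 / (np.linalg.norm(X.T @ X, 2) + 1e-12)
        Y = shrink(Y - sy * (R.T @ X), lam * sy)          # proximal gradient step on Y
    return X, Y

A = np.random.default_rng(6).standard_normal((40, 30))
X, Y = sparse_low_rank(A)
print(np.mean(X == 0), np.mean(Y == 0))                   # sparsity of the factors
```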

  17. Implementing an Accurate and Rapid Sparse Sampling Approach for Low-Dose Atomic Resolution STEM Imaging

    SciTech Connect

    Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; Browning, Nigel D.

    2016-10-17

    Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is often the sample stability, rather than the microscope, that limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials, it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate delivered to the sample during imaging. By characterizi